Experiences with Multi-threading and Dynamic Class Loading in a Java Just-In-Time Compiler
Daryl Maier, Pramod Ramarao, Mark Stoodley, Vijay Sundaresan
TestaRossa JIT compiler team
IBM Toronto Laboratory
Outline
- Overview of J9/TestaRossa
- Brief overview of our paper
- Class loading/unloading
- Profiling in a multi-threaded environment
- Code patching
- Focus on code patching
- Because it’s cool!
- Summary
J9 virtual machine
- High performance production Java VM from IBM
- Java 1.4.2 and Java 5.0 compliant
- Common code base for J2SE and J2ME products
- Support on 12 different processor/OS platforms
- Used in hundreds of IBM products including
- Websphere Application Server (WAS 6.x)
- Rational Application Developer (RAD)
- DB2
- XML parsers
TR (TestaRossa) JIT compiler
- Just-In-Time (JIT) compiler for J9 VM
- Fast startup time
- Adaptive compilation: multiple optimization levels
- Target ‘hot spots’ with higher opt level
- Classical and Java-specific optimizations
- Speculative optimizations
- Low overhead PDF (profiling) framework
- Code patching in many scenarios
# Program characteristics
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Loaded Classes</th>
<th>Unloaded Classes</th>
<th>Number of threads</th>
</tr>
</thead>
<tbody>
<tr>
<td>SPECjvm98</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>compress</td>
<td>383</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>jess</td>
<td>523</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>db</td>
<td>378</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>javac</td>
<td>537</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>mpegaudio</td>
<td>431</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>mtrt</td>
<td>404</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>jack</td>
<td>429</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>SPECjbb2000</td>
<td>1098</td>
<td>0</td>
<td>8*</td>
</tr>
<tr>
<td>Trade6</td>
<td>11639</td>
<td>341</td>
<td>>> 10</td>
</tr>
</tbody>
</table>
- Middleware programs load an order of magnitude more classes
- To avoid memory leaks, classes must be unloaded on an ongoing basis
- Lots of active threads executing tons of code: no method-level hot spots
- Targeting only SPECjvm98/SPECjbb2000 would ignore critical correctness and performance issues!
The One Page Paper Overview
- Class loading and unloading
- Unloading a class requires significant clean-up
- Danger of class references materialized in the code
- Profiling when there are a lot of threads
- Must ensure timely recompilation and good scalability
- Code patching
- Resolution, efficient dispatch, recompilation, speculative optimizations
- Tricky stuff
Code Patching Overview
- Code patching scenarios, from easy to hard
1. All threads stopped (scalability suffers)
2. Single site, many active threads
3. Multiple sites, many active threads
- Patch site alignment problems
- Trade-offs impact designs on each platform
- e.g. number of PIC slots
Code Patching Example: Intel IA32 Field Resolution
- **Store to unresolved field**
- Field offset unknown at compile-time
- **When writing instruction, offset initialized to 0**
- Opcode, operand bytes assume largest offset (4B)
Code Patching Example Cont...
- **Resolution by site-specific generated code**
- Calls a VM function to resolve the field
```
0x803F:  E8 3B 01 00 00       ; 5-byte CALL resolveField803F, overlaid
                              ; on the original instruction bytes

resolveField803F:
0x8180:  E8 DF 23 97 00       ; CALL resolveField (VM helper)
0x8185:  89 82 00 00 00 00    ; original instruction bytes: the store,
                              ; with its 4-byte offset initialized to 0
```
The instruction bytes are copied to site-specific resolution code out of line from the main instruction sequence (which calls the VM's resolveField function), and the original bytes are overlaid with a 5-byte CALL that reaches the site-specific resolution code.
Code Patching Example Cont…
- ‘resolveField’ determines field offset at runtime
The call to the site-specific resolution code can now be replaced with the 6 bytes from the original instruction...
Code Patching Example Cont…
- **BUT there’s a problem…**
- Atomic updates needed to guarantee other threads execute correct code
- X86 can only patch $2^N$ bytes atomically: example needs 6B
- **Solution: atomically overlay a thread barrier (self loop)**
- JMP -2 instruction for X86, similar on other platforms
- **Guarantee all processors observe barrier before patching**
- Only one thread resolves the field
- MFENCE, CLFLUSH instructions for X86
Code Patching Example Cont...
1. The spin loop prevents other threads from executing the instruction while it is being patched
2. It is atomically inserted with a LOCK CMPXCHG instruction followed by a memory fence
3. If the CMPXCHG failed, branch to 0x803F: another thread has already installed the self-loop
4. The field offset is copied into the original instruction (spin loop still present), followed by a memory fence
5. Finally, the spin loop is removed with a single 2-byte write
We’re done! ...or are we?
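The LOCK CMPXCHG guard can be modeled in Java with a compare-and-set. This is a toy sketch under stated assumptions, not TR's actual code: the patch site's state is an integer rather than instruction bytes, and `raceWinners` is a hypothetical helper that races several threads at one site. Exactly one thread's CAS succeeds, so exactly one thread performs the patch.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the CMPXCHG guard at a patch site: only the thread whose
// compare-and-set succeeds installs the self-loop and does the patch.
class PatchGuard {
    static final int UNPATCHED = 0, SELF_LOOP = 1, PATCHED = 2;
    final AtomicInteger site = new AtomicInteger(UNPATCHED);

    // Models LOCK CMPXCHG on the first two instruction bytes: succeeds
    // for exactly one thread; losers branch back to the (now spinning) site.
    boolean tryInstallSelfLoop() {
        return site.compareAndSet(UNPATCHED, SELF_LOOP);
    }

    // Models copying the resolved offset and removing the spin loop
    // with the final 2-byte write.
    void finishPatch() {
        site.set(PATCHED);
    }

    // Races `threads` threads at one patch site; returns how many won.
    static int raceWinners(int threads) {
        PatchGuard g = new PatchGuard();
        AtomicInteger winners = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                if (g.tryInstallSelfLoop()) {
                    winners.incrementAndGet();
                    g.finishPatch();
                }
            });
        }
        for (Thread t : ts) t.start();
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return winners.get();
    }
}
```

However many threads race, the CAS admits a single winner, which mirrors "only one thread resolves the field" on the slide.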
Code Patching Example: Still not correct
- **Patched bytes can’t straddle patching boundary**
- Not all instruction stores guaranteed to be atomically visible
- The patching boundary is the alignment granularity that patched code locations must not straddle; empirically:
- 8-bytes on AMD K7 or K8 cores
- 32-bytes on Intel Pentium 3
- 64-bytes on Intel Pentium 4 and EM64T
- **Insert NOPs to align patchable instructions**
- e.g. spin loop JMP-2 instruction can’t straddle patching boundary
- Increases code footprint by 1-2% on AMD: need more NOPs
- Extra NOPs can have surprising performance effects (!)
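The alignment rule can be sketched as a small helper. This is a hypothetical illustration (the real code generator works on its own instruction representation): given a patch site's address and size and the platform's patching boundary, it returns how many 1-byte NOPs to emit so the site does not straddle a boundary.

```java
// Hypothetical sketch of the NOP-padding computation: a patch site of
// `size` bytes starting at `addr` must not straddle a multiple of
// `boundary` (8 on AMD K7/K8, 64 on Pentium 4 per the slides).
class NopAlign {
    // Returns the number of 1-byte NOPs to emit before the patch site.
    static int padBefore(long addr, int size, int boundary) {
        long offsetInLine = addr % boundary;
        // The site straddles a boundary if it starts in one
        // boundary-sized window but ends in the next one.
        if (offsetInLine + size > boundary) {
            return (int) (boundary - offsetInLine); // push start to next window
        }
        return 0;
    }
}
```

For example, a 2-byte spin-loop site whose address is 7 mod 8 needs one NOP on an 8-byte-boundary core, while the same site at an 8-aligned address needs none.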
Code Patching Example Cont…
- Single byte NOP inserted to align spin loop site
- Patching infrastructure otherwise unaffected
First two bytes no longer straddle a patching boundary
Code Patching On Other Architectures
- **pSeries**
- Uniform instruction length
- Challenges:
- Multiple instructions required for immediate addresses
- **zSeries**
- Variable instruction length
- Challenges:
- Overcoming I-cache coherence costs, efficiency of atomic instructions
Summary
- Middleware applications are highly multi-threaded and load and unload LOTS of classes
- Implications for patching, profiling, optimization design
- Our paper describes
- Class unloading pain
- Profiling correctly when lots of threads around
- Code patching trickiness
Backup Slides
Contributions of Our Paper
- Highlight issues relevant in a production JIT compiler running multi-threaded and/or large applications
- Class loading and unloading
- Best code patching techniques vary by platform
- Low overhead profiling with multiple active threads
- Describe our solutions to these problems
Class loading and CHTable
- **Class loading is not a ‘stop the world’ event**
- Allows other Java threads to make progress while one thread loads a class
- Allows compilation thread to compile while classes are being loaded
- **JIT compiler maintains a class hierarchy table**
- Superclass/interface relationships are updated
- Compensate for violated run time assumptions
- All updates performed after acquiring CHTable lock
- Compiler does not hold CHTable lock throughout a compilation
- Compile-time CHTable queries must acquire CHTable lock
Class loading and CHTable
- **JIT compiler optimizations using class hierarchy table**
- Guarded devirtualization
- Conditionally convert virtual call to direct call
- Assumption is registered in the CHTable
- If assumption is violated, compensate at run time
- Patch code to execute the backup code (virtual call)
- Invariant argument pre-existence
- Devirtualize virtual calls using an invariant parameter as a receiver
- Re-compile the method containing devirtualized calls if assumption is violated
- **Class hierarchy might have changed while a compilation was in progress**
- Acquire CHTable lock just before binary code is generated
- Generate binary code
- Compensate for any assumptions violated during the compilation
- Release CHTable lock
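The end-of-compilation protocol above can be sketched as follows. All names here are hypothetical (TR's real compiler compensates for violated assumptions rather than simply discarding the compilation): the CHTable lock is held only around the final code-generation step, not for the whole compilation.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the finish-compilation protocol: acquire the
// CHTable lock just before emitting binary code, re-validate the
// assumptions registered during compilation, then release the lock.
class CompilationFinisher {
    static final ReentrantLock chTableLock = new ReentrantLock();

    interface Codegen { void generate(); }

    // Returns true if binary code was emitted; false means the class
    // hierarchy changed under us and we must compensate (e.g. retry).
    static boolean finishCompilation(Codegen codegen,
            java.util.function.BooleanSupplier assumptionsStillValid) {
        chTableLock.lock();                  // hierarchy frozen from here on
        try {
            if (!assumptionsStillValid.getAsBoolean()) {
                return false;                // assumptions violated mid-compile
            }
            codegen.generate();              // emit binary code under the lock
            return true;
        } finally {
            chTableLock.unlock();
        }
    }
}
```

Holding the lock only for this window lets class loading proceed concurrently with the bulk of a compilation.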
Garbage collection and class unloading in the J9 VM
- **Class unloading**: memory allocated for class is re-claimed and the class ‘dies’
- **Class unloading done during garbage collection**
- **Garbage collection is a ‘stop the world’ event in J9**
- Co-operative model (Java threads execute ‘yield points’ to check if GC is pending)
- Java classes are also objects on the heap and can therefore be collected (and unloaded)
- Class objects are never ‘moved’, i.e. a class is always at the same address throughout its lifetime
- **All classes in a class loader unloaded together**
- **A class is unloaded when**
- No objects of that class type are ‘live’ on the Java heap
- No loaded bytecodes explicitly refer to the class
Class unloading in the J9 VM
Impacts the JIT compiler significantly
- Class hierarchy table
- Profiling data
- Compilation issues
- Code memory reclamation
- Persistent data reclamation
- ‘Materialized’ references in generated code
Class unloading and ‘materialized’ references
Interface I { public void foo(); }
class C1 implements I
{
public void foo() { System.out.println("In C1.foo"); }
}
class C2 implements I
{
public void foo() { System.out.println("In C2.foo"); }
}
Class unloading and ‘materialized’ references
public I createC1orC2(int x) {
if (x % 2 != 0)
return new C1();
else
return new C2();
}
public void bar() {
x++;
I obj = this.createC1orC2(x);
obj.foo(); // Polymorphic interface call
}
Class unloading and ‘materialized’ references
De-virtualized interface call conditionally
```java
public void bar() {
x++;
I obj = this.createC1orC2(x);
if (obj.class == C1) // ‘materialized’ reference to C1
C1.foo(); // called with obj as the receiver object
else if (obj.class == C2) // ‘materialized’ reference to C2
C2.foo(); // called with obj as the receiver object
else
obj.foo(); // Polymorphic interface call
}
```
Class unloading and ‘materialized’ references
After replacing ‘materialized’ reference when C1 is unloaded
```java
public void bar() {
x++;
I obj = this.createC1orC2(x);
if (obj.class == -1) // 'materialized' reference to C1 changed
C1.foo(); // called with obj as the receiver object
else if (obj.class == C2) // 'materialized' reference to C2
C2.foo(); // called with obj as the receiver object
else
obj.foo(); // Polymorphic interface call
}
```
Class unloading and ‘materialized’ references
- List of code locations containing ‘materialized’ references is maintained for each class
- Addition to the list is done both at compile time and at run time
- Only add to the list if the class loader of ‘materialized’ class is different from the class loader of some other class referred to in the constant pool
- Compare with class loader of method being compiled
- Compare with class loader of super class/interface referred to in the constant pool
- Patching can be done without any race conditions because all threads have yielded for a GC
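A toy model of this bookkeeping, with illustrative names (the real list holds code addresses in generated machine code, modeled here as indices into a `long[]`): each class records the code locations that embed its address, and at unload time, during the stop-the-world GC, every site is overwritten with -1 so the guarded class test can never match again.

```java
import java.util.*;

// Hypothetical model of the 'materialized reference' lists: one list of
// patch sites per class, patched to -1 when the class is unloaded.
class MaterializedRefs {
    final long[] code;  // stands in for generated code containing class addresses
    final Map<String, List<Integer>> sitesByClass = new HashMap<>();

    MaterializedRefs(long[] code) { this.code = code; }

    // Called at compile time or run time when a class address is embedded.
    void recordSite(String className, int codeIndex) {
        sitesByClass.computeIfAbsent(className, k -> new ArrayList<>())
                    .add(codeIndex);
    }

    // Called during GC; safe without atomics because all Java threads
    // have yielded for the stop-the-world collection.
    void onClassUnload(String className) {
        for (int site : sitesByClass.getOrDefault(className, List.of())) {
            code[site] = -1L;
        }
        sitesByClass.remove(className);
    }
}
```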
Class unloading and CHTable
- Remove unloaded classes from superclasses/interfaces in CHTable
- Grouping unloading requests avoids excessive traversals over data structures
- Problematic scenario
- Interface I is implemented by N classes
- Each implemented class loaded by a different class loader (N class loaders)
- Each class loader is unloaded and CHTable updates are performed independently
- O(N²) to remove all implementors of I
- We have seen N ~ 10,000 in customer applications
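The batched cleanup can be sketched as below (a simplified model with hypothetical names, not TR's CHTable): instead of scanning every implementor list once per unloaded class (O(N) scans per class, O(N²) total for N implementors of one interface), collect the whole group of unloaded classes into a set and strip them in a single pass per list.

```java
import java.util.*;

// Hypothetical sketch of grouped CHTable cleanup: one traversal per
// implementor list removes every class in the unloaded batch at once.
class CHTableCleanup {
    // interface name -> implementor class names
    final Map<String, List<String>> implementors = new HashMap<>();

    void removeUnloadedBatch(Set<String> unloaded) {
        for (List<String> list : implementors.values()) {
            // Set membership is O(1), so this pass is O(list size),
            // independent of how many classes are being unloaded.
            list.removeIf(unloaded::contains);
        }
    }
}
```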
Class unloading and compilation
- **Asynchronous compilation**
- Java threads queue methods for compilation and continue executing (in most cases)
- **Class containing a queued method could be unloaded before it is actually compiled**
- Solution: Walk the compilation queue every time a class is unloaded and delete methods belonging to the class
- **Class might be unloaded when a compilation is in progress**
- Solution: Check if an unloaded class was used by the compilation in any manner; if so, abort the compilation
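The queue walk can be sketched as follows (hypothetical names; TR's real queue entries carry more state):

```java
import java.util.*;

// Hypothetical sketch: when a class is unloaded, walk the compilation
// queue and drop every queued method that belongs to it.
class CompQueue {
    record QueuedMethod(String className, String methodName) {}

    final Deque<QueuedMethod> queue = new ArrayDeque<>();

    void onClassUnload(String className) {
        // Single pass over the queue; entries for the dead class vanish.
        queue.removeIf(m -> m.className().equals(className));
    }
}
```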
Class unloading and profiling
- Minimize work at run time by moving as much work as possible to compile time
- Profile data may be for Java bytecodes that have been unloaded
  - Raw data is generated while the program runs
  - Periodically, raw data is read and 'processed'
  - Bytecodes that generated raw data might have been unloaded
  - Solution: purge all raw data when class unloading occurs
  - What about 'processed' data for unloaded code?
  - Solution: maintain bytecode address ranges for unloaded code and avoid answering compile-time profiling queries for bytecodes in those ranges
- Profile data may contain references to unloaded classes
  - Keep track of unloaded classes' addresses
  - Avoid returning a class whose address matches an unloaded class
- Alternative
  - Cleanse profiling data as unloading occurs (costly at run time?)
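The address-range filter for 'processed' data can be sketched with a sorted map (an illustrative model, not TR's data structure): record each unloaded bytecode range, then reject any profiling query whose bytecode address falls inside one.

```java
import java.util.*;

// Hypothetical sketch of the unloaded-range filter: compile-time
// profiling queries for bytecodes in a recorded range return nothing.
class UnloadedRanges {
    // start address -> end address (exclusive) of unloaded bytecode
    final NavigableMap<Long, Long> ranges = new TreeMap<>();

    void addUnloadedRange(long start, long end) {
        ranges.put(start, end);
    }

    // True if the query address falls inside any recorded range.
    boolean isUnloaded(long bytecodeAddr) {
        Map.Entry<Long, Long> e = ranges.floorEntry(bytecodeAddr);
        return e != null && bytecodeAddr < e.getValue();
    }
}
```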
Class unloading and memory reclamation
- Common tasks like serialization sometimes create class loaders with short lifetimes
- Unbounded memory increase over time (server applications can run for days)
- Re-claim code and data memory for compiled method(s) in unloaded class
- Problem: Might involve expensive searches each time at run time
- Solution: Maintain per-class loader information about compiled methods and persistent data
- Example: check if ‘any’ method belonging to an unloaded class loader was compiled
Profiling
- **When is profiling done**
- Profile methods deemed to be 'hot' based on sampling
- **When a method is chosen to be profiled**
- Compile the method with embedded profiling code
- Execute the method body for a while collecting data
- Recompile the method using profiling data
Profiling in the TR JIT
- Loosely based on Jikes RVM approach
- Arnold et al (PLDI 2001)
- Compiler creates a clone of the method to be profiled
- Clone contains the profiling code
- Transition paths at equivalent points allow flow of control between two bodies
- Original method body executes more frequently
Profiling in the TR JIT (cont…)
- Profiling approach
- Every \( M \) execution paths in the non-profiled version, transition to profiled version
- Execute one execution path in the profiled version and transition back to non-profiled version
- Do these steps \( N \) times
- ‘\( M \)’ is the profiling PERIOD
- 19, 29, 47… (increasing number of back edges)
- ‘\( N \)’ is the profiling COUNT
- 100, 625, 1250, 2500 … (increasing number of back edges)
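The period/count bookkeeping can be modeled single-threaded as below. This is an illustrative sketch (the real counters live in the generated code and are manipulated by the instructions shown later in the deck): roughly every PERIOD back edges, one profiled path executes; after COUNT profiled paths, recompilation is requested.

```java
// Hypothetical single-threaded model of the M/N profiling scheme.
class ProfilingCounters {
    final int PERIOD;          // M: paths between transitions
    int period;                // countdown to the next transition
    int count;                 // N: profiled paths remaining
    boolean recompile = false;

    ProfilingCounters(int periodM, int countN) {
        PERIOD = periodM;
        period = periodM;
        count = countN;
    }

    // Called at each async check in the non-profiled body; returns true
    // when this back edge should transition to the profiled copy.
    boolean onBackEdge() {
        if (recompile) return false;      // profiling already finished
        if (--period < 0) {
            period = PERIOD;              // reset for the next M paths
            if (--count <= 0) recompile = true;  // N paths profiled: recompile
            return true;                  // execute one profiled path
        }
        return false;
    }
}
```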
Preliminaries
- **“Async checks”**
- Inserted at each loop back edge to test if thread needs to yield to GC
- Profiler uses async checks to mark loop back edges
- **“Execution Path”**
- Starts at method entry or an async check
- Ends at method entry or an async check
- **After one execution path is completed in profiled version, return to non-profiled version**
- Ensures execution is not stuck in a loop in profiled version
Preliminaries (cont...)
[Figure: example control-flow graph with blocks 1-7; sample execution paths: 1 --> 2 --> 6, 6 --> 3 --> 5, 5 --> 4 --> 5, 5 --> 6, 6 --> 7 ... --> 1]
Profiling Transitions
[Figure: non-profiled and profiled copies of the same method body (blocks 1-7), with transition paths between equivalent points in the two copies]
Profiling Transitions (cont...)
```
METHOD ENTRY:
    if (recompCounter < 0)   RECOMPILE
    if (profilingCount < 0)  goto N1     // profiling already finished
    profilingPeriod--
    asyncChk
    if (profilingPeriod < 0) goto P1     // transition to profiled copy
N1:
    ...program code...
    asyncChk                             // back edge: repeats the
    goto N1                              // period check above
P1:
    profilingPeriod = PERIOD
    profilingCount--
    if (profilingCount > 0)  goto P2
    recompCounter--                      // profiling done: arm recompilation
    profilingPeriod = MAXINT             // and stop transitioning
P2:
    ...program code...
```
Effects of Multi-threading
- Recompilation may not occur for a long time
- Example interleaving: initially count = 0 and period = MAXINT (a previous profiling round finished, so profilingCount < 0 caused recompCounter-- and profilingPeriod = MAXINT), and Thread3 is at the method entry, about to re-initialize the counters
  1. Thread1 reads period (MAXINT) in order to decrement it
  2. Thread3 sets the initial values: count = 29, period = 625
  3. Thread2 decrements period
  4. Thread1 writes back its stale value: period-- => period = MAXINT-1
- Final state: count = 29 but period = MAXINT-1, so the transition to the profiled copy won't occur for a long time!
Effects of Multi-threading
- Poor scalability with increasing number of threads
- Multiple threads could transition to profiling code
- Possibility of threads manipulating ‘period’ multiple times
- Guarantee of profiling path being executed once every PERIOD paths no longer true
- Imprecision in basic block profiling counts
- Multiple threads may manipulate basic block counts
- Basic block counts may no longer reflect the hotness of an execution path
Profiling in the TR JIT
- To improve scalability, use synchronization to access global 'period' and 'count' variables
- **At Method Entry**
- Synchronization is used to read global variables into thread-local storage
- Basic block counters are also thread-local
- **At Method Exit**
- Global variables are updated from thread-local storage at each method exit under synchronization
- **Adds overhead**
- Each thread now has to allocate extra storage
- Two locking operations introduce runtime overhead
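The thread-local scheme can be sketched as follows. This is a simplified model, not TR's implementation (the real counters live in generated code and on the stack): counts accumulate privately on the hot path and are merged into the globals under a lock only at method exit, so the two locking operations bound the synchronization cost per invocation.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of scalable profiling counters: uncontended
// thread-local accumulation, one locked merge at method exit.
class ScalableCounters {
    static final ReentrantLock lock = new ReentrantLock();
    static long globalBlockCount = 0;

    static void runMethod(int blockExecutions) {
        long localCount = 0;                 // thread-local accumulator
        for (int i = 0; i < blockExecutions; i++) {
            localCount++;                    // uncontended fast path
        }
        lock.lock();                         // single merge at method exit
        try {
            globalBlockCount += localCount;
        } finally {
            lock.unlock();
        }
    }
}
```

Because each thread touches the global only twice per method invocation, no updates are lost and contention stays low as thread counts grow.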
Results
- **Statistics of stack usage and runtime overhead of synchronization in profiled methods**
- Only period and count variables are allocated as thread-local
- All counters are allocated as thread-local (including basic block counts)
- **Average stack usage increase was 14.7% across SPECjvm98 and SPECjbb2000**
- **Runtime overhead was negligible**
## Stack usage
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>NumSlots (baseline)</th>
<th>NumSlots (period/count thread-local)</th>
<th>% Increase</th>
<th>NumSlots (all counters thread-local)</th>
</tr>
</thead>
<tbody>
<tr>
<td>_201_compress</td>
<td>81</td>
<td>87</td>
<td>7.4</td>
<td>348</td>
</tr>
<tr>
<td>_202_jess</td>
<td>100</td>
<td>105</td>
<td>5.0</td>
<td>832</td>
</tr>
<tr>
<td>_209_db</td>
<td>50</td>
<td>58</td>
<td>16.0</td>
<td>260</td>
</tr>
<tr>
<td>_213_javac</td>
<td>258</td>
<td>293</td>
<td>13.5</td>
<td>2462</td>
</tr>
<tr>
<td>_222_mpegaudio</td>
<td>111</td>
<td>121</td>
<td>9.0</td>
<td>311</td>
</tr>
<tr>
<td>_227_mtrt</td>
<td>99</td>
<td>112</td>
<td>13.1</td>
<td>763</td>
</tr>
<tr>
<td>_228_jack</td>
<td>148</td>
<td>211</td>
<td>42.5</td>
<td>1462</td>
</tr>
<tr>
<td>SPECjbb2000</td>
<td>183</td>
<td>203</td>
<td>10.9</td>
<td>954</td>
</tr>
<tr>
<td><strong>Average</strong></td>
<td></td>
<td></td>
<td><strong>14.7</strong></td>
<td><strong>562</strong></td>
</tr>
</tbody>
</table>
Results (cont…)
- Only _202_jess shows some overhead
- Contains many small methods that get profiled
- Runtime overhead in the two multi-threaded benchmarks was negligible
Standardizing the Machine Learning Lifecycle
From experimentation to production with MLflow
mlflow
databricks
Preface
Technology changes quickly. Data science and machine learning (ML) are moving even faster. In the short time since we first published this eBook, businesses across industries have rapidly matured their machine learning operations (MLOps) — implementing ML applications and moving their first models into production. This has turned ML models into corporate assets that need to be managed across the lifecycle.
That’s why MLflow, an open-source platform developed by Databricks, has emerged as a leader in automating the end-to-end ML lifecycle. With 1.8 million¹ downloads a month — and growing support in the developer community — this open-source platform is simplifying the complex process of standardizing and productionizing MLOps. This updated eBook explores the advantages of MLflow and introduces you to the newest component: MLflow Model Registry. You’ll also discover how MLflow fits into the Databricks Unified Data Analytics Platform for data engineering, science and analytics.
---
¹ As of March 2020
Building machine learning models is hard. Putting them into production is harder. Enabling others — data scientists, engineers or even yourself — to reproduce your pipeline and results is equally challenging. How many times have you or your peers had to discard previous work because it was either not documented properly or too difficult to replicate?
Getting models up to speed in the first place is significant enough that it can be easy to overlook long-term management. What does this involve in practice? In essence, we have to compare the results of different versions of ML models along with corresponding artifacts — code, dependencies, visualizations, intermediate data and more — to track what’s running where, and to redeploy and roll back updated models as needed. Each of these requires its own specific tools, and it’s these changes that make the ML lifecycle so challenging compared with traditional software development lifecycle (SDLC) management.
This represents a serious shift and creates challenges compared with a more traditional software development lifecycle for the following reasons:
- The diversity and number of ML tools involved, coupled with a lack of standardization across ML libraries and frameworks
- The continuous nature of ML development, accompanied by a lack of tracking and management tools for machine learning models and experiments
- The complexity of productionizing ML models due to the lack of integration among data pipelines, ML environments and production services
Let’s look at each of these areas in turn.
The diversity and number of ML tools involved
While the traditional software development process leads to the rationalization and governance of tools and platforms used for developing and managing applications, the ML lifecycle relies on data scientists’ ability to use multiple tools, whether for preparing data and training models, or deploying them for production use. Data scientists will seek the latest algorithms from the most up-to-date ML libraries and frameworks available to compare results and improve performance.
However, due to the variety of available tools and the lack of detailed tracking, teams often have trouble getting the same code to work again in the same way. Reproducing the ML workflow is a critical challenge, whether a data scientist needs to pass training code to an engineer for use in production or go back to past work to debug a problem.
The continuous nature of ML development
Technology never stands still. New data, algorithms, libraries and frameworks impact model performance continuously and, thus, need to be tested. Therefore, machine learning development requires a continuous approach, along with tracking capabilities to compare and reproduce results. The performance of ML models depends not only on the algorithms used, but also on the quality of the data sets and the parameter values for the models.
Whether practitioners work alone or on teams, it’s still very difficult to track which parameters, code and data went into each experiment to produce a model, due to the intricate nature of the ML lifecycle itself.
The complexity of productionizing ML models
In software development, the architecture is set early on, based on the target application. Once the infrastructure and architecture have been chosen, they won’t be updated or changed due to the sheer amount of work involved in rebuilding applications from scratch. Modern developments, such as the move to microservices, are making this easier, but for the most part, SDLC focuses on maintaining and improving what already exists.
With machine learning the first goal is to build a model. And keep in mind: a model’s performance in terms of accuracy and sensitivity is agnostic from the deployment mode. However, models can be heavily dependent on latency, and the chosen architecture requires significant scalability based on the business application. End-to-end ML pipeline designs can be great for batch analytics and looking at streaming data, but they can involve different approaches for real-time scoring when an application is based on a microservice architecture working via REST APIs, etc.
One of today’s key challenges is to effectively transition models from experimentation to staging and production — without needing to rewrite the code for production use. This is time-consuming and risky as it can introduce new bugs. There are many solutions available to productionize a model quickly, but practitioners need the ability to choose and deploy models across any platform, and scale resources as needed to manage model inference effectively on big data, in batch or real time.
Many data science and machine learning projects fail due to preventable issues that have been resolved in software engineering for more than a decade. However, those solutions need to be adapted due to key differences between developing code and training ML models.
- **Expertise, code and data** — With the addition of data, data science and ML, code not only needs to deal with data dependencies but also handle the inherent nondeterministic characteristics of statistical modeling. ML models are not guaranteed to behave the same way when trained twice, unlike traditional code, which can be easily unit tested.
- **Model artifacts** — In addition to application code, ML products and features also depend on models that are the result of a training process. Those model artifacts can often be large — on the order of gigabytes — and often need to be served differently from code itself.
- **Collaboration** — In large organizations, models that are deployed in an application are usually not trained by the same people responsible for the deployment. Handoffs between experimentation, testing and production deployments are similar but not identical to approval processes in software engineering.
The need for standardization
Some of the world’s largest tech companies have already begun solving these problems internally with their own machine learning platforms and lifecycle management tools. These internal platforms have been extremely successful and are designed to accelerate the ML lifecycle by standardizing the process of data preparation, model training, and deployment via APIs built for data scientists. The platforms not only help standardize the ML lifecycle but also play a major role in retaining knowledge and best practices, and maximizing data science team productivity and collaboration, thereby leading to greater ROI.
Internally driven strategies still have limitations. First, they are limited to a few algorithms or frameworks. Adoption of new tools or libraries can lead to significant bottlenecks. Of course, data scientists always want to try the latest and the best algorithms, libraries and frameworks — the most recent versions of PyTorch, TensorFlow and so on. Unfortunately, production teams cannot easily incorporate these into the custom ML platform without significant rework. The second limitation is that each platform is tied to a specific company’s infrastructure. This can limit sharing of efforts among data scientists. As each framework is so specific, options for deployment can be limited.
The question then is: Can similar benefits to these systems be provided in an open manner? This evaluation must be based on the widest possible mix of tools, languages, libraries and infrastructures. Without this approach, it will be very difficult for data scientists to evolve their ML models and keep pace with industry developments. Moreover, by making it available as open source, the wider industry will be able to join in and contribute to ML’s wider adoption. This also makes it easier to move between various tools and libraries over time.
² Facebook has implemented its FBLearner Flow platform, Uber has a service called Michelangelo, and Google has TFX.
At Databricks, we believe that there should be a better way to manage the ML lifecycle. So in June 2018, we unveiled MLflow, an open-source machine learning platform for managing the complete ML lifecycle.
“MLflow is designed to be a cross-cloud, modular, API-first framework, to work well with all popular ML frameworks and libraries. It is open and extensible by design, and platform agnostic for maximum flexibility.”
With MLflow, data scientists can now package code as reproducible runs, execute and compare hundreds of parallel experiments, and leverage any hardware or software platform for training, hyperparameter tuning and more. Also, organizations can deploy and manage models in production on a variety of clouds and serving platforms.
“With MLflow, data science teams can systematically package and reuse models across frameworks, track and share experiments locally or in the cloud, and deploy models virtually anywhere,” says Zaharia. “The flurry of interest and contributions we’ve seen from the data science community validates the need for an open-source framework to streamline the machine learning lifecycle.”
Key benefits
EXPERIMENT TRACKING As mentioned previously, getting ML models to perform takes significant trial and error, and continuous configuration, building, tuning, testing, etc. Therefore, it is imperative to allow data science teams to track all that goes into a specific run, along with the results. With MLflow, data scientists can quickly record runs and keep track of model parameters, results, code and data from each experiment, all in one place.
REPRODUCIBLE PROJECTS The ability to reproduce a project — entirely or just parts of it — is key to data science productivity, knowledge sharing and, hence, accelerating innovation. With MLflow, data scientists can build and package composable projects, capture dependencies and code history for reproducible results, and quickly share projects with their peers.
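Concretely, a project is described by an `MLproject` file at the repository root. The sketch below is illustrative only; the project name, entry-point parameters and script name are hypothetical:

```yaml
name: example-project          # hypothetical project name
conda_env: conda.yaml          # pins the software environment for reproducibility
entry_points:
  main:
    parameters:
      epochs: {type: int, default: 10}
      lr: {type: float, default: 0.01}
    command: "python train.py --epochs {epochs} --lr {lr}"
```

Running `mlflow run . -P epochs=20` then recreates the declared environment and executes the entry point, locally or against a remote target.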
FLEXIBLE DEPLOYMENT There is virtually no limit to what machine learning can do for your business. However, there are different ways to architect ML applications for production, and various tools can be used for deploying models, which often lead to code rewrites prior to deploying ML models into production. With MLflow, your data scientists can quickly download or deploy any saved models to various platforms — locally or in the cloud — from experimentation to production.
**MODEL MANAGEMENT** Use one central place to share ML models, collaborate on moving them from experimentation to online testing and production, integrate with approval and governance workflows, and monitor ML deployments and their performance. This is powered by the latest MLflow component, MLflow Model Registry.
Use case examples
Let’s examine three use cases to explore how users can leverage some of the MLflow components.
**EXPERIMENT TRACKING** A European energy company is using MLflow to track and update hundreds of energy-grid models. This company’s goal is to build a time-series model for every major energy producer (e.g., power plant) and consumer (e.g., factory), monitor these models using standard metrics, and combine the predictions to drive business processes, such as pricing. Because a single team is responsible for hundreds of models, possibly using different ML libraries, it’s important to have a standard development and tracking process. The team has standardized on Jupyter notebooks for development, MLflow Tracking for metrics, and Databricks Jobs for inference.
**REPRODUCIBLE PROJECTS** An online marketplace is using MLflow to package deep learning jobs using Keras and run them in the cloud. Each data scientist develops models locally on a laptop using a small data set, checks them into a Git repository with an MLproject file, and submits remote runs of the project to GPU instances in the cloud for large-scale training or hyperparameter search. Using MLflow Projects makes it easy to create the same software environment in the cloud and share project code among data scientists.
**MODEL PACKAGING** An e-commerce site’s data science team is using MLflow Model Registry to package recommendation models for use by application engineers. This presents a technical challenge because the recommendation application includes both a standard, off-the-shelf recommendation model and custom business logic for pre- and post-processing. For example, the application might include custom code to ensure the recommended items are diverse. This business logic needs to change in sync with the model, and the data science team wants to control both the business logic and the model, without having to submit a patch to the web application each time the logic has to change. Moreover, the team wants to A/B test distinct models with distinct versions of the processing logic. The solution was to package both the recommendation model and the custom logic using the `python_function` flavor in an MLflow Model, which can then be deployed and tested as a single unit.
Open and extensible by design
Since we unveiled and open sourced MLflow in June 2018 at the Spark + AI Summit in San Francisco, community engagement and contributions have led to an impressive array of new features and integrations:
**SUPPORT FOR MULTIPLE PROGRAMMING LANGUAGES**
To give developers a choice, MLflow supports R, Python, Java and Scala, along with a REST server interface that can be used from any language.
**INTEGRATION WITH POPULAR ML LIBRARIES AND FRAMEWORKS**
MLflow has built-in integrations with the most popular machine learning libraries — such as scikit-learn, TensorFlow, Keras, PyTorch, H2O, and Apache Spark™ MLlib — to help teams build, test and deploy machine learning applications.
**CROSS-CLOUD SUPPORT**
Organizations can use MLflow to quickly deploy machine learning models to multiple cloud services, including Databricks, Azure Machine Learning and Amazon SageMaker, depending on their needs. MLflow leverages AWS S3, Google Cloud Storage and Azure Data Lake Storage, allowing teams to easily track and share artifacts from their code.
Rapid community adoption
- 2.5M monthly downloads
- 200+ code contributors
- 100+ contributing organizations
Organizations using and contributing to MLflow
Source: mlflow.org
MLflow originally introduced the ability to track metrics, parameters and artifacts as part of experiments, package models and reproducible ML projects, and deploy models to batch or to real-time serving platforms.
The latest MLflow component — MLflow Model Registry — builds on MLflow’s original capabilities to provide organizations with one central place to share ML models, collaborate on moving them from experimentation to testing and production, and implement approval and governance workflows.
The MLflow Model Registry complements the MLflow offering and is designed to help organizations implement good engineering principles with machine learning initiatives, such as collaboration, governance, reproducibility and knowledge management. The next few pages highlight some of the key features of this new component.
One hub for managing ML models collaboratively
Building and deploying ML models is a team sport. Not only are the responsibilities along the machine learning model lifecycle often split across multiple people (e.g., data scientists train models whereas production engineers deploy them), but also at each lifecycle stage, teams can benefit from collaboration and sharing (e.g., a fraud model built in one part of the organization could be reused in others).
MLflow facilitates sharing of expertise and knowledge across teams by making ML models more discoverable and providing collaborative features to jointly improve on common ML tasks. Simply register an MLflow model from your experiments to get started. The MLflow Model Registry will then let you track multiple versions of the model and mark each one with a lifecycle stage: development, staging, production or archived.
Flexible CI/CD pipelines to manage stage transitions
MLflow lets you manage your models’ lifecycles either manually or through automated tools. Analogous to the approval process in software engineering, users can manually request to move a model to a new lifecycle stage (e.g., from staging to production), and review or comment on other users’ transition requests.
Alternatively, you can use the Model Registry’s API to plug in continuous integration and deployment (CI/CD) tools, such as Jenkins, to automatically test and transition your models. Each model also links to the experiment run that built it — in MLflow Tracking — to let you easily review models.
Visibility and governance for the full ML lifecycle
In large enterprises, the number of ML models that are in development, staging and production at any given point in time may be in the hundreds or thousands. Having full visibility into which models exist, what stages they are in and who has collaborated on and changed the deployment stages of a model allows organizations to better manage their ML efforts.
MLflow provides full visibility and enables governance by keeping track of each model’s history and managing who can approve changes to the model’s stages.
Identify versions, stages and authors of each model
Standardizing the ML lifecycle with MLflow is a great step to ensure that data scientists can share and track experiments, compare results, reproduce runs and productionize faster.
In addition to increasing data science team productivity and collaboration and applying good engineering practices to machine learning, organizations also need to do the following:
- Reliably ingest, ETL and catalog big data
- Work with state-of-the-art ML frameworks and tools
- Easily scale compute from single to multi-node
Databricks excels at all the above. Learn more at databricks.com
Databricks accelerates innovation by unifying data science, engineering and business. Through a fully managed, cloud-based service built by the original creators of Apache Spark, Delta Lake and MLflow, the Databricks Unified Data Analytics Platform lowers the barrier for enterprises to innovate with AI and accelerates their innovation.
Data engineering
Speed up the preparation of high-quality data, essential for best-in-class ML applications, at scale
Data science
Collaboratively explore large data sets, build models iteratively and deploy across multiple platforms
Providing managed MLflow on Databricks
MLflow is natively integrated with the Databricks Unified Data Analytics Platform so that ML practitioners and engineers can benefit from out-of-the-box tracking, packaging, deployment and management capabilities for ML models with enterprise reliability, security and scale.
By using MLflow as part of Databricks, data scientists can:
**WORKSPACES**
Benefit from a streamlined experiment tracking experience with Databricks Workspace and collaborative Notebooks
**BIG DATA SNAPSHOTS**
Track large-scale data that fed the models, along with all the other model parameters, then reproduce training runs reliably
**JOBS**
Easily initiate jobs remotely, from an on-premises environment or from Databricks notebooks
**SECURITY**
Take advantage of one common security model for the entire machine learning lifecycle
Read our blog to learn more about these integrations.
Getting data ready for ML with Delta Lake
Delta Lake is a storage layer that brings reliability to data lakes. Delta Lake provides ACID transactions and scalable metadata handling, and it unifies streaming and batch data processing. Delta Lake runs on top of your existing data lake and is fully compatible with Apache Spark APIs.
By using Delta Lake, data engineers and data scientists can keep track of data used for model training.
**Ready-to-use ML environments**
Databricks Runtime for Machine Learning provides data scientists and ML practitioners with on-demand access to ready-to-use machine learning clusters that are preconfigured with the latest and most popular machine learning frameworks, including TensorFlow, Keras, PyTorch, scikit-learn, XGBoost and Horovod.
By using the Databricks Runtime for ML, data scientists can get to results faster with one-click access to ML clusters, optimized performance on popular ML algorithms, and simplified distributed deep learning on Horovod and GPUs. It also supports Conda for further customization.
---
**Packages and optimizes most common ML frameworks**
- TensorFlow
- Keras
- PyTorch
- scikit-learn
- XGBoost
- Horovod
**Built-in optimization for distributed deep learning**
- Distribute and scale any single-machine ML code to thousands of machines
**Built-in AutoML and experiment tracking**
- AutoML and experiment tracking/visualizations with MLflow
**Customized environments using Conda**
- requirements.txt
- conda.yaml
---
CHAPTER 7: Standardizing the Machine Learning Lifecycle on Databricks
CHAPTER 8: **Getting Started**
Take the next step toward standardizing your ML lifecycle — test drive MLflow and the Databricks Unified Data Analytics Platform.
- [START YOUR FREE TRIAL](#)
- [REQUEST A PERSONALIZED DEMO](#)
- [LEARN MORE](#)
- [JOIN THE COMMUNITY](#)
## CHAPTER 9: Comparison Matrix
<table>
<thead>
<tr>
<th><strong>EXPERIMENT TRACKING</strong></th>
<th><strong>OPEN SOURCE MLFLOW</strong></th>
<th><strong>MANAGED MLFLOW ON DATABRICKS</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>MLflow Tracking API</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>MLflow Tracking Server</td>
<td>✓ Self-hosted</td>
<td>✓ Fully managed</td>
</tr>
<tr>
<td>Notebook Integration</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Workspace Integration</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><strong>REPRODUCIBLE PROJECTS</strong></th>
<th><strong>OPEN SOURCE MLFLOW</strong></th>
<th><strong>MANAGED MLFLOW ON DATABRICKS</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>MLflow Projects</td>
<td>✓</td>
<td>✓ With remote execution</td>
</tr>
<tr>
<td>GitHub and Conda Integration</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Scalable Cloud/Clusters for Project Runs</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><strong>MODEL MANAGEMENT</strong></th>
<th><strong>OPEN SOURCE MLFLOW</strong></th>
<th><strong>MANAGED MLFLOW ON DATABRICKS</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>MLflow Model Registry</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Model Versioning</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Stage Transitions and Comments</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>CI/CD Workflow Integration</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Model Stage</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><strong>FLEXIBLE DEPLOYMENT</strong></th>
<th><strong>OPEN SOURCE MLFLOW</strong></th>
<th><strong>MANAGED MLFLOW ON DATABRICKS</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>MLflow Models</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Built-In Batch Inference</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Built-In Streaming Analytics</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th><strong>SECURITY AND MANAGEMENT</strong></th>
<th><strong>OPEN SOURCE MLFLOW</strong></th>
<th><strong>MANAGED MLFLOW ON DATABRICKS</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>High Availability</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Automated Updates</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Role-Based Access Control</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
EasyLab: Model-Based Development of Software for Mechatronic Systems
Simon Barner, Michael Geisinger, Christian Buckl and Alois Knoll
Department of Informatics
Technische Universität München
Garching b. München, Germany
{barner,geisinge,buckl,knoll}@in.tum.de
Abstract—Model-based development tools are one possible solution to handle the increasing complexity of mechatronic systems. While traditional approaches often separate design of hardware and software, especially in mechatronic systems hardware/software interaction is the most critical component. Hence, both aspects must be considered in this context. The goal is a model-based development tool for software/hardware co-design including the generation of efficient code for the respective target platforms. EasyLab is a modular and easily expandable development tool especially suitable for such applications. Its objectives are to facilitate reusability and to accelerate the development process. It raises the level of abstraction and thus simplifies the development of mechatronic systems even for inexperienced users. A graphical user interface provides various modeling languages that are easy to use. By employing platform-optimized generation of the code, efficiency of the resulting programs can be guaranteed, which we demonstrate on a set of experimental mechatronic systems.
I. INTRODUCTION
Mechatronic and embedded systems are becoming increasingly complex. While sophisticated tools for the development of the mechanical and electronic parts are available, the implementation of the software is typically done from scratch. Due to shortened product life cycles and an emerging need for flexibility, this approach is not feasible any more. Development tools for hardware/software co-design are required that raise the level of abstraction to accelerate the development process, but guarantee an efficient and reliable implementation to match the resource constraints of mechatronic systems.
For standard software, model-based development [1] has become state of the art in software engineering. Several model-based development tools, such as Matlab/Simulink [2] or Scade [3] are available for the domain of embedded systems. However, these tools rely on the generation of ANSI-C code and therefore can only be used to implement the application functionality. Code for hardware related aspects and other non-functional properties has to be manually implemented by the developer. This code however forms the majority of the code required for mechatronic systems. The major reason why the generation of such code is not supported is the platform dependency of the code. Due to the vast heterogeneity of hardware [4], it is not possible to implement a code generator that supports all possible hardware platforms a priori. Rather, a suitable development tool must be designed in a way that it can be easily expanded to support further platforms.
This paper presents the development tool EasyLab that targets the development of mechatronic systems. The main contributions of this work are a tool with a high level of abstraction that simplifies and accelerates development, a fully modular design to support reusability, a completely integrated solution for hardware/software co-design, and the possibility to generate efficient, hardware-dependent code.
The developer benefits from the abstraction of a graphical development interface similar to widespread graphical environments such as Matlab/Simulink or LabView. In contrast to existing tools, EasyLab allows the specification of hardware characteristics and generation of corresponding code. The tool is expandable in two dimensions: regarding the modeling and the code generation functionality. Expandability with respect to the modeling functionality is achieved by relying on actor-oriented design [5]. An actor is a software component and can for instance realize a PID controller, but also the triggering of a sensor or actuator. Expandability with respect to the code generation functionality is achieved by a template-based approach [6]. The main difference between template-based and component-based [7] approaches is the high adaptability of templates. It is therefore possible to generate very efficient code [8].
The outline of this paper is as follows. First, we introduce the Match-X construction kit, which is one of the main target platforms of EasyLab. The main part of the paper covers design and implementation of EasyLab. We introduce two graphical programming languages as well as other key features of the application, such as code generation and simulation. To illustrate the software modeling process with EasyLab, we present two experimental setups that were built during the development of EasyLab. Finally, we list related and future work and summarize the goals and results presented in this paper.
II. HARDWARE TARGET PLATFORMS
EasyLab is designed to support different microcontroller and processor architectures with a rich set of peripherals and has a focus on resource-constrained systems. For an overview of EasyLab’s capability to model these aspects of hardware, see section III-B. To demonstrate the potential of our approach, a modular hardware architecture is preferable. We therefore chose the Match-X construction kit as a reference platform which is explained in the following.
A. The Match-X Construction Kit
The Match-X construction kit consists of modular components that can be flexibly assembled into complex mechatronic systems. The construction kit forms the first stage of controller hardware development for mechatronic systems and provides a way to efficiently prototype such systems. The development process along with mechanical and electrical interfaces of the hardware building blocks are specified in a standard of the VDMA¹ [9]. The standard also describes the transition to small batch production as well as series production. Figure 1 shows the geometry of a building block from the Match-X construction kit. Figure 2 shows a completely assembled stack of three building blocks. The block at the bottom contains a voltage regulator and features an RS485 interface, the block in the middle contains a Microchip PIC18F2520 microprocessor and the topmost block allows attachment of sensors and actuators. Although the size of these building blocks is very small, the microcontroller is clocked with 20 MHz and is thus suitable to perform complex controller tasks.
B. Other Target Platforms
To point out the universality of this approach, EasyLab does not only focus on the Match-X hardware construction kit, but also on a variety of other microcontroller platforms. In this paper, we show the results of an evaluation where we used EasyLab to design the application logic for a complex control task running on an ATMEL ATmega128 microprocessor. We also plan to support other microprocessor types like ARM and Fujitsu processors.
III. EASYLAB
EasyLab provides a high-level programming environment for the design of software for mechatronic systems. Due to the raised level of abstraction, even people inexperienced in microcontroller programming can develop complex applications using various graphical modeling languages. The design process is based on the selection of functional components (actors) that can be connected to form the application. One can distinguish between two different types of actors: hardware-dependent actors (e.g., a software component controlling a sensor) and hardware-independent actors (e.g., basic mathematical operations or more complex ones like various controllers). As model of computation, the synchronous data flow model is chosen as it reflects the typical engineering approach. However, a pure data flow model is too inflexible to express typical application behavior. This issue can be addressed by combining different models of computation (modal models [5]). EasyLab combines state machines with data flow graphs to allow easy and intuitive modeling of the application. Based on the models, the tool allows both simulation and the generation of efficient code. The core features of EasyLab are explained in the following.
A. Graphical Modeling Languages
EasyLab supports two graphical modeling languages:
1) Structured flow chart: The structured flow chart language (SFC) describes the states of a program and how state transitions are performed. The language has been designed in the style of EN 61131-3 [10] (part “SFC”).
States of SFC programs are references to sub-programs that can be described in any of the available languages. Thus, a state is either an SFC program itself or a reference to an SDF program (see below) which is executed periodically. Consequently, SDF programs are the leaves in the recursive specification of an SFC program. An SFC program has exactly one initial state.
Elements in the SFC language that determine the control flow are state sequences, alternative branches (conditional execution), parallel branches, jumps and program termination.
Both sequential and alternative composition are based on conditions that are defined as Boolean expressions consisting of Boolean constants and variables as well as comparisons. Comparisons contain arithmetic expressions composed of functions and operators on constants and global and local variables. Local variables refer to values computed right before the respective condition.
Figure 3 shows two example programs in SFC language.
---
¹The German Association of Machinery Manufacturers.
2) Synchronous data flow: Various algorithms have been found that statically compute valid schedules for multi-rate synchronous data flow (SDF) graphs (if they exist) and have bounded buffer memory requirements [12], [5]. Depending on the structure of the graph, single appearance schedules can be found to minimize the amount of code [13], [14] and buffer sizes [15].
The SDF language is designed in the style of EN 61131-3 [10] (part “FBD”). We will present an example program in SDF language in figure 5 later in this paper.
B. Hardware Model
A key feature in mechatronics is the interaction between software and hardware. However, the interface between these components and of course also the hardware itself varies between different systems. This means that designing applications for mechatronic systems can only succeed if the modeling tool also allows modeling hardware aspects of the respective systems.
EasyLab’s hardware model includes all aspects that are necessary to describe the interaction of the software part of a mechatronic system and the hardware, e.g., sensors, actuators, input and output interfaces. Hence, a device type library specifies which actor instances may be used in a certain hardware environment. In conjunction with a resource management algorithm, it restricts the user to the set of hardware resources that is actually available.
The device descriptions also specify how hardware resources are represented in the application logic and how these representations are mapped to the real hardware (see also section III-D). As for actor types in the SDF language, device types are defined using actions start, stop and step specifying the behavior when the application starts, terminates or when a single step in the SFC program is performed. To guarantee correct operation, access to the underlying hardware is buffered using a set of input and output variables.
C. Code Generation
EasyLab features an integrated code generator that transforms the application model into code suitable for the respective compiler. This is achieved by assigning a code template to each primitive element in the respective modeling language. Currently, models are transformed into C code (supporting the mcc18 and avr-gcc compiler tool chains).
In the generated code, each state and transition condition of an SFC program is represented as a function performing an action and returning the address of the function to be executed next (inspired by the continuation passing style of functional programming languages). References to subprograms have the effect of executing the respective program and returning a function to be executed next, usually the subsequent transition condition. Functions representing transition conditions have no side effects and evaluate the respective Boolean expression.
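The continuation-passing execution scheme can be illustrated with a small sketch (in Python rather than the generated C, and with hypothetical state names): every state and transition condition is a function that performs its action and returns the function to execute next, and returning a sentinel terminates the program.

```python
# Minimal sketch of the continuation-passing execution scheme: each
# step returns the next step to run; None terminates the program.

def run(entry):
    """Drive the program: call each step until one returns None."""
    step = entry
    while step is not None:
        step = step()

def make_blink_program(log):
    # hypothetical two-state program: "on" -> condition -> "off" -> end
    def state_on():
        log.append("on")      # action of the first state
        return cond_done

    def cond_done():
        # transition conditions are side-effect free
        return state_off

    def state_off():
        log.append("off")     # action of the second state
        return None           # terminate

    return state_on
```

The driver loop never needs to know the program structure; control flow is carried entirely by the returned function references.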
While the transformation of SFC programs into executable programs is straightforward, the implementation of multi-rate data flow graphs into efficient code relies on static scheduling techniques as pointed out in section III-A.2.
Invoking the compiler as well as transferring the program to the target device is also integrated into EasyLab. We are working on further model transformations that optimize the generated code in terms of memory usage and runtime.
D. Runtime Library
In the sections above, we implicitly stated that employing code templates is sufficient to distinguish how a certain functionality is implemented across different microprocessor architectures. Although this might be true, writing a separate code template for each target architecture is tedious and scales badly. Furthermore, it would be very hard to ensure the functional consistency of the various code templates. Therefore, we added another layer in between the code templates and the actual hardware: a set of platform-specific runtime libraries.
Each runtime library provides basic hardware-related functionality that is common among almost all types of microprocessors (digital and analog I/O, communication via UART, etc.) as well as some software-only features (data structures optimized for low resource usage, fixed-point arithmetic, etc.). The runtime libraries for all target platforms implement a common interface that is designed to allow implementation of a subset of all possible features if some features are not supported by the respective microprocessor.
After code generation, the runtime library corresponding to the selected microcontroller is linked against the generated program. This allows code templates to use the common interface exposed by all runtime libraries, hence making them much more compact and less error-prone. Furthermore, this design makes it easier to optimize the code contained in the code templates because they have fewer dependencies on the actual hardware and are formulated at a higher level of abstraction.
Actually, EasyLab’s runtime library offers at least a subset of the functionality of operating systems for resource-constrained platforms (e.g., Contiki [16], TinyOS [17], FreeRTOS [18]). With these systems, application code is developed against a given API and a firmware image is obtained by linking both the user-supplied and the operating system’s code into a monolithic firmware image. In fact, EasyLab’s modular approach (see section III-F) allows for the implementation of an alternative runtime library on top of existing operating systems like those mentioned above.
E. Simulation
Besides the generation of code, EasyLab also features direct simulation of models (i.e., without code generation). In order to simulate as much of the mechatronic system as possible, EasyLab simulates both application code and hardware devices. While simulation can be used to detect design errors early in the development process or if the hardware is currently not available, it is also possible to perform control tasks directly in simulation mode.
In this context, EasyLab has successfully been used to control the mobile robot platform Robotino over a wireless network connection using appropriate device plugins (see next section). A line follow application that is based on the robot’s camera and several image processing actors has been used to verify the performance of the simulation component.
F. Expandability
The application is built up in a modular way to ensure reusability of programs developed for one target application in other projects. A developer may expand the functionality of EasyLab in the following dimensions:
- New actor types can be added to the actor type library. For each actor type, an annotated code template as well as a simulation plugin have to be specified. Actors can be reused in any project developed with EasyLab.
- New hardware models may be added to the device type library. This allows a developer to add completely new combinations of hardware components. This is especially useful for the Match-X construction kit, where hardware modules may be combined in many ways. Hardware-specific actor types can be configured to be only available if a certain device instance is added to the project.
- New compiler tool chains may be added. This is necessary if a new type of processor is to be used as target platform for EasyLab. Tool chains specify which external programs are used to build the application and transfer it to the target device. They also influence how the code is generated. As this approach may not provide enough flexibility for all situations, a developer also has the choice of implementing the requested functionality as part of the EasyLab runtime library that is then linked to the project according to the selected tool chain.
IV. Evaluation
Two sample applications were implemented to demonstrate the benefits of the developed tool, namely the modular design of the system with respect to both hardware and software and the efficiency of the generated code.
A. Pneumatic Cylinder
The first evaluation scenario involves a pneumatic cylinder that is controlled by a microcontroller from the Match-X construction kit. The cylinder has two magnetic valves for expanding and retracting a piston. Furthermore, an analog sensor measures the current position of the piston. Figure 4 shows the experimental setup. The Match-X stack used in this experiment consists of the following building blocks: voltage regulator, CPU, A/D converter (for analog sensor input) and two drivers for inductive loads (one per valve).
The control task that should be achieved is to move the piston to a predefined position and to hold that position even if some force is applied to the piston. Since no proportional valves were available at the time of writing this paper, the corresponding control program is quite simple.
For all hardware sensors and actuators, adequate software actors are available in EasyLab. Figure 5 shows the program as an SDF graph. The constant pos defines the set point and
**Fig. 4.** Pneumatic cylinder experimental setup (the photo labels the Match-X building blocks, the analog sensor, the piston, and the valves on the side)
tol gives a certain tolerance for the position of the piston to avoid oscillation. The controller regulates the piston to a position in the interval \([pos - tol, pos + tol]\).
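The control logic of the SDF program can be sketched as a single step function (a Python sketch, not EasyLab's generated code; the threshold directions are an assumption based on the description above):

```python
# Hedged sketch of the figure-5 controller: the piston is held inside
# [pos - tol, pos + tol] by two threshold comparisons driving the
# retract and expand valves.

def control_step(value, pos, tol):
    """One SDF iteration: return (retract, expand) valve commands."""
    retract = value > pos + tol   # piston beyond the set point: retract
    expand = value < pos - tol    # piston short of the set point: expand
    return retract, expand
```

Inside the tolerance band both valves stay off, which avoids the oscillation mentioned above.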
**Fig. 5.** Pneumatic cylinder controller in SDF language
(The graph routes the sensor value and the constants pos and tol through Addition/Subtraction and Less than/Greater than actors into the Retract and Expand valve actors.)
**Fig. 6.** Inverted pendulum experimental setup
B. Inverted Pendulum

The goal of the experiment is to accelerate the pendulum by moving the cart from one side to the other until the rod is in an upright position, and to hold the rod in that position afterwards. During both stages, the cart should stay aligned to the center of the rail to prevent it from moving off the rail.
The experimental setup features two sensors and one actuator: one sensor measures the position of the cart, the second one measures the inclination of the rod. The only actuator is the electric motor, for which we regulate the voltage.
To demonstrate the efficiency of the code generated by EasyLab, we implemented the system using an ATMEL ATmega128 microcontroller with a clock frequency of 16 MHz. First, we augmented EasyLab to allow us to access the sensor values necessary to achieve the task. For this purpose, we added a new actor type that returns the current value of the respective sensor as well as its derivative.
A suitable controller for a self-erecting inverted pendulum consists of two modes (swing-up and balance), which are implemented in terms of the SFC program depicted in figure 7. The controller used to erect the rod is based on a heuristic using a proportional-velocity cart position controller that swings the rod back and forth until the rod has a maximum deviation of \(\varepsilon\) to the upright position (first state in SFC program and subsequent transition condition). The second state corresponds to the execution of the balance SDF program according to equation (1) (see below), which can be derived from a linear state-space model of the pendulum using the linear quadratic regulator (LQR) technique. While the balance state is never left, the final jump was added to obtain a well-formed program.
**Fig. 7.** Inverted pendulum SFC program (the transition condition out of the swing-up state is \([\alpha_i < \varepsilon]\))
For the \(i\)th step, let \(\alpha_i\) be the inclination of the rod (in radians) and \(\partial \alpha_i\) its angular velocity, with \(\alpha_i = 0\) when the rod is upright. Let \(c_i\) be the position of the cart and \(\partial c_i\) its velocity, with \(c_i = 0\) when the cart is at its center position. Then a PID controller used to balance the pendulum in the upright position can be defined as follows, where \(U_n\) is the voltage to drive the motor with at step \(n \geq 0\) (the constants \(k_1, \ldots, k_6\) depend on the physical characteristics of the rod):
\[
U_n = k_1 \cdot \alpha_n + k_2 \cdot \partial \alpha_n + k_3 \cdot \sum_{i=0}^{n} \alpha_i + k_4 \cdot c_n + k_5 \cdot \partial c_n + k_6 \cdot \sum_{i=0}^{n} c_i \tag{1}
\]
The transformation of the above formula into an SDF program is straightforward.
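As a sanity check of equation (1), the controller can be written in straight-line code; the gains `k[0]..k[5]` stand for \(k_1, \ldots, k_6\) and are placeholders, not the constants used in the experiment:

```python
# Sketch of equation (1) with running sums for the integral terms.
# Gains are placeholders, not the experiment's tuned constants.

def make_pendulum_controller(k):
    sums = {"alpha": 0.0, "c": 0.0}   # running integral terms

    def step(alpha, d_alpha, c, d_c):
        sums["alpha"] += alpha
        sums["c"] += c
        # U_n = k1*a_n + k2*da_n + k3*sum(a) + k4*c_n + k5*dc_n + k6*sum(c)
        return (k[0] * alpha + k[1] * d_alpha + k[2] * sums["alpha"]
                + k[3] * c + k[4] * d_c + k[5] * sums["c"])

    return step
```

The two running sums are the only state the controller carries between steps, which maps directly onto SDF actors with internal state.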
V. RELATED WORK
EasyLab is unique in the sense that it supports hardware/software co-design without focusing on a specific set of hardware platforms. It is expandable both regarding modeling and code generation.
Several other tools and projects influenced the development of EasyLab. The graphical user interface is in the style of state-of-the-art tools like Matlab/Simulink [2] that also provide data flow design based on actors. The theory behind actors is however derived from the project Ptolemy [5]. In contrast to both tools, EasyLab augments the notion of actors by introducing hardware related actors which is a precondition for successful hardware/software co-design. The introduction of typed data flow allows for the generation of code tailored to resource-constrained target architectures.
EasyLab uses synchronous data flow graphs and structured flow charts as model of computation, as defined in EN 61131-3 [10]. While there are several other tools such as CoDeSys that are compliant with this norm, EasyLab is the only one that can be augmented for a specific hardware platform and can thus generate optimized hardware-dependent code. The concept for code generation is based on previous work on template-based code generation [6], [20].
VI. SUMMARY
In this work, we presented a new model-based programming tool for mechatronic systems. In contrast to well-known tools from the world of embedded systems, EasyLab also provides components that are specialized for mechatronic systems, where the diversity of the used hardware is much more relevant than in other types of embedded systems.
We have also shown that EasyLab allows both modeling of software and hardware functionality and introduced the two graphical modeling languages that have been implemented so far, namely synchronous data flow (SDF) and state flow charts (SFC).
Finally, we presented two demonstration setups that showed that the workflow of programming mechatronic systems is the same for different types of microprocessor platforms. The inverted pendulum setup demonstrates that the code generated by EasyLab performs well and is suitable for real-time control tasks.
VII. FUTURE WORK
We are currently working on a remote debugging facility for programs generated by EasyLab whose purpose is to provide transparent real-time access to the state of a mechatronic system at the granularity of the underlying model. It is based on code instrumentation and a marshaling layer that is tailored to the resources and communication interfaces that are available in typical embedded systems. Thus, firmly integrated inspection and manipulation of both data and program state at the model level will increase the abstraction also in the debugging phase of the product development cycle.
We also plan to extend EasyLab to support networked (distributed) mechatronic systems. The time-triggered model [21] seems to be a feasible approach that integrates well with the synchronous data flow employed in the current stage.
VIII. ACKNOWLEDGMENT
This work is funded by the German Ministry of Education and Research.
REFERENCES
Sorting Algorithms and Run-Time Complexity
Leanne R. Hinrichs
May 2015
Abstract
In combinatorics, sometimes simple questions require involved answers. For instance, we often want to compare multiple algorithms engineered to perform the same task to determine which is functioning most efficiently. Here, we introduce the bubble sort and merge sort algorithms for arranging objects in a row, and discuss the run-time complexity of both.
1 Introduction
Given a list of six integers, it would hardly be a challenge to arrange the values from smallest to largest. Some would begin by selecting the largest integer from the list, correctly positioning it at the rightmost position, and continuing to place the second and third largest digits and so on to the left of the largest until the integers are fully sorted. Others might perform the task in a slightly different way, perhaps making pairwise comparisons of adjacent integers, but it’s safe to say that the time required to complete the sorting would be roughly the same no matter the method chosen.
If we consider, instead, a list of sixty thousand distinct integers requiring sorting, the strategy would have to be much more precise. For instance, the previous technique of finding the largest integer and moving it to the end of the list would prove ridiculously laborious. Fortunately, computers are of great assistance on such tasks. Programming algorithms for seemingly simple tasks such as sorting is common practice, but the efficiency of the design must be considered. If we implement one sorting algorithm that must make many comparisons to exhaustively sort the list, when a different algorithm could have accomplished the task in fewer comparisons, time and computing capacity of the machine are lost.
Here, we introduce two sorting algorithms and discuss the process of each. Pseudocode is given for each method, and run-time complexity is examined. We consider best, average, and worst case scenarios for each algorithm.
2 Bubble Sort Algorithm
Bubble Sort functions by creating a sequence of pairwise comparisons of adjacent elements from an array. Beginning with the leftmost two integers from an array of size \( n \) of distinct integers, the algorithm compares the two numbers. If the left entry is larger than the right entry, the integers swap positions. If the left entry is smaller than the right entry, the two remain in position, and the algorithm continues. Next, Bubble Sort compares the (possibly just changed) second entry and the third entry from the array and makes the comparison step again. Once all adjacent elements have been compared pairwise and necessary swaps have been completed, a full pass of the algorithm is complete. In general, the algorithm will require a maximum of \( n - 1 \) passes to entirely sort the array. We discuss possible improvements of the algorithm using pseudocode after an example.
2.1 Example of Bubble Sort
We first examine a single pass of the Bubble Sort algorithm on an array of size 5.
\[
\begin{array}{ccccc}
4 & 3 & 1 & 6 & 2 \\
\end{array}
\]
Comparing the leftmost digits, the 4 is larger than the 3, so the two will swap position.
\[
\begin{array}{ccccc}
3 & 4 & 1 & 6 & 2 \\
\end{array}
\]
In the second pairwise comparison, the 4 is larger than the 1, so the two will again swap position.
\[
\begin{array}{ccccc}
3 & 1 & 4 & 6 & 2 \\
\end{array}
\]
In the next comparison, the 4 is less than the 6, so the two remain in position, and the algorithm continues. Although no swap takes place, the comparison itself still counts toward the work done by the Bubble Sort algorithm.
\[
\begin{array}{ccccc}
3 & 1 & 4 & 6 & 2 \\
\end{array}
\]
In the final pairwise comparison of this pass, the 6 is larger than the 2, so the two swap position.
\[
\begin{array}{ccccc}
3 & 1 & 4 & 2 & 6 \\
\end{array}
\]
This displays the final state of our algorithm after the first pass is complete. Clearly, however, the list is not entirely sorted, so the algorithm must continue to make subsequent passes.
Note, however, that the largest digit in the array (6) has assumed the correct, rightmost position. In general, we can guarantee that the largest digit in any array of unique positive integers will be placed correctly after the first pass since it wins all pairwise comparisons. In fact, we are guaranteed that we secure one more correct position after each pass of the algorithm; the second largest entry will be in correct position after the second pass of Bubble Sort, and so forth. This idea will be important in making improvements geared toward efficiency of the skeleton Bubble Sort pseudocode [4].
2.2 Bubble Sort Pseudocode
The most simple form for a single pass of the Bubble Sort algorithm is shown below:
```
for i from 0 to n - 2
    if (A[i] > A[i + 1])
        swap(A[i], A[i + 1])
```
The algorithm simply compares each adjacent pair of digits as the swaps are taking place. Note that we only run i from 0 to n-2 because the final digit in the array is not compared with anything beyond.
As discussed previously, a single pass does not typically sort the array entirely, and successive passes are necessary. We introduce an improved version of the full Bubble Sort algorithm and then discuss the changes made.
```
for k from 0 to n - 1
    set fullsort = 0
    for i from 0 to n - k - 2
        if (A[i] > A[i + 1])
            swap(A[i], A[i + 1])
            fullsort = 1
    if fullsort == 0, break
```
The first improvement is that the variable “fullsort” is initialized. In the simple version of the Bubble Sort code, we rely solely on the fact that we are guaranteed one more correctly sorted digit with each pass as previously
discussed. This is an opportunity for great efficiency improvement. If we are able to recognize that the array is entirely sorted after, say, the second pass, it is unproductive to continue making the full $n - 1$ passes. Using the variable “fullsort” we can recognize if a full pass is completed without any swaps taking place. If this is the case, the list is entirely sorted and we fall out of the algorithm.
The other change we introduce is the variable $k$, used to track subsequent passes, and this improvement actually refines what happens inside a single pass. Since we know that after one pass the largest integer is in the correct rightmost position, it is inefficient to compare anything against it on the second pass; it is already known to be larger. Shrinking the upper bound of the $i$ loop with $k$ therefore lets each pass make only the $n - k - 1$ comparisons that are still needed, skipping the portion of the array we already recognize to be sorted.
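The improved pseudocode above can be rendered directly as runnable Python; the inner bound shrinks by one each pass, and the flag stops the algorithm as soon as a full pass completes without any swap:

```python
# Runnable version of the improved Bubble Sort: shrinking inner bound
# plus early exit once a pass completes with no swaps.

def bubble_sort(A):
    n = len(A)
    for k in range(n - 1):
        swapped = False
        for i in range(n - k - 1):        # n - k - 1 comparisons this pass
            if A[i] > A[i + 1]:
                A[i], A[i + 1] = A[i + 1], A[i]
                swapped = True
        if not swapped:                    # no swaps: array is sorted
            break
    return A
```

On the example array from section 2.1, `bubble_sort([4, 3, 1, 6, 2])` yields `[1, 2, 3, 4, 6]`.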
### 2.3 Bubble Sort Run Time
When discussing run-time, the concept of “Big O” is critical. Big O, also referred to as Landau notation, is used to describe the rate of growth or behavior of a function. With many implications in computer science, it allows us to strip coefficients and lower order terms and view only the highest order term of a function.
For example,
$$f(n) = 3n^2 + 9n + 2 \implies f = O(n^2)$$
From this simplified rate of growth, we are more easily able to compare functions side by side to determine traits of efficiency. For instance, if we were comparing two sorting algorithms, one polynomial degree three and another degree two, the algorithm with run-time $O(n^2)$ would be more efficient for large arrays. An algorithm running $n^3$ is better than $1000n^2$ for small $n$, but eventually as $n$ increases $1000n^2$ is better.
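The crossover claim can be verified numerically (a throwaway check, not part of either algorithm): $n^3 < 1000n^2$ exactly while $n < 1000$.

```python
# n**3 beats 1000*n**2 only while n < 1000; past that point the
# quadratic with the large constant wins.

def f(n):
    return n ** 3

def g(n):
    return 1000 * n ** 2

small = all(f(n) < g(n) for n in range(1, 1000))  # True for n = 1..999
large = f(2000) > g(2000)                          # cubic loses for large n
```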
Considering Bubble Sort in particular,
$$\sum_{k=0}^{n-1} n - k - 1 = (n - 1) + (n - 2) + \ldots + 0$$
$$= \frac{n(n - 1)}{2}$$
$$= O(n^2)$$
Algorithms with quadratic run-time are actually considered quite inefficient for sorting. So, while Bubble Sort is simple and a nice beginning example, other sorting algorithms are much more efficient in general. Bubble Sort does, however, perform quite well on certain arrays. Let’s consider Bubble Sort for specific arrangements.
2.4 Bubble Sort Case Scenarios
2.4.1 Best Case
The best case scenario is somewhat trivial. If the array is already sorted before we begin, all the algorithm must do is verify this to be the case. As such, we compare each of the \( n \) values to only one other entry, and stop one short so we do not compare the final entry to anything beyond. The run-time for this configuration is shown below.
\[
n - 1 \leq n \\
\implies \mathcal{O}(n)
\]
Note, this uses the “fullsort” variable from (2.2).
2.4.2 Worst Case
The worst case scenario for the Bubble Sort algorithm is an array written in exactly reverse order. Each pass bubbles only one digit into its final position at the right end, while every other digit moves just one place to the left; in particular, the smallest digit advances a single position per pass, so all of the possible \( n - 1 \) passes of the algorithm must be executed before completion. In this case, every digit is eventually compared with every other digit, instead of each digit being compared with only one other digit as in the best case. For this configuration,
\[
\binom{n}{2} = \frac{n!}{2!(n-2)!} \\
= \frac{n(n-1)}{2} \\
= \frac{n^2 - n}{2} \\
\implies \mathcal{O}(n^2)
\]
Let’s consider another sorting algorithm used to accomplish the same task in a different fashion, and compare the efficiency.
3 Merge Sort Algorithm
Merge Sort is a recursively-defined algorithm which continues to break an array down into smaller and smaller portions and then sorts those smaller pieces before reassembling the sorted array. The number of comparisons required in the reassembly process is smaller than the number needed by Bubble Sort, which increases efficiency. Let’s examine an example.
3.1 Example of Merge Sort
\[ n = 5 \]
\[ 4 \ 3 \ 1 \ 6 \ 2 \]
Beginning with this array of size 5, we break into as evenly sized pieces as possible. In this example, we always go “left heavy,” so if the array is of odd size, the additional entry will fall on the left side.
\[ 4 \; 3 \; 1 \;\big|\; 6 \; 2 \]
Again, we break these smaller arrays into two as evenly as possible.
\[ 4 \; 3 \;\big|\; 1 \;\big|\; 6 \;\big|\; 2 \]
Continuing the process yet another time, we arrive at the base case.
\[ 4 \;\big|\; 3 \;\big|\; 1 \;\big|\; 6 \;\big|\; 2 \]
This array of singletons cannot be broken down further, so we begin the process of reassembly. Remembering that the 4 and the 3 entries were grouped together at the previous step, we make this comparison first and swap that positioning, then consider the 6 and the 2 on the rightmost part of the array. The 1 entry does not get compared to anything in this step.
\[ 3 \; 4 \;\big|\; 1 \;\big|\; 2 \; 6 \]
Now, since the 1 was initially part of the left array, we will recombine it into that portion of the array. This is the portion that creates the opportunity for efficiency beyond that of the Bubble Sort algorithm. Since the 3 and 4 in the array are already known to be sorted, once we recognize that the 1 is less than the 3, no further comparisons are necessary.
\[ 1 \; 3 \; 4 \;\big|\; 2 \; 6 \]
The same idea as the last step is now abstracted into this final merge. Now we compare the leftmost element of the left array, 1, to the leftmost element of the right array, the 2. Once we see that 1 is less than 2, we can definitively say that 1 is the smallest entry in the final array. Now we compare the 2 to the second from left position in the left array. Since 3 is greater than 2, place 2 to the right of 1 (the first entry) - no further comparison of this entry is needed. Thus, we compared the 1 only to the 2 in this step, and the 2 only to the 3, so
every comparison guarantees that we discover the final placement of one element. As a result, a single merge requires at most \( n - 1 \) comparisons to reassemble the list. Since so few comparisons are needed at each level of reassembly, we expect the run-time to be an improvement over Bubble Sort. To segue into that discussion, we first introduce Merge Sort pseudocode.
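The comparison bound just stated can be checked with a short runnable sketch (Python, not the paper's pseudocode): merging two sorted lists of total length \( n \) takes at most \( n - 1 \) element comparisons.

```python
# Merge two sorted lists while counting element comparisons; for the
# worked example [1, 3, 4] and [2, 6] (n = 5) exactly 4 are needed.

def merge_count(L, R):
    out, comparisons = [], 0
    i = j = 0
    while i < len(L) and j < len(R):
        comparisons += 1
        if L[i] <= R[j]:
            out.append(L[i]); i += 1
        else:
            out.append(R[j]); j += 1
    out.extend(L[i:])   # leftovers are already sorted: no comparisons
    out.extend(R[j:])
    return out, comparisons
```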
### 3.2 Merge Sort Pseudocode
Let’s examine just one merge process to understand how the algorithm functions in pseudocode. Call the “parent” array from which the two partial lists are derived \( A \), and the two partial lists \( L \) and \( R \) for left and right, respectively [3].
\begin{verbatim}
Merge(L,R,A)
    nL = length(L)
    nR = length(R)
    i = j = k = 0
    while (i < nL && j < nR)
        if (L[i] <= R[j])
            A[k] = L[i]
            k++
            i++
        else
            A[k] = R[j]
            k++
            j++
    while (i < nL)
        A[k] = L[i]
        i++
        k++
    while (j < nR)
        A[k] = R[j]
        j++
        k++
\end{verbatim}
The first while loop is the typical execution of the algorithm. Moving through the partial lists, we compare elements to place them into the larger “parent” array. The two subsequent while loops only come into play when the condition of the first while loop becomes false, that is, when \( i \geq nL \) or \( j \geq nR \). This situation occurs when one partial array is exhausted before the other. For instance, if we were merging an already sorted list, the algorithm would place all of the elements of \( L \), the left list, before placing any elements of the right list. The trailing while loops simply place all the “leftover” elements of the non-exhausted array in order, since they are already known to be sorted. This code describes only one reassembly, called “Merge”; the entire Merge Sort algorithm requires a recursive process around it. Let’s now view the recursive portion, which calls on the “Merge” process we just built.
\begin{verbatim}
MergeSort(A)
    n = length(A)
    if (n < 2)
        return
    mid = n/2
    left = array of left half
    right = array of right half
    for i = 0 to mid - 1
        left[i] = A[i]
    for i = mid to n - 1
        right[i - mid] = A[i]
    MergeSort(left)
    MergeSort(right)
    Merge(left,right,A)
\end{verbatim}
Let’s assume for the sake of simplicity that we have an array whose size is a power of 2. That way, as we continue to split, the breaks remain even throughout. If the size is not a power of 2, we would introduce a ceiling function; in practice, for the large lists where this algorithm is most useful, the handful of extra comparisons is all but negligible and tells us nothing new about the fundamental behavior of the algorithm. The first condition of this algorithm checks whether we have hit the base case of a singleton entry. If \( n \geq 2 \) we have work to do, so the next few lines split the “parent” array into left and right components. Once this is done, we recursively call Merge Sort on the two smaller lists. Finally, we call the Merge procedure we previously built to reassemble them into a sorted “parent” array.
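The pseudocode above translates almost line for line into a runnable implementation. The following is a sketch in Python; the names mirror the pseudocode rather than any particular library:

```python
def merge(left, right, a):
    """Merge the two sorted halves `left` and `right` back into `a`."""
    n_left, n_right = len(left), len(right)
    i = j = k = 0
    # Main loop: compare the fronts of the two halves.
    while i < n_left and j < n_right:
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    # At most one of the next two loops runs: copy the leftovers in order.
    while i < n_left:
        a[k] = left[i]
        i += 1
        k += 1
    while j < n_right:
        a[k] = right[j]
        j += 1
        k += 1

def merge_sort(a):
    """Sort the list `a` in place by recursive splitting and merging."""
    n = len(a)
    if n < 2:
        return                       # base case: singleton (or empty) list
    mid = n // 2
    left, right = a[:mid], a[mid:]   # split into halves
    merge_sort(left)
    merge_sort(right)
    merge(left, right, a)            # reassemble in sorted order

data = [4, 3, 1, 6, 2]   # the worked example array
merge_sort(data)
print(data)              # [1, 2, 3, 4, 6]
```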
### 3.3 Merge Sort Run Time
Let \( M(n) \) be the number of comparisons required to sort a list of size \( n \). Then \( M(1) = 0 \) and \( M(2) = 1 \) since a singleton is trivially sorted, and sorting an array of two entries only requires one comparison.
If we consider the case that the array is of even size, that is \( n = 2k \), then \( M(2k) = 2M(k) + 2k - 1 \). Breaking this equation down, we are essentially claiming that the number of comparisons necessary to sort the full list will be the number of comparisons required to sort each half of the list, \( M(k) + M(k) = 2M(k) \), and then also the additional comparisons required to reassemble the list \( 2k - 1 \). This is the same as the Bubble Sort best case \( n - 1 \) where each entry is only compared to one other entry, and comes as the result of knowing that each smaller portion of the array is necessarily sorted before reassembly.
To further analyze the number of comparisons necessary to run the Merge Sort algorithm, we will examine again the case that \( n = 2^m \). If this were
the case, the array would be able to split exactly evenly at each step of the breakdown, so this case is nice to examine [1].
Let \( f(m) \) be the number of comparisons required to sort an array of size \( 2^m \). This size is a special case but the result will be valid asymptotically.
\[
\begin{align*}
f(m) &= 2f(m-1) + 2^m - 1 \\
&= 2\left(2f(m-2) + 2^{m-1} - 1\right) + 2^m - 1 \\
&= 2^2 f(m-2) + 2 \cdot 2^m - 2 - 1 \\
&= 2^3 f(m-3) + 3 \cdot 2^m - 2^2 - 2 - 1 \\
&\vdots \\
&= 2^m f(0) + m \cdot 2^m - 2^{m-1} - 2^{m-2} - \ldots - 1 \\
&= m2^m - (2^{m-1} + 2^{m-2} + \ldots + 1) \\
&= m2^m - (2^m - 1) \\
&= (m-1)2^m + 1
\end{align*}
\]
In the first line, we record that the total number of comparisons is the sum of the comparisons required to sort each half, left and right, plus the number of comparisons needed to reassemble the list, previously shown to be at most \( n - 1 = 2^m - 1 \). We then apply this recursive definition repeatedly, hoping that a pattern emerges to help us generate a closed form. Once the pattern is visible, we substitute \( f(0) = 0 \), sum the geometric series, and do some algebra to make the result more manageable.
Since \( n = 2^m \), we have \( m = \log_2 n \). Therefore \( f(m) \) is roughly \( n \log_2 n \). To be fully rigorous, we should use this technique only to conjecture the form \( f(m) = (m-1)2^m + 1 \) and then prove it by induction, but from this we can gather an approximate run time of \( O(n \log n) \). In what scenarios does this algorithm execute with this efficiency? Let us examine the case scenarios of Merge Sort.
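The closed form can be checked empirically with an instrumented merge sort. The following is a sketch; `count_comparisons` and the worst-case input generator `adversary` are helpers written for this check, not part of the original analysis. The classic worst-case input interleaves the two halves at every level, so every merge of two size-\( k \) lists costs the full \( 2k - 1 \) comparisons. Note the constant term \( +1 \): \( f(1) = 1 \) and \( f(2) = 5 \) both satisfy the recurrence \( f(m) = 2f(m-1) + 2^m - 1 \).

```python
def count_comparisons(a):
    """Merge sort `a` in place, returning the number of element comparisons."""
    n = len(a)
    if n < 2:
        return 0
    mid = n // 2
    left, right = a[:mid], a[mid:]
    count = count_comparisons(left) + count_comparisons(right)
    i = j = k = 0
    while i < len(left) and j < len(right):
        count += 1                      # one comparison per loop iteration
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1
        k += 1
    a[k:] = left[i:] or right[j:]       # copy leftovers (no comparisons)
    return count

def adversary(sorted_xs):
    """Permute a sorted list so that every merge interleaves fully (worst case)."""
    if len(sorted_xs) <= 1:
        return list(sorted_xs)
    return adversary(sorted_xs[::2]) + adversary(sorted_xs[1::2])

for m in range(1, 8):
    n = 2 ** m
    worst = adversary(list(range(n)))
    assert count_comparisons(worst) == (m - 1) * 2 ** m + 1
print("closed form confirmed for n = 2, 4, ..., 128")
```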
### 3.4 Merge Sort Case Scenarios
Using the Merge Sort algorithm, the best and worst case scenarios both have run-time \( O(n \log n) \) [2]. Why is this? For a given number of elements, it takes the same number of splits to reach the base case, and this process is required even if the array is fully sorted to begin with. The only change in how the algorithm behaves is in the number of comparisons required to reassemble the sorted “parent” array. If we are working with a presorted array, the left subarray exhausts by placing all of its elements into the “parent” array before the right subarray has placed a single element. Since the left subarray then contains no more elements to be placed, the pseudocode places all the elements of the right subarray into the parent array without making further comparisons, since that subarray is already known to be sorted, and this is an improvement. The savings in reassembly comparisons, however, only change the constant factor and do not affect the overall growth behavior. Thus the big-O run time remains the same even though the best and worst case scenarios are not identical.
## 4 Conclusion
How much better is a run-time of \( n \log n \) compared to \( n^2 \)? In an array of size 10, Merge Sort costs on the order of \( 10 \log_2(10) \approx 33 \) computational units while Bubble Sort costs \( 10^2 = 100 \). In an array of size 1,000, however, we are comparing roughly 9,966 to 1,000,000 computational units; in an actual application, the lists could be longer yet. We have introduced two very different algorithms for accomplishing the same task, but many, many more exist. Clearly, with vast gains to be made in efficiency, the effort required to select an algorithm design and properly code the process pays off in the long run, even if it is a little laborious in the developmental stages.
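The gap can be tabulated directly (a quick sketch using the base-2 logarithm, matching the comparison-count analysis; the choice of base only changes a constant factor):

```python
import math

# Approximate comparison counts for the two algorithms at several input sizes.
for n in (10, 1_000, 1_000_000):
    merge_cost = round(n * math.log2(n))   # ~ n log2 n
    bubble_cost = n * n                    # ~ n^2
    print(f"n={n}: merge sort ~{merge_cost}, bubble sort ~{bubble_cost}")
```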
## References
Towards a Multi-views Approach for Software Requirement Engineering: Requirements Management Tools
Omer Dawood, Abd-El-Kader Sahraoui
HAL Id: hal-01704437
https://hal.laas.fr/hal-01704437
Submitted on 15 Feb 2018
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Omer Salih Dawood¹, Abd-El-Kader Sahraoui²
¹ Department of Computer Science, College of Arts and Science, Wadi Aldawasir
Prince Sattam Bin Abdulaziz University, KSA
Sudan University of Science and Technology, Khartum, Sudan
o.dawood@psau.edu.sa, omercomail@gmail.com
² LAAS-CNRS, Université de Toulouse, CNRS,UT2J, Toulouse, France
ABSTRACT
This paper addresses requirements methods and associated tools. It provides a general overview of requirement engineering approaches, proposes a multi-view approach, and presents a detailed comparison of the tools and techniques used to manage requirements. The comparison is based on five criteria: requirement traceability, requirements integration, requirement prioritization, requirement status, and customer satisfaction. This comparison makes it easier to assess requirement tools and techniques. The paper concentrates on requirement tracing and proposes to develop a model that can handle and manage requirement traceability in large and complex systems.
Keywords
Requirement Engineering; Traceability; Requirements Tools; RTM; DOORS.
1. INTRODUCTION
Requirement engineering is the first step in software development. It has several stages: elicitation, analysis, validation, and requirement management. It aims to collect and manage requirements well, ensuring that all requirements are gathered and analyzed in a way that allows producing products and services that satisfy the desired quality attributes [1]. Requirements management (RM) is the process of managing changes in the requirements throughout the requirement engineering process. It contains activities related to the identification of changes, the maintenance of changes, traceability, and change management of requirements [2]. Requirements management tools are used to manage requirements, show the relationships between them, and so on. The following sections introduce tools and technologies used to manage requirements and perform a simple comparative study between three of them: DOORS, RTM, and Volere. Through this comparative study we want to identify the properties, capabilities, and problems of each, so that the tools and methodology can be enhanced toward a tool that satisfies all quality attributes; we also provide multiple views on software requirement engineering and ideas for research in this area.
The paper concentrates on requirement engineering traceability.
2. Volere Requirement Specifications:
Volere is used for requirement specification. It was developed to manage and trace requirements using the Volere shell. The shell consists of the main attributes needed when specifying a requirement, such as requirement ID, type, priority, and dependencies between requirements.

Volere divides requirements into five types; each type has subcomponents, as shown in Figure 1.
1. Functional requirements are the basic functions of the system that perform its core operations. Concrete means are used to measure these functional requirements.
2. Non-functional requirements are the properties that specify the behavior of functions, such as usability, look and feel, performance, etc.
3. Project constraints describe how the product is delivered and fits into the world. Constraints involve many things, such as required interfaces with existing software and hardware components, business practices, and a budget defined at the start of the project or a fixed delivery date [3].
4. Project drivers are the related business forces that drive or control the project development process. The purpose of the product is a project driver for all stakeholders in the system, at different levels and for different reasons.
5. Project issues define the conditions under which the project will be done [3].
3. Dynamic Object Oriented Requirements (DOORS)
DOORS is a requirements management tool developed by Telelogic to support the software engineering lifecycle. It is mentioned in several papers and is often referred to as a very capable requirements management tool [4]. It allows many users to work together at the same time, enabling access to a database server that contains information about requirements and links [4].

The DOORS database contains many projects, and each project has its own users and information model. The information model contains a set of modules used to keep information about the actual requirements and links. DOORS has three main modules:
1. Formal modules: requirements have many artifacts, and each artifact contains smaller objects. A formal module is used to store information about the representation of a requirement artifact.
2. Link modules: each formal module has relationships, and these relationships are stored in the link module.
3. Descriptive modules: this module is not used to store actual requirements; actual requirements are now stored in formal modules [4]. An object consists of the following [5]:
1. General: describes the heading, short text, and object text values of the object
2. Access: manages access rights to the object
3. History: used as a log of changes to the object
4. Attributes: the values of the object's attributes
5. Links: used to handle the relationships with other objects
4. REQUIREMENT TRACEABILITY MATRIX (RTM)
Requirement engineering has two parts. The first is requirement development, which is responsible for requirement elicitation, analysis, specification, and validation; the Software Requirement Specification (SRS) is the document produced as its output, containing the requirement specification ready for the design phase. The second part is requirement management, which is responsible for managing requirements and itself has two parts: change management and traceability. The traceability process produces the Requirement Traceability Matrix (RTM) as output [6]. The RTM handles requirements and the relationships between them in a single document [7].
Requirement Traceability
Traceability is a recognizable association between two or more logical entities such as requirements, verifications, elements of a system, or tasks. The two main types of traceability are horizontal and vertical, but there are other subtypes [6].

Vertical Traceability: shows the source of items and traces these items to the Work Breakdown Structure (WBS), to the project team, and finally to the customers. It ensures that a requirement can be traced until it is satisfied [6].

Horizontal Traceability: shows the relationship between related items and work groups. It is aimed at avoiding conflicts.

Bidirectional Traceability: an association between two or more logical entities that is discernible in either direction. It effectively manages the relationship between requirement sources and the requirements of a product and its components; in other words, bidirectional traceability runs from requirements to the end product and vice versa [6].

Indirect Traceability: there are two directions for traceability. The first is forward traceability, which traces the requirements to their source; the second is backward traceability, which traces from the product back to the requirement source [6].

Requirement Traceability Matrix (RTM): a matrix used to handle the complete set of user and system requirements, or a part of the system.

Requirement traceability is helpful in software engineering activities such as requirements validation and impact analysis, and is also useful in tracking the logical sequences and trade-offs of each requirement.
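As an illustration of what an RTM captures, the requirement-to-artifact links can be sketched as a simple mapping that supports both forward and backward traces. This is a toy sketch; the requirement IDs and artifact names are invented for illustration and do not come from any of the tools discussed:

```python
# Toy RTM: each requirement ID maps to the design/test artifacts covering it.
rtm = {
    "REQ-1": ["DES-A", "TC-01"],
    "REQ-2": ["DES-A", "TC-02"],
    "REQ-3": ["DES-B", "TC-03"],
}

def forward_trace(req_id):
    """Forward traceability: from a requirement to the artifacts covering it."""
    return rtm.get(req_id, [])

def backward_trace(artifact):
    """Backward traceability: from an artifact to the requirements it covers."""
    return [req for req, artifacts in rtm.items() if artifact in artifacts]

print(forward_trace("REQ-2"))    # ['DES-A', 'TC-02']
print(backward_trace("DES-A"))   # ['REQ-1', 'REQ-2']
```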
<table>
<thead>
<tr>
<th>No.</th>
<th>Comparison Topic</th>
<th>Volere</th>
<th>DOORS</th>
<th>RTM</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Requirement Traceability</td>
<td>Known as dependability</td>
<td>Support different traceability types</td>
<td>Support different traceability types</td>
</tr>
<tr>
<td>2</td>
<td>Requirements Integration</td>
<td>No requirement integration</td>
<td>Support integration, on different levels</td>
<td>Not fully supporting integration</td>
</tr>
<tr>
<td>3</td>
<td>Requirement Prioritizing</td>
<td>No priority</td>
<td>Support priority</td>
<td>Support priority</td>
</tr>
<tr>
<td>4</td>
<td>Requirement Status</td>
<td>No requirement status</td>
<td>Has requirement status</td>
<td>Has requirement status</td>
</tr>
<tr>
<td>5</td>
<td>Customer Satisfaction</td>
<td>Has customer Satisfaction</td>
<td>——</td>
<td>Depend on RTM Application</td>
</tr>
</tbody>
</table>
Table (1): Comparison of requirements tools and techniques
5. RELATED WORK
Renuka et al. [8] designed a novel methodology for the traceability of onboard software, known as the Software Requirements to Design Traceability Technique (SoRDeTT). Their methodology is based on two templates, taking the Software Requirements Specification (SRS) and the Software Design Document (SDD) as input. SoRDeTT represents a common template for both requirements and design, used to handle the data and information from these documents. There are two trace items: the SRS Trace Item (SRSTI) is the template populated with data from the SRS, while the SDD Trace Item (SDDTI) is filled with data from the SDD. The methodology was applied to a satellite system because it is complex and contains many subsystems; the onboard software requirements and the software design of many subsystems are represented in the SRS and SDD. The main purpose of the methodology is to ensure that the software design conforms to the SRS. It works as follows: data is captured from the SRS and SDD to build the SRSTI and SDDTI; the SRSTI and SDDTI are compared; if a one-to-one mapping is not found, a mismatch is tagged and a mismatch report is generated; the inconsistency is analyzed to make a correction; and finally SoRDeTT is repeated for all new changes.
Filho et al. [9] propose a traceability framework aimed at visualizing traceability among different tools. They assume the models are represented in XML to support the heterogeneity of tools and models, since XML has become the de facto standard for data interchange and is supported by many tools, and they use XQuery as the standard for expressing traceability rules. A model generated in a native format is converted to an XML-based model by a Model Translator, and the XML models and rules are used as input to a Traceability_Completeness_Checking engine. The engine generates the traceability relations between the models and identifies missing elements based on the rules, using a WordNet component to identify synonyms among element names in the models. A Traceability_Relations_Missing_Elements document holds the traceability relationships and the identified missing elements. This document is important because it preserves the original models, allowing them to be used by other applications and tools, and it is fed back into the engine to support the generation of dependent traceability relations. They developed a simple prototype tool that shows the traceability relations, and they showed that many traceability relations can be generated.
Fabíola and Michel [10] extend the SysML requirements diagrams, concentrating on the traceability of both functional and non-functional requirements; real-time systems can be modelled with this extension. The SysML metamodel is extended with new relationships and stereotypes, and the proposed metamodel is applied to a set of requirements for the specification of a Road Traffic Control System. SysML is a UML profile, and the class diagram stereotype is extended with new attributes. The approach decomposes the requirements into smaller related elements in the form of a hierarchy so that complexity is managed early. The hierarchy is based on a master/slave relationship that allows reusing requirements. They propose seven new stereotypes to extend the relationships: the copy relationship is represented by the master/slave relationship; the derive relationship (deriveReqt); the satisfy relationship shows how a model satisfies requirements; a test case is represented by the verify relationship; the refine relationship shows how a model element is used to refine a requirement; and the trace relationship acts as a general-purpose relationship.
6. Ontology for requirements elicitation
This work is based on a research roadmap in systems engineering [11] that will be partially integrated into our work: a requirements ontology, if well defined and formalised, can give rise to better requirements tools. This can lead to changes in the Volere requirements templates.
Problem definition. It is very important to build requirements elicitation on the form used for requirements expression. With the evolution of the Internet and electronic commerce, future business services will often be delivered by autonomous and collaborating parts or software agents, inside or across organizational boundaries, through negotiation and information exchange over a distributed data network. Efforts are therefore needed to develop collaborative requirement engineering as an associated need.
The semantics of different information sources are captured by their ontologies, i.e., both the terms and the relationships between them. In many applications the intended meaning of a term is often implicit, and understanding it in a collaborative environment necessarily relies on mutual agreements and understandings. In an open environment, mutual agreement is hard to achieve. It is therefore very important for the vocabulary that describes the domain model to be specified and maintained in such a way that other systems can process it with minimum human intervention. Ontologies are used to manage this task, and ontology research now receives considerable attention from both academia and industry.
It is generally very difficult to build a machine-definable ontology (vocabulary). The semantics of a term vary from one context to another and across different stakeholders. Ideally, we need an approach that reduces the problem of knowing the contents and structure of many information resources to the problem of knowing the contents of a specific domain ontology that a user familiar with the domain easily understands.
Not all requirements are known at the start of system development. They cannot be specified completely up front in one voluminous document; rather, they evolve during the analysis phases of a project and beyond. Requirements elicitation involves all stakeholders: users, developers and customers. All see their views mature in the way the requirements are expressed, from this step through maintenance; the added value acquired during elicitation is used to improve the system, instead of maintaining the myth that the requirements remain static.
Requirement elicitation is one part of the requirement engineering process and represents one of its first critical phases; the requirement process is the first phase in systems development.

Requirements elicitation is one of the most important steps in a requirements engineering project. Experience over the last decades has shown that incorrect, incomplete or misunderstood requirements are the most common causes of poor quality, cost overruns and late deliveries. The ability to use an adequate approach, through a method or systematic process, is therefore one of the core skills in systems development. The GAO survey demonstrates this through figures on nine projects totaling about $7 million.
Terminology (CMU): many terms can be used to define and describe the process of understanding system requirements. Requirements engineering can be used as a general term covering all activities related to requirements. In fact, requirements engineering consists of four specific processes.
Requirement elicitation: the first process, which allows one to understand, discover, reveal, and articulate the requirements of the customers, buyers, and users of a system.
Requirements analysis: this process builds on requirement elicitation. It reasons over the elicited requirements and involves activities such as checking requirements for conflicts and inconsistencies, combining related requirements, and identifying missing requirements.
Requirements specification: in this process the requirements are recorded in some form; this may be done in natural language, symbolically, formally, or diagrammatically. The term also denotes the document produced by the process.
Research approach. The suggested research approach involves the development of a shared ontology, which can take the form of a document or a set of machine-interpretable specifications. Among contemporary research projects that deal with ontology-based approaches to resolving semantic issues, the following seem especially appealing.
Requirements validation: in this process the requirements are confirmed with the users and customers of the system to ensure that the specified requirements are valid, complete and correct.
In practice, these four processes cannot be strictly separated and performed sequentially; they are interleaved and performed iteratively.
The term elicitation is not universally accepted for the process described, and there is no exactly corresponding term in other languages; in French, for example, the terms acquisition and capture are often used, while some companies speak of gathering, expressing, or formulating requirements. Each term has a different connotation: acquisition, for instance, supposes the requirements are already there, like sensor values acquired by the I/O system of a computer. Whatever the word used, all of these terms implicitly address elicitation.
i. Common domain model: although participating agents share a common domain that is the basis of their cooperation, they often have different views of it. For them to collaborate, a common domain model is required to facilitate their communications.
ii. Different levels of abstraction: a flexible enterprise requires different levels of information abstraction. At the agent level, only high-level business process and service concepts are needed to form service-level agreements, i.e., contracts. At the task scheduling level, processes and services must be viewed in terms of individual tasks and task interfaces (methods and conditions). At the execution level, data representation must be explicit so that data can be transformed and fused correctly.
iii. Dynamic information integration: the underlying information systems are potentially large, and new services may require only parts of them to be integrated. Dynamic information integration is required because which parts must be integrated, and for what purposes, cannot be determined beforehand.
iv. Service and contents description: agent services and information system contents must be formally described. The descriptions must be accessible and meaningful to all participating agents.
v. Information heterogeneity reconciliation: as flexible enterprises operate in an open environment, participating agents often use conflicting terms. For them to collaborate, this heterogeneity must be reconciled.
Expected results. The suggested research should yield several needed and useful outcomes.
i. Developing a requirements domain ontology environment for effective and efficient requirements elicitation would represent a considerable advance in requirements engineering. This will necessarily involve identifying the support environments needed to assist ontology designers with the tasks involved in ontology management. It is envisaged that such an environment would maintain an ontology repository that can be accessed during the design phase; to enable this, tools will be available to browse and reuse the terms from the repository. When new terms need to be added, checks should be performed to ensure that they do not cause inconsistency in the repository. The environment should also have a set of tools that help extract ontological information embedded in existing systems.
ii. Develop appropriate methods and tools to support the integration of process models and information systems from multiple organisations during requirements change.
iii. Extend XML-based requirements representations for data sources and for ontology extraction and retrieval. Integrate the ontology for requirements elicitation into a general framework and context to support systems engineering in a computer-supported cooperative work environment.
7. RESEARCH OBJECTIVES
1. Gain a better understanding of requirements management tools and techniques.
2. Evaluate the tools discussed so that choosing a requirements management tool becomes easier when one is needed.
3. Improve software quality by determining each tool's capabilities, thereby minimizing the risk of requirements management problems.
4. Evaluate requirements traceability in each tool and technique.
8. RESEARCH PROBLEM
The literature review shows that there is considerable research on requirements traceability, but one important topic, requirements validation and verification through traceability, is not covered in detail. This research is expected to develop a requirements traceability model or tool that enhances and improves software quality by tracing requirements in the best possible manner.
9. DISCUSSION
Requirements engineering is the first step of software development; its aim is to collect and document requirements. The cost of detecting and managing errors in earlier stages is lower than in later stages. Traceability is very important for handling and managing requirements because it allows requirements to be tracked easily. Much research has covered this area, but some gaps remain: previous work covered traceability issues only concisely and concentrated on small to medium systems. This research aims to fill the gaps in previous studies and to develop a requirements traceability model that can handle traceability in large and complex systems.
The new model is expected to yield sound requirements modeling techniques that produce software of high quality, and to make the requirements of complex systems manageable in an easy way.
10. CONCLUSION
This paper covered the concept of requirements engineering and compared some of the current tools and techniques used to manage requirements. Within requirements engineering we concentrated on requirements traceability because it is central to managing and handling requirements. Many previous studies were reviewed and summarized, and several areas of interest were discussed. The paper also introduced open research problems such as requirements traceability and requirements ontology. It is expected that the problems of requirements elicitation, verification, and validation can be addressed through requirements traceability.
11. ACKNOWLEDGMENTS
The authors, and mainly the second author, are indebted to the many colleagues who contributed directly or indirectly to this work, in particular the late Professor Andy Sage of George Mason University and Professor Dennis Buede of New Jersey University.
Optimizing Splunk Knowledge Objects
Martin Müller
Professional Services Consultant
Consist Software Solutions GmbH
Disclaimer
During the course of this presentation, we may make forward looking statements regarding future events or the expected performance of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results could differ materially. For important factors that may cause actual results to differ from those contained in our forward-looking statements, please review our filings with the SEC. The forward-looking statements made in the this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, this presentation may not contain current or accurate information. We do not assume any obligation to update any forward looking statements we may make.
In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionality described or to include any such feature or functionality in a future release.
Why are we here?
New Search
tag=authentication tag=failure
Parsing job...
Why are we here?
“Oversized litsearch is the largest performance problem we face in our environment.”
- Jacob Wilkins, General Electric
Why are we here?
- Observed search run time progression during development
- Massive growth in job startup time
- Knowledge Object optimization reduced that overhead by 80%
Who’s that guy?
- Professional Services Consultant, Certified Architect, Splunk-It-All
- Five years at EMEA Splunk Partner
- Heavy Splunker since 2012
- Get in touch with me: martin.mueller@consist.de
- Give karma at Splunk Answers: martin_mueller
- Hang in #splunk on Efnet: martin_m
Session Objectives
- Understand how Splunk turns a search into results
- Learn how to recognize if you have a problem (Spoiler Alert: You do!)
- Use this to your advantage when specifying search-time knowledge
Covered knowledge objects:
- Fields
- Reverse Lookups
- Eventtypes
- Tags
Let’s dive in...
...but first, to the Job Inspector!
- **normalizedSearch**: Ultra-verbose stage of search assembly
```
normalizedSearch
litsearch index=_audit ( action=search OR ( sourcetype=audittrail ) ) |
litsearch index=_audit action=search | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"
```
- Performance stats, e.g. time spent assembling the normalizedSearch
```
15.91 dispatch.createdSearchResultInfrastructure
```
- Links to search.log to look for more hidden performance hogs
Calculated Fields (1)
- TA-splunk, props.conf: `[audittrail]`
EVAL-action=case(condN, valN, 1=1, action)
- Splunk’s assumption about looking for indexed tokens doesn’t hold
- No way to translate the eval expression into tokens
- Plain search: `index=_audit action=search`
- normalizedSearch: `index=_audit (action=search OR (sourcetype=audittrail))`
- Load all events for that stanza plus events with the token, filter later
Calculated Fields (2)
- What if you’re not searching for that sourcetype?
```
index=_internal sourcetype=splunk*
action=logout
index=_internal sourcetype="splunk*"
(action=logout OR (sourcetype=audittrail))
```
- Splunk expands each segment of your search on its own
- For each calculated field, add stanza to every search for that field
- This is only the beginning of normalizedSearch overhead!
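The per-segment expansion described above can be modeled with a short sketch. This is an illustrative model, not Splunk's implementation; the `CALC_FIELDS` table and the function names are invented for the example.

```python
# Illustrative model of per-segment search expansion for calculated fields.
# CALC_FIELDS is an invented stand-in for props.conf EVAL- definitions:
# field name -> stanzas (sourcetypes) that define an EVAL for that field.
CALC_FIELDS = {"action": ["audittrail"]}

def expand_segment(segment):
    """Expand one segment on its own, with no context from other segments."""
    field, sep, _value = segment.partition("=")
    if not sep or field not in CALC_FIELDS:
        return segment
    stanzas = " OR ".join(f"(sourcetype={s})" for s in CALC_FIELDS[field])
    # Events of the defining sourcetype must be loaded and filtered later,
    # because an eval expression cannot be translated into indexed tokens.
    return f"({segment} OR {stanzas})"

def normalize(search):
    return " ".join(expand_segment(seg) for seg in search.split())

print(normalize("index=_internal action=logout"))
# index=_internal (action=logout OR (sourcetype=audittrail))
```

Because each segment is expanded in isolation, the `audittrail` stanza leaks into this search even though it never asks for that sourcetype.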
Field Aliases
- Sourcetype A has field `username`, sourcetype B has field `uid`, ...
- Field aliases can normalize this to `user` over all sourcetypes
- `sourcetype=A user=martin` yields this normalized search:
`sourcetype=A ((sourcetype=A AND (username=martin)) OR (sourcetype=B AND (uid=martin)) OR (sourcetype=audittrail AND (uid=martin))) OR (user=martin)`
- All field aliases for all sourcetypes are used in all searches!
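The alias blow-up works the same way and can be sketched as follows. The `ALIASES` table is invented for illustration; it is not how Splunk stores props.conf internally.

```python
# Illustrative model of field-alias expansion (not Splunk internals).
# ALIASES maps a normalized field name to every (sourcetype, native field)
# pair that aliases to it; the table contents here are invented.
ALIASES = {"user": [("A", "username"), ("B", "uid"), ("audittrail", "uid")]}

def expand_alias(segment):
    field, sep, value = segment.partition("=")
    if not sep or field not in ALIASES:
        return segment
    # Every alias for this field, from every sourcetype, is OR'd in --
    # regardless of which sourcetype the rest of the search asks for.
    per_st = " OR ".join(
        f"((sourcetype={st}) AND ({native}={value}))"
        for st, native in ALIASES[field]
    )
    return f"({per_st} OR ({segment}))"

print(expand_alias("user=martin"))
```

With 19+ aliases per field, as in the Enterprise Security example below, this single segment alone produces kilobytes of normalizedSearch.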
A real-world example
- Splunk App for Enterprise Security 3.3.1
- The TAs shipped define 19 field aliases for user
- Your environment will have additional TAs
- Watch your normalizedSearch strings and search startup time grow
- Let’s not forget the upside though: Without standardized field names, searching over different sourcetypes would be impossible
- Are you building a TA? Extract standardized field names directly!
A real-world example
- Searching for user=martin yields 2kB of normalizedSearch:
(((sourcetype="*") AND ((username=martin))) OR ((sourcetype=A) AND ((username=martin))) OR ((sourcetype=B) AND ((uid=martin))) OR ((sourcetype="WMI:UserAccounts") AND ((Name=martin))) OR ((sourcetype="WinEventLog:Application:sophos") AND ((User=martin))) OR ((sourcetype="WinEventLog:SophosPatch") AND ((User=martin))) OR ((sourcetype="audittrail") AND ((uid=martin))) OR ((sourcetype="aws:cloudtrail") AND (("sourceIdentity.userName"="martin") OR "userIdentity.sessionContext.sessionIssuer.userName"="martin") OR "userIdentity.userPrincipalName"="martin") OR ((sourcetype="cef") AND ((suser="martin"))) OR ((sourcetype="cisco:sourcefire:appliance:syslog") AND ((User="martin"))) OR ((sourcetype="f5:bigip:asm:syslog") AND ("username.martin"))) OR ((sourcetype="f5:bigip:management:username.management") AND (("get_fullname=martin"))) OR ((sourcetype="fs_notification") AND ((uid=martin))) OR ((sourcetype="oracle:session") AND ((USERNAME=martin))) OR ((sourcetype="oracle:audit:xml") OR (sourcetype="oracle:audit:text") OR ((USERNAME=martin)))) OR ((sourcetype="sophos:appcontrol") AND ((UserName=martin))) OR ((sourcetype="sophos:devicecontrol") AND ((UserName=martin))) OR ((sourcetype="sophos:firewall") AND ((UserName=martin))) OR ((sourcetype="sophos:sec") AND ((UserName=martin))) OR ((sourcetype="sophos:threat") AND ((UserName=martin))) OR ((sourcetype="sophos:utm:ips") AND ((USERNAME=martin))) OR (user=martin) OR (sourcetype="cisco:asa") OR (sourcetype="cisco:fwsm") OR (sourcetype="cisco:pix") OR (sourcetype="oracle:audit:text") OR (sourcetype="oracle:audit:xml")
NOT PRETTY!
Fields Recap
• Each search segment is expanded on its own without context
• props.conf for one sourcetype will radiate into normalizedSearch of other sourcetypes when field names match
• Avoid calculated fields and field aliases entirely where possible
– Extract fields using standardized names in the first place!
– Some calculated fields can be replaced with lookups
• Monitor their effects where unavoidable
• Both are fine for fields you only use as output
Reverse Lookups
How reverse lookups work
- **Automatic lookup in props.conf:**
```
[splunk_web_access]
LOOKUP-ul = user_location user OUTPUT location
```
- **Reverse lookup:**
Search for `location` rather than `user`:
```
index=_internal location="Las Vegas"
```
- **Splunk translates that into this normalizedSearch:**
```
index=_internal
(((sourcetype=splunk_web_access) AND
((user=Martin) OR (user=Tom)))
)) OR (location="Las Vegas")
```
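The inversion step can be sketched in a few lines. The lookup table below is an invented in-memory stand-in for the `user_location` lookup file, and the function is a simplified model of the behavior, not Splunk's code.

```python
# Sketch of the reverse-lookup inversion: a filter on an OUTPUT field
# becomes an OR over every INPUT value that maps to it.
USER_LOCATION = [("Martin", "Las Vegas"), ("Tom", "Las Vegas"), ("Anna", "Kiel")]

def reverse_lookup_filter(location):
    """Invert the lookup and build the simplified normalizedSearch."""
    users = [u for u, loc in USER_LOCATION if loc == location]
    ors = " OR ".join(f"(user={u})" for u in users)
    return (f"(((sourcetype=splunk_web_access) AND ({ors})))"
            f' OR (location="{location}")')

print(reverse_lookup_filter("Las Vegas"))
```

Each `user=...` term produced here is then itself subject to alias and calculated-field expansion, which is where the real growth comes from.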
Actually, I lied...
index=_internal (((sourcetype=splunk_web_access) AND (((((sourcetype=A) AND ((username=Martin))) OR ((sourcetype=B) AND ((uid=Martin))) OR ((sourcetype=audittrail) AND ((uid=Martin))))) OR (user=Martin))) OR (((((sourcetype=A) AND ((username=Tom))) OR ((sourcetype=B) AND ((uid=Tom))) OR ((sourcetype=audittrail) AND ((uid=Tom))))) OR (user=Tom)))))) OR (location="Las Vegas")
• Despite defining the lookup on splunk_web_access, other sourcetypes’ props.conf settings radiate into this search
Expanding to more sourcetypes
- Splunk’s `_internal` index has seven sourcetypes with a `user` field
```plaintext
index=internal ((((sourcetype=scheduler) AND (((sourcetype=A) AND ((username=Martin))) OR ((sourcetype=B) AND ((uid=Martin)))) OR ((sourcetype=audittrail) AND ((uid=Martin))) OR (user=Martin))) OR (((sourcetype=A) AND ((username=Tom))) OR ((sourcetype=B) AND ((uid=Tom))) OR ((sourcetype=audittrail) AND ((uid=Tom))) OR (user=Tom))) OR ((sourcetype=splunk_btool) AND (((((sourcetype=A) AND ((username=Martin))) OR ((sourcetype=B) AND (uid=Martin))) OR ((sourcetype=audittrail) AND (uid=Martin))) OR (user=Martin))) OR (((sourcetype=A) AND ((username=Martin))) OR ((sourcetype=B) AND ((uid=Martin))) OR ((sourcetype=audittrail) AND (uid=Martin))) OR (user=Martin)))) OR (location="Las Vegas")
```
NOT PRETTY!
A location with more than two users?
- 50 users produce a 72kB normalizedSearch that broke PowerPoint
- Noticeable overhead during Parsing Job... phase
- That’s with three field aliases and no calculated fields – imagine 20+!
- Above 50 values per lookup Splunk will revert to "classic" behavior: Load all events, filter later
Mitigation strategies (1)
- Subsearch using inputlookup
index=_internal [inputlookup user_location | search location="Las Vegas" | fields user]
- Removes the per-sourcetype duplication
- Lets you choose between reverse lookups and *classic* behavior
- Ignores the configured knowledge per sourcetype
- More effort required to write and maintain searches
- Not eventtype-compatible
- Subsearch overhead
Mitigation strategies (2)
- Define the per-sourcetype automatic lookup using sourcetype-specific **input** fields
```
LOOKUP-ul = user_location user AS username
OUTPUT location
```
✔ Removes the per-alias duplication
✔ Transparent to the search and user
▶ More effort required to write and maintain knowledge objects
▶ Retains the per-sourcetype duplication
Removed 80% of key-value pairs from the normalizedSearch!
Mitigation strategies (3)
- Define the per-sourcetype automatic lookup using sourcetype-specific output fields
LOOKUP-ul = user_location user OUTPUT location AS sourcetype_location
- Removes the per-sourcetype duplication
- Not transparent at all
- More effort required to write and maintain knowledge objects
- Only really viable if hidden behind eventtypes and/or tags
- Retains the per-alias duplication
Mitigation strategies (4)
- Replace per-sourcetype lookups with broader props.conf stanzas
- Wildcards on source or host
`[source::*access.log*]`
- Unofficial: Wildcards on sourcetype
`[(?:){0}splunk*]`
- ✔️ Removes the per-sourcetype duplication
- ✔️ Transparent to the search and user
- ⚠️ Sourcetype wildcards are neither documented nor supported
- ⚠️ Retains the per-alias duplication
70% key-value pair reduction!
Indexed tokens footnote
- The normalizedSearch generated by reverse lookups can be efficient:
```
index=_internal location="Las Vegas"

index=_internal
(((sourcetype=splunk_web_access) AND
((user=Martin) OR (user=Tom))
)) OR (location="Las Vegas")
```
- But: Splunk is looking for a literal `location="Las Vegas"`
- Watch out for `location=0` or similar values that aren't unique-ish
- This can blow up your scanCount and search duration
- More on dealing with indexed tokens after the end of the deck
How eventtypes work
- Store a search filter or fragments thereof in a reusable box
- No pipes, no subsearches
- Run search and see `searchCanBeEventType` in Job Inspector
- `eventtype=foo` expands to the stored search fragment
- `eventtype=f*` expands to an OR’d list of matching eventtypes
- Events that match an eventtype have their `eventtype` field set, regardless of whether the eventtype was used in the search or not
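The expansion rules above can be sketched with a small model. The eventtype table is invented for illustration (the first entry mirrors the TA-splunk example later in this deck); it is not how Splunk stores eventtypes.conf.

```python
import fnmatch

# Invented eventtype table: name -> stored search fragment.
EVENTTYPES = {
    "splunk_access": 'index=_audit "action=login attempt" NOT "action=search"',
    "splunk_search": "index=_audit action=search",
}

def expand_eventtype(pattern):
    """eventtype=foo -> stored fragment; eventtype=f* -> OR'd list of matches."""
    matches = sorted(n for n in EVENTTYPES if fnmatch.fnmatch(n, pattern))
    return "(" + " OR ".join(f"({EVENTTYPES[n]})" for n in matches) + ")"

print(expand_eventtype("splunk_access"))
print(expand_eventtype("splunk*"))  # wildcard: OR of both matching eventtypes
```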
What are eventtypes good at?
- Two different systems likely don’t log login attempts the same way
- Define eventtypes for each system, search on eventtypes
- Tag your eventtypes and search on tags
- Configured knowledge simplifies searches
- Great way to hide complexity from the searcher
- Add systems to existing searches without touching searches
- Even when not searching on eventtypes, looking at the `eventtype` field helps quickly understand results
Splunk login example
- **TA-splunk, eventtypes.conf:**
```
search = index=_audit "action=login attempt" NOT "action=search"
normalizedSearch: ((index=_audit "action=login attempt" NOT "action=search"))
```
- **Note how Splunk chose not to use** `action="login attempt"`!
- Avoids the wrath of calculated fields and aliases in the search
- Search relies on structure of raw events instead of field extractions
- The results contain the CIM-compatible `action` regardless
How tags work
- Give a set of `field=value` pairs a common name
- No wildcarded `field=v*` – can be worked around with tagged eventtypes
- `tag=foo` expands to the list of `field=value` pairs individually
- `tag=f*` expands to an OR’d list of matching tags
- Events that match a tag have their `tag` field set accordingly
- For each tagged `field`, additionally set `tag::field`
What are tags good at?
- Homogenize system-specific values to allow unified searches
- Great in combination with eventtypes:
- Eventtypes define system-specific searches
- Tags on those eventtypes provide a common interface
- Searches on those tags don’t need to know the systems particularly well
- Also great in combination with normalized field names and values
- The unified searches find events over many systems
- The returned results also provide homogenous data back to you
- That’s the Splunk Common Information Model in a nutshell
- Further reading at [http://docs.splunk.com/Documentation/CIM](http://docs.splunk.com/Documentation/CIM)
Splunk login example
- **TA-splunk, tags.conf:** `[eventtype=splunk_access]`
application = enabled
authentication = enabled
- **The search** `tag=application tag=authentication` **yields**
```
(((index=_audit "action=login attempt" NOT "action=search")))
(((index=_audit "action=login attempt" NOT "action=search")))
```
- **The eventtype is included twice!**
How tags really work
- Search for `tag=application` `tag=authentication`
- Splunk won’t look for `field=value` pairs matching both tags
- Splunk will treat the search like this:
`(tag=application) (tag=authentication)`
- Each tag is expanded individually
- `field=value` pairs will be included once per matching tag
- This can lead to even larger `normalizedSearch` strings!
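The duplication is easy to reproduce in a sketch. The tags table is invented but mirrors the TA-splunk example on the previous slide, where both tags point at the same eventtype; this is a model of the behavior, not Splunk's tag code.

```python
# Invented tags table: tag name -> field=value pairs carrying that tag.
TAGS = {
    "application":    ["eventtype=splunk_access"],
    "authentication": ["eventtype=splunk_access"],
}

def expand_tags(search):
    """Each tag=... segment is expanded on its own, never jointly."""
    out = []
    for seg in search.split():
        if seg.startswith("tag="):
            pairs = TAGS.get(seg[4:], [])
            out.append("(" + " OR ".join(pairs) + ")")
        else:
            out.append(seg)
    return " ".join(out)

expanded = expand_tags("tag=application tag=authentication")
print(expanded)
# The shared eventtype appears once per matching tag:
print(expanded.count("eventtype=splunk_access"))  # 2
```

With many tags mapping to overlapping eventtypes, this per-tag expansion multiplies quickly, which is the "Tag Expansion Explosion" on the next slide.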
A real-world example
- Splunk_TA_Oracle defines a handful of tagged eventtypes
- **Four match** `tag=database tag=instance tag=stats`
- Expanding each tag on its own yields sixteen eventtypes!
- Every TA is influenced by every other TA: „Tag Expansion Explosion“
Mitigation Strategies
- Avoid long lists of tags mapping to the same field=value
- Especially with eventtypes and reverse lookups
- Use distributive properties to reduce tag-eventtype redundancy
- Instead of tagging every Splunk eventtype with application, consider tagging sourcetype, host, etc. with application
- Instead of tagging special eventtypes for admin users with privileged, consider tagging those users or a reverse lookup field identifying them
- Look for what actually defines the tag in the real world
- Charm Splunk into optimizing how tags are expanded 😊
Wrapping up
Dos and Don’ts
⚠ Don’t stop using field aliases, calculated fields, reverse lookups, etc.
⚠ Don’t compromise maintainability for small gains
✔ Do take a good look at your environment
✔ Do identify and improve real performance hogs
✔ Do scope knowledge object sharing as narrowly as possible
✔ Do clean up unused knowledge objects and TAs
✔ Do keep monitoring as your knowledge object world grows
Q&A
What Now?
Related breakout sessions and activities...
- You have access to your Splunk at .conf? Talk to me for a quick look!
- Grab the app: https://splunkbase.splunk.com/app/2871
- Duane & George: Beyond the Lookup Glass (Tuesday)
- Amrit & Jag: How splunkd Works (Tuesday)
- Duncan & Julian: Search Efficiency Optimization (Tuesday)
- Niklas: How to use CIM to Gain Security Awareness (Wednesday)
- Dritan: Notes on Optimizing Splunk Performance (later today!)
THANK YOU
Fields: Optimizations Beyond Litsearch
Fields
“Let all values be indexed tokens, for indexed tokens power fast searches.”
- Splunk, late 2000s
Job Inspector continued
- base lispy: How did Splunk crawl its index for events?
- eventCount / scanCount: How efficient was the lispy-induced crawl?
This search has completed and has returned 65 results by scanning 67,296 events in 6.411 seconds.
The following messages were returned by the search subsystem:
```
DEBUG: Configuration initialization for C:\dev\splunk_install\etc took 246ms when dispatching a search (search ID: 1437344782.517)
DEBUG: Subsearch evaluated to the following search expression: splunk
DEBUG: base lispy: [ AND index::_internal splunk ]
DEBUG: search context: user="admin", app="search", bs-pathname="C:\dev\splunk_install\etc"
```
(SID: 1437344782.517) search.log
- limits.conf: [search_info] infocsv_log_level=DEBUG
How Splunk searches for field values (1)
```
index=_internal group=tpool
```
- Assume a field value is present as indexed tokens
- Load events containing those indexed tokens anywhere
```
[ AND index::_internal tpool ]
```
- Apply field extractions and filter again
07-21-2015 22:42:52.662 +0200 INFO Metrics -
group=tpool, name=indexertpool, qsize=0, ...
- Job Inspector: scanCount ≈ eventCount
How Splunk searches for field values (2)
index=_internal qsize=0
[ AND index::_internal 0 ]
- Splunk returns the same event, but takes ages!
07-21-2015 22:42:52.662 +0200 INFO Metrics -
group=tpool, name=indexertpool, qsize=0, ...
- Default assumption works great iff field values are unique-ish
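The scanCount effect can be reproduced with a toy tokenizer. The events and the segmentation regex below are invented simplifications of Splunk's index-time segmentation, used only to show why a non-unique value like `0` inflates the scan.

```python
import re

# Three invented _internal-style events.
EVENTS = [
    "group=tpool, name=indexertpool, qsize=0",
    "group=queue, name=parsingqueue, current_size=0",
    "group=thruput, instantaneous_kbps=0",
]

def tokens(event):
    """Crude stand-in for Splunk's index-time segmentation into tokens."""
    return set(re.split(r"[^\w]+", event.lower())) - {""}

def scan_count(token):
    """How many events must be loaded before field-level filtering."""
    return sum(1 for e in EVENTS if token in tokens(e))

print(scan_count("tpool"))  # 1 -- unique-ish value, efficient crawl
print(scan_count("0"))      # 3 -- every event is loaded, then filtered
```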
Key-Value Tricks (1)
```
index=_internal qsize qsize=0
[ AND index::_internal qsize 0 ]
```
- Take advantage of default key-value field extractions
```
07-21-2015 22:42:52.662 +0200 INFO Metrics -
group=tpool, name=indexertpool, qsize=0, ...
```
- Flexible, zero-config speed-up that requires smart searchers!
```
eventCount 18225
scanCount 18691
```
Key-Value Tricks (2)
- Move inline optimization to fields.conf
```
[qsize]
INDEXED_VALUE= [AND qsize <VALUE>]
```
- Adds the extra token `qsize`, whether the searcher likes it or not
- fields.conf applies to all fields of that name, regardless of sourcetype
- This can break for multi-token values!
Key-Value Tricks (3)
- Take it further and assemble longer tokens
```
[qsize]
INDEXED_VALUE=qsize=<VALUE>
```
- Rule out events with `qsize!=0` that contain a 0 elsewhere
```
index=_internal qsize=0
[ AND index::_internal qsize=0 ]
```
- This will even break for events with `qsize="0"` (major breaker)
- Be sure you know your data before fiddling with fields.conf!
Wildcards (1)
- Splunk will only use indexed tokens for prefixes of wildcarded values
```
index=_internal component=BucketMove*
[ AND index::_internal bucketmove* ]

index=_internal component=*ucketMover
[ AND index::_internal ]
```
- Oops!
07-21-2015 22:41:22.999 +0200 INFO BucketMover -
idx=main Moving bucket=...
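The prefix rule can be sketched as a filter over the index's token lexicon. The token set below is invented, and this is a model of the behavior, not Splunk's index code.

```python
import fnmatch

# Invented sample of tokens from the index's lexicon.
INDEX_TOKENS = {"bucketmover", "bucketreplicator", "metrics", "tpool"}

def token_candidates(pattern):
    """Tokens Splunk can seek to for a wildcarded value (lowercased)."""
    pattern = pattern.lower()
    prefix = pattern.split("*", 1)[0]
    if not prefix:
        # Leading wildcard: no prefix to seek on, so no token filter --
        # effectively every event must be scanned.
        return set(INDEX_TOKENS)
    return {t for t in INDEX_TOKENS if fnmatch.fnmatch(t, pattern)}

print(token_candidates("BucketMove*"))       # only the matching token
print(len(token_candidates("*ucketMover")))  # all tokens: no usable prefix
```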
Wildcards (2)
- Force Splunk to use indexed tokens
```
index=_internal component=TERM(*ucketMover)
[ AND index::_internal *ucketmover ]
```
- Much faster than loading all events, but there’s a penalty for crawling the index without a prefix!
- fields.conf to remove the `TERM()` from all searches
```
[component]
INDEXED_VALUE=<VALUE>
```
Fields Recap (Part 2)
- Indexed tokens are king
- scanCount performance hit when indexed tokens can’t be used
- fields.conf optimizations can fix performance, but can break results
TITLE: PRELIMINARY DESIGN FOR A LANGUAGE ANALYSIS PACKAGE (L.A.P.)
AUTHOR: Ann Porch
ABSTRACT
A package of computer programs to handle natural language retrieval and analysis in a manner analogous to that of a statistical package is discussed. The document presents an overview of several language analysis projects currently underway, and of several research approaches to resolving problems in language analysis. Emphasis is presented on the way the L.A.P. could be used with each approach. Design specifications for a package sensitive to the state-of-the-art are documented. A preliminary implementation, using programs in SWRL's possession, is suggested as a first step in the development process. An overview of the design considerations and algorithms for the preliminary package is presented.
PRELIMINARY DESIGN FOR A LANGUAGE ANALYSIS PACKAGE (L.A.P.)
Introduction
For years there have been "packaged programs" in statistical areas. These programs offer generalized computational capabilities in a form and format especially suited to easy use by researchers whose basic orientation is not that of Computer Science.¹ Researchers in the social sciences, for instance, are able to perform complex multivariate regression analysis by computer without undergoing any special training in programming or computer operations.
A language analysis package (L.A.P.) with a power comparable to that of statistical packages would have considerable general utility.
As machines become more powerful and their use expands to include more disciplines, it is increasingly significant for researchers in many fields to be able to use the speed and versatility of the computer for processing natural language text as well as numerical data. For example, researchers doing studies of textbooks now suffer from the fact that their work usually must be done by hand. In those instances where computer technology is employed, a special purpose program is generally written by the resident programmer, who may or may not have specialized training in techniques of natural language processing. The results are costly, both in time and money spent on processing with inefficient or one-shot programs. Because such programs are limited in scope and written for a special purpose, the researcher has little flexibility available to him, and finds that a relatively minor change in his research perspective may make the computer program unusable.
In the past ten years, a great deal of work has been done throughout the country and the world in natural language processing in fields such as artificial intelligence, information retrieval, machine translation, computational linguistics, and computer stylistics. Hundreds of computer programs have been written, debugged, run, and then shelved when the researcher went on to another project. A number of these programs are the product of months of careful work by experts. Some researchers, such as Borden and Watts at Pennsylvania State University, are trying to develop generalized systems which handle many of the basic tasks associated with natural language processing. The design proposed here suggests the use of the best of those existing programs whose authors are willing to release them.
Such an approach has several advantages: the package will reflect the power of the finest specialized programming skill presently available. The development costs will be minimized since the major programming task will consist of interfacing the existing programs or subsections in a modular fashion under the direction of one control routine, rather than developing each of the specialized routines from scratch. By carefully constructing the package in a highly independent, modular manner, individual routines may be easily "un-plugged" and replaced should a more efficient or powerful routine be developed. Thus, the system will be dynamic and open-ended, constantly updating itself to keep pace with the state-of-the-art.
One obvious disadvantage to such an approach is the degree to which it creates machine dependency. Since programs are written in
different computer languages at different computer installations using different machines, some account must be taken of the problem which will arise in "minor" modifications necessary to make them run at the particular installation chosen for the L.A.P. development. Some care must be taken to try to keep the entire package as machine independent as possible. Certainly it can only be implemented on a computer which had the ability to compile and run most of the major computer languages currently in use for language processing. One such installation exists at UCLA, where the IBM 360 mod 91 has compilers for the following languages: PL/1, FORTRAN IV (G & H), COBOL, SNOBOL, LISP, APL, 360 ASSEMBLER, ALGOL.
Certain hardware and system software requirements will also be essential to the efficient development of the L.A.P. Among these are an efficient system sort-merge routine, multiple tape drives, relatively large amounts of direct-access storage and considerable available core. Again, the UCLA installation is one example of a computer center which is amply equipped in all areas.
Processing Features
A survey of the work currently being done in the field of language analysis² reveals seven major areas of present interest and usage which can logically be included in the L.A.P. Of the seventy-five projects listed in the November, 1970 issue of Computers and the Humanities, nine dealt with frequency counts, seventeen with KWIC production, fourteen with semantic or content analysis, eight with statistics, thirteen with index production, six with retrieval systems, six with sentence
parsing and six with miscellaneous items such as machine translation. Several projects must be considered to fall into more than one category. Ratios in Linguistics in Documentation (Current Abstracts), Language and Automation, and Computer Studies in the Humanities and Verbal Behavior are much the same, although with a slightly heavier emphasis on retrieval and parsing.
An ideal language analysis package, then, should have the ability to perform efficiently in each of these seven basic areas, and the flexibility to operate in any manner desired, either sequentially or concurrently. In addition, the L.A.P. should be modifiable at any time, either for more effective general use or for a particular research application.
Flexibility is perhaps the most important single consideration in the development of such a package. The L.A.P. will be useful only to the degree that it is adaptable to a variety of projects and approaches. The fewer constraints on the user, the more likely it is to be used. To save researcher time and money, the L.A.P. should provide the user with as many options as possible, allowing him to select precisely and easily only the functions he requires. No section of the package should be "called in" unless the researcher specifically requests it. He should not be limited to simply an exclusive "OR" type selection, where he can only choose to do either a KWIC or an Index, but should be able to combine any or all routines and subroutines in any fashion he desires. He may, for example, want to produce KWIC's on words occurring within his inclusion list while simultaneously producing an index of all words except those in his exclusion list and a frequency count of every word
in the text. He should be able to use only the retrieval aspect of the L.A.P. or only the statistical portion without being penalized by the fact he is using a package rather than a single program designed specifically for his purpose.
**Data Base Manipulation**
Since a number of researchers may be making use of the L.A.P., the system should have the ability to differentiate among data bases, selecting the base or sub-base from a large library of resident data bases stored on magnetic tape or disc and making it available to the researcher for his processing. For example, the entire library might include the complete works of Shakespeare, the ERIC files, and all California State approved first grade text books. Researchers using the L.A.P. should be able to easily obtain an input file containing only first grade *reading* books or only ERIC documents dealing with reading.
Often, data bases which a particular researcher may wish to use have been prepared elsewhere, each with different input conventions and formats. For example, one data base might be prepared with a logical record length of 100 and in EBCDIC code, utilizing both upper and lower case characters, while another might have a logical record length of 72, in ASCII code, and be in upper case only with capitals indicated by a "/" preceding the capitalized letter. The L.A.P. should be able to handle virtually any input format and set of conventions that the researcher can specify.
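As a concrete illustration, the two hypothetical data bases above could be described to such a system by a small format descriptor. The sketch below is in Python for brevity (the report's own programs were in PL/1), and every name in it is illustrative rather than part of the proposed design:

```python
from dataclasses import dataclass

@dataclass
class InputFormat:
    """Describes the conventions of one externally prepared data base.
    All field names are illustrative, not part of the original design."""
    record_length: int        # fixed logical record length, e.g. 72 or 100
    encoding: str             # e.g. "ascii", or "cp500" for an EBCDIC code page
    capital_marker: str = ""  # e.g. "/" when capitals are flagged; "" if mixed case

def _apply_markers(record: str, marker: str) -> str:
    """Turn marked upper case ("/a") into true upper case ("A")."""
    out, upshift = [], False
    for ch in record:
        if ch == marker:
            upshift = True
        else:
            out.append(ch.upper() if upshift else ch)
            upshift = False
    return "".join(out)

def decode_records(raw: bytes, fmt: InputFormat) -> list[str]:
    """Split a byte stream into fixed-length records and normalize case markers."""
    text = raw.decode(fmt.encoding)
    records = [text[i:i + fmt.record_length]
               for i in range(0, len(text), fmt.record_length)]
    if fmt.capital_marker:
        records = [_apply_markers(r, fmt.capital_marker) for r in records]
    return records
```

With such a descriptor, each data base carries its own conventions, and the package reads any of them without per-project reprogramming.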
A researcher may want to use output from one step in the L.A.P. procedure as input to another. He may want to select subsets of a given data base, process each separately, then cross reference the results or subsets of the results. He may want to do transformations on the data as it is being processed, and use the transformed data as input. L.A.P. should be able to save output for further processing and should be able to save subsets of data once they are selected, in order to save the expense of repeated retrieval processing. The researcher should be able to present the system with a new file or retrieve and use a file either he or another researcher has previously used or created.
**Modes of Operation**
In addition to the flexibility of modularity, input formats and file handling, the L.A.P. should take advantage of the best features of two basic kinds of operation. An interactive, conversational system can provide the user with high flexibility and require little training. He interacts with the computer by answering questions, providing the program with information about the options he intends to use for a particular run. On the other hand, a non-interactive, "batch" processing environment is significantly less expensive. For example, one Los Angeles service bureau\(^3\) prices interactive time at $360 per CPU hour, while batch processing is only $150 per CPU hour. Language analysis processing requires considerable CPU time, since most computers are not designed for text scanning and string manipulation, but rather for numeric processing. If the L.A.P. could provide interaction to collect the parametric
\(^3\) C & C Computing, 8939 S. Sepulveda, Los Angeles, California
information required for the run from the user, and batch processing for the remainder of the run on the data base, it would be optimal in both areas. It can do so by having an interactive module which sets up the parameters for the batch run, which can then be scheduled to run at a time optimal for cost considerations.
Certainly, the need for such a package exists. The design proposed here is not in any way meant to be a complete solution to that need, but rather a tool with enough inherent flexibility to remain useful beyond this first scratching of the surface.
L.A.P. Design
An overview of the L.A.P. design is shown in the macro-flowchart (See Figure 1). The design consists of function modules for each of seven basic types of language analysis: frequencies, KWIC's, retrieval, indexing, parsing, semantic analysis, and statistical analysis such as type-token percentages, cross tabulations, etc. Once a pilot model is implemented, work can begin on refinement and optimization, and further programs of interest may be added. One such additional program is FAMULUS, a bibliography generating program, in FORTRAN IV.
Sub-modules such as a routine scanning for words, a dictionary look-up, or a routine for sorting will be included within the larger modules. Each of these function modules will consist of a full-blown program which will have subroutines or internal modules within it, any one of which, like the main modules, may be "unplugged" and replaced by a more efficient routine. There will be an additional processing module to translate internal format conventions into those of the input data.
The control of the program flow will reside in the control module, the parameters of which will be set during the interaction with the user provided by the interaction module. Any or all of the nine main processing modules can be called into action by the control routine, and the order and structure of the calling procedure need bear no resemblance to the linear thinking of the user who interacted with the interaction module, but can be structured in terms of machine efficiency for the particular combination of calls required by the particular run.
For the purposes of a pilot study of the feasibility of the L.A.P., programs in SWRL's possession can be used. At the present time, the program library includes programs for the following modules: frequencies, index, KWIC, parse, semantic analysis, retrieval, dictionary look-up. Several of the programs were collected from natural language specialists throughout the country, several were written by the author. Coincidentally, all are in PL/1. In order to implement the L.A.P. design discussed here, four routines need to be written, and sufficient interfacing prepared to allow the existing programs to run in the control environment.
Following is a detailed description of the system design for the modules which need to be written, together with comments on techniques of interfacing them with several of the existing programs.
Interaction Module
The function of the interaction module is to act as an interface between the user and the package. It may run as a front-end portion of the total program, if the computer system utilized has interactive
capability, or it may run at a separate facility (such as on a SWRL minicomputer) and prepare user control parameters to be appended to the input data which will then be run in batch mode at the main facility.
It will ask the user questions about the parameters of the run he is initiating and will utilize his answers to prepare computer compatible control parameters to be read by the control module.
Since it is a separately functioning entity, it will serve as a training program for researchers using the package for the first time. As each control parameter is compiled through the question and answer process, a statement will be printed out for the user's information. If the interaction module is running as a front-end portion of the total program, the control statements will be passed directly to the control module. If it is running separately, the control statements will be output in a form appropriate for use as input to the main package program, such as punched cards or magnetic tape.
After the researcher has used the interaction module for a while he may find that he is sufficiently familiar with the requirements of the user control language so that he feels competent to prepare his own control statements without computer assistance. Certainly, such user expertise is one of the goals of the interaction module. As a teaching, as well as a functional program, it will keep a running count of user success and failure in the question answering process, and output such information to a system programmer whenever it is polled.
The system programmer can use such information to modify and upgrade the package for maximum success within the environment of the actual researchers making use of the system.
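A minimal sketch of such an interaction module follows, in Python for brevity. The question set and the KEYWORD=VALUE control-statement syntax are assumptions of this illustration, since the report does not fix a concrete user control language; taking the answer source as a callable lets the same module run at a terminal or be driven non-interactively:

```python
# Hypothetical questions; each compiles to one control statement.
QUESTIONS = [
    ("Which functions do you want (e.g. KWIC, FREQ, INDEX)?", "FUNCTIONS"),
    ("What is your logical record length?", "RECLEN"),
    ("Which character precedes a capital letter (blank if none)?", "CAPMARK"),
]

def interact(answer_source) -> list[str]:
    """Run the question-and-answer session and compile control statements.

    `answer_source` is any callable mapping a prompt to the user's reply,
    e.g. `input` at a terminal, or a canned list of replies in a test.
    """
    statements = []
    for prompt, keyword in QUESTIONS:
        reply = answer_source(prompt).strip()
        stmt = f"{keyword}={reply}"
        statements.append(stmt)
        print(stmt)  # echo each compiled statement for the user's information
    return statements
```

Run as a front end, the returned statements pass directly to the control module; run separately, they would be punched or written to tape for the batch job.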
Control Module
The function of the control module is to read a set of control statements containing run parameters preceding the data for a particular computer run, and to perform a decision making function for that run.
In any particular case, the user will specify which package functions he wishes to use, as well as the particular output specifications he has. For example, he may wish to produce a KWIC, a rank-ordered frequency count and a parsing of his text. He will indicate his needs by means of a user control language which will be punched and appended to the beginning of his input data. The control module will read the user control statements and compile a decision table which can be used by the program during execution to determine which subroutines will be called for that run. If the user has made syntax errors in his preparation of the user control statements, the control module will return error messages to him which will enable him to correct his errors before re-submitting the job. The user control language will be designed in such a way that typical errors, such as the omission of a comma, will be automatically corrected, allowing execution to proceed. In such cases of automatic error correction, a message will be printed on the output indicating the assumptions made by the control module, enabling the user to check for possible misunderstanding of his intent. If the program is run in interactive mode, these assumption statements will be printed out before the program continues to further execution, and a verification of correctness will be required from the user.
For each run, the user will specify the form of his input data, such as record length, columns punched, location of variables (for statistics module) and size of data base being used. The control module will scan this information and set parameters for the run to optimize usage of computer equipment and peripherals. Such parameters will control selection of I-O subroutines, storage and access subroutines, etc. Messages will accompany the output, indicating the options used in a particular run. Provision will be made for user override of the defaults.
The user will also indicate special conventions used in his data, such as a slash preceding a letter to indicate upper case, or an "@P" preceding a character stream to indicate the beginning of a new paragraph. The control module will evaluate the form of the input data, decide if sufficient information is present within the data to allow the requested function modules to perform, and determine whether the translate module needs to be called to provide the interface of data and programs. Messages will be output to the user indicating missing information which prevents execution. As in other cases, translation parameters for the particular run will accompany output. If the translation module is needed, the control module will pass the information contained in the user control statements concerning input conventions to the translation module. During execution, the control module will perform subroutine calls according to the decision table established from the user control statements.
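The compilation of control statements into a decision table, including automatic correction of minor errors with an accompanying assumption message, might be sketched as follows. Python is used for brevity; the KEYWORD=VALUE syntax and the set of function names are assumptions of this illustration:

```python
KNOWN_FUNCTIONS = {"KWIC", "FREQ", "INDEX", "PARSE", "RETRIEVE", "SEMANTIC", "STATS"}

def build_decision_table(statements):
    """Compile user control statements into a decision table.

    Returns (table, messages).  Minor slips, such as lower-case keywords or
    a blank where a comma was omitted, are corrected automatically, with a
    message recording the assumption made, as described in the text above.
    """
    table, messages = {"call": []}, []
    for raw in statements:
        stmt = raw.strip()
        if "=" not in stmt:
            messages.append(f"ERROR: cannot parse '{raw}', statement ignored")
            continue
        keyword, value = (part.strip() for part in stmt.split("=", 1))
        if keyword.upper() != keyword:
            messages.append(f"ASSUMED: '{keyword}' read as '{keyword.upper()}'")
            keyword = keyword.upper()
        if keyword == "FUNCTIONS":
            # tolerate a blank used in place of a comma between function names
            for name in value.upper().replace(" ", ",").split(","):
                if name in KNOWN_FUNCTIONS:
                    table["call"].append(name)
                elif name:
                    messages.append(f"ERROR: unknown function '{name}'")
        else:
            table[keyword] = value
    return table, messages
```

During execution, the control module would then call only the subroutines listed under `table["call"]`, with the remaining entries setting run parameters.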
Translation Module
The function of the translation module is to provide the interface between the user's data and the data conventions required by the particular
subprograms he wishes to use. It may be viewed as a subfunction of the control module.
Since it is extremely costly to convert a large data base to another format, the translation module will work in the opposite direction, converting the relatively few program conventions into the format in which the data exists. Such a conversion will be accomplished in the following manner.
Each of the function modules will have an array associated with it which is accessible to the translation module. The arrays will each have a dimension of 256, corresponding to the 256 possible 8-bit codes. Each position in the array will hold an octal number equivalent to the new value (the data dependent value) which should be utilized by the particular function program for the current run. The translation module will set up the arrays for those function modules being called by the current run, using the old value (the function module dependent value) as a subscript to locate the appropriate position within the array into which to store the data dependent value. For example, if the KWIC program is written to expect a slash ("/") preceding each character which is to be taken as upper case, and the user's data has been prepared with a dollar sign serving the same function, an octal 133 (equivalent to an EBCDIC "$") would be placed in position 97 of the array associated with the KWIC program, since 97 is the EBCDIC decimal equivalent of a slash ("/")\(^4\).
\(^4\) EBCDIC codes have been used, since the UCLA computer facility is a likely one on which to set up the package. The same algorithm could be used with a computer which uses ASCII, with an octal 044 (the ASCII "$") being placed in position 47 of the array.
Each of the function modules will have a specially prepared initialization subroutine which initializes each of the program-dependent variables (such as a variable "capital") to the value contained in the appropriate position in the array associated with that module.
The utilization of the general purpose array will allow great flexibility in those cases where a new module is to be added or substituted to the system, since no modification will be necessary to the total system in order to handle a completely different set of input requirements.
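The array mechanism described above can be sketched in a few lines, using the EBCDIC example from the text (Python for brevity; function names are illustrative):

```python
def identity_table() -> list[int]:
    """A 256-entry table mapping every 8-bit code to itself (the default)."""
    return list(range(256))

def install_convention(table: list[int], module_code: int, data_code: int) -> None:
    """Record that wherever the module expects `module_code`, this run's data
    actually uses `data_code`.  The module-dependent value is the subscript,
    exactly as described in the text."""
    table[module_code] = data_code

# Worked example: the KWIC program expects "/" before a capital, but the
# data uses "$".  In EBCDIC, "/" is decimal 97 and "$" is octal 133:
kwic_table = identity_table()
install_convention(kwic_table, module_code=97, data_code=0o133)

# The KWIC module's initialization subroutine then sets its "capital marker"
# variable from the table instead of from a built-in constant:
capital_marker_code = kwic_table[97]
assert capital_marker_code == 91  # decimal value of octal 133, the EBCDIC "$"
```

All other positions still map to themselves, so only the conventions that actually differ need to be installed for a given run.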
**Output Module**
The function of the output module is to direct various portions of the output to appropriate devices at the user's discretion.
In general, the default situation will be that all output goes to the printer; however, users may have particular needs or preferences, and may elect to use another output device. For example, the user may wish to save his output in some computer compatible form for later input to some other program. In such a case, he may specify output to magnetic tape, punched cards, or punched paper tape.
Another, although complex use of the output module, would be an instance where the user wants his output to serve as input for another module within the package itself. In such a case, the output module would not only put the output onto a selected device, but also would put the data into an appropriate storage location within the system.
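A minimal sketch of such an output module follows, in Python for brevity. The device names and the option to keep a copy inside the system are illustrative stand-ins for the report's printer, tape, and card devices:

```python
import io

class OutputModule:
    """Route each named output to a device, defaulting to the printer."""

    def __init__(self):
        # StringIO objects stand in for real device drivers in this sketch
        self.devices = {"printer": io.StringIO()}
        self.saved = {}  # outputs retained inside the system for reuse as input

    def route(self, name: str, text: str, device: str = "printer",
              keep: bool = False) -> None:
        stream = self.devices.setdefault(device, io.StringIO())
        stream.write(text)
        if keep:
            # retain a copy so another module can later read it as input
            self.saved[name] = text

out = OutputModule()
out.route("kwic-listing", "THE CAT ...", keep=True)           # printer + saved
out.route("freq-table", "THE 2\n", device="tape")             # tape only
```

The `keep` path corresponds to the complex case above, where one module's output is placed in a storage location within the system to serve as input for another module.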
A Metamodel and Model-based Design Rule Checking DSL for Verification and Validation of Electronic Circuit Designs
Adrian Rumpold and Bernhard Bauer
Institute of Computer Science, University of Augsburg, Germany
Keywords: Domain-specific Modeling, Model-based Analysis, Model-based Testing and Validation, Systems Engineering, Electronic Design Automation, Design Rule Checking.
Abstract: Development of embedded systems depends on both software and hardware design activities. Quality management of development artifacts is of crucial importance for the overall function and safety of the final product. This paper introduces a metamodel and model-based approach and accompanying domain-specific language for validation of electronic circuit designs. The solution does not depend on any particular electronic design automation (EDA) software, instead we propose a workflow that is integrated into a modular, general-purpose model analysis framework. The paper illustrates both the underlying concepts as well as possible application scenarios for the developed validation domain-specific language, MBDRC (Model-Based Design Rule Checking). It also discusses fields for further research and transfer of the initial results.
1 INTRODUCTION
Embedded hardware and computing have become pervasive aspects of modern technology, which we encounter under a variety of different names: cyber-physical systems, embedded systems, smart devices, and not least the Internet of Things. While different in their concrete use cases and implementation, they all share a common underlying concept: the close connection between software and hardware aspects. Their design and development activities differ, but ultimately both need to be actively quality managed in order to attain a sufficient level of quality. This quality may be expressed in terms of freedom from errors, but may also refer to aspects such as reliability, dependability, and safety, depending on the application domain.
Hardware design, and the design of electronic circuits in particular, differs significantly from the activities of a software development process, since it touches on both the abstract design of electronic circuits and their physical embodiment in the form of printed-circuit boards (PCBs), assemblies, and higher integration levels. Nonetheless, both domains rely on tool support to identify mistakes by the developer, enforce design rules, or validate certain properties of the system under development. Electronic design automation (EDA) tools commonly include such functionality, but only offer it as part of a manual workflow within the software package, mostly in non-free, proprietary software with the risk of vendor lock-in.
This paper proposes a model-based solution to the challenge of automatic validation of electronic design rules, outside of a particular EDA tool suite. As a prerequisite, we introduce an Ecore-based metamodel for electronic circuits and illustrate the transformation from industry-standard textual description formats into this model representation. Based on this modeling approach, we then introduce MBDRC, a textual domain-specific language for description of design rules for electronic circuits represented as instances of the EDA metamodel, demonstrate its applicability and describe further research opportunities.
A caveat: approaches with similar terminology are also found in the domain of VLSI (very large scale integration) design of integrated circuits at the silicon level. This paper, however, is not aimed at these applications but rather at a component-level view of embedded hardware.
Outline
Section 2 discusses related work, regarding both the domain of electronic design automation and the metamodeling and analysis approaches shown in this paper. We introduce a metamodel for electronic circuits based on the KiCad open-source EDA software in section 3. Based on this modeling approach, we develop MBDRC, a textual domain-specific language and an associated analysis workflow for validation of electronic design rules in section 4. This section also demonstrates the applicability of our proposed approach in two realistic use cases. Section 5 summarizes the key results of this paper and suggests starting points for further research beyond these initial efforts.
2 RELATED WORK
Metamodelling
The de-facto standard for simulation of electrical circuits is the SPICE tool (Nagel, 1975; Quarles et al., 1993). SPICE uses a textual format for description of circuits on a component level, their electrical connections, and simulation parameters. The structural description forms a netlist, which lists instances of components, their terminals or pins (external connection points), and nets (the electrical connections between these terminals). A similar netlist format lies at the heart of our metamodel used for circuit description, introduced in section 3.
Although it focuses on a chip-level view, the work by Fischbach et al. (2014) proposes an abstract, highly generic metamodel for netlists, also loosely based on the SPICE format.
In the automotive domain, the AUTOSAR standard (Automotive Open System Architecture) provides a metamodel specification for the description of ECU (electronic control unit) hardware resources (AUTOSAR, 2017). Using this metamodel, electronic components commonly found in the automotive field can be described on a hardware element level, with pins and pin groups forming their connections. While the standard allows for hierarchical nesting of hardware elements, the metamodel is not well suited for capturing component-level design models of hardware assemblies. Rather, it addresses a higher level of abstraction, and as such can be regarded as a complement to the metamodel proposed in this work rather than a substitute.
Previous effort has been directed at standardization of a common format for exchange of electronic designs, leading to the creation of the Electronic Design Interchange Format (EDIF, see IEC 61690-2:2000). This interchange format mostly covers lower-level aspects of VLSI design (very large scale integration). As such, it is not well suited for describing circuits on a component level. Nonetheless, its modeling approaches for schematics can also be applied in the context of this paper.
Electrical and Design Rule Checking
While current research in electrical engineering has shifted towards low-level validation of integrated circuit designs, related academic literature exists that applies to the checking of electrical and design rules on a circuit level:
Pelz (1992) proposes a general set-based interpreted approach for design-rule and design-for-testability checking, which transforms netlists (specified in the common SPICE format) into an equivalent abstract set representation. Validations are subsequently performed by interpreting a domain-specific language over this set representation. Their work is primarily focused on VLSI design; nevertheless, it serves as a fundamental example of the use of domain-specific languages for the validation of EDA designs. The approach provides sufficient expressiveness, but it requires significant mental effort for the construction and validation of the actual validation scripts.
Furthermore, commercial EDA software packages include a variety of built-in design rules, on which this paper draws to derive realistic application scenarios for the developed validation language.
Model Analysis
Rumpold et al. (2017) and Pröll et al. (2018) have previously introduced a conceptual and tooling framework for complex model-based analyses with applications across various domains. They describe a modular architecture based on domain-specific modeling languages and associated analyses, which can be combined into arbitrary analysis workflows. We expand upon this framework in this paper, by adding support for models from the EDA domain and implementing an accompanying analysis wrapper for the MBDRC validation language.
3 A METAMODEL FOR ELECTRONIC CIRCUITS
In this section, we describe a metamodel that allows capturing information about the structure of electronic circuits, metadata associated with them, as well as a facility for modeling libraries of electronic parts used during circuit design.
3.1 Motivation and Foundations
Our metamodel closely follows the representation used by the popular open-source electronic design automation (EDA) software suite KiCad\(^1\). Chapters 11 and 15 of the official documentation for Eeschema (the schematic capture component of KiCad) describe the generation of netlists and the file format used by KiCad in greater detail (Charras and Tappero, 2018). The reasons for choosing this format as the basis for the EDA metamodel are twofold: First, the free and open-source nature of the KiCad tool suite ensures that the schematic capture software is universally available. Second, the format combines two concepts that benefit from a close connection: the modeling of the actual schematic as well as the abstract parts underlying the design. Similar textual representations exist for virtually all EDA software packages; the OrCAD Capture User Guide includes a comprehensive overview (Cadence Design Systems, 2016, ch. 20).
Conceptually, a schematic of an electronic circuit denotes the component symbols for the parts of the circuit and electrical connections between them. These connections between components form the basis for netlists, which list all nets formed by electrically connected parts (or their connection points, such as pins or pads on an integrated circuit). Figure 1 illustrates how these concepts relate to the graphical representation of circuit diagrams. From this structural description of electronic circuits the EMF metamodel for netlists shown in fig. 2 was derived.
EDA tools frequently include a library of common components, including their schematic symbols and information about physical properties such as pin assignments and their package. These libraries can also be extended by the designer to accommodate specific parts not available in the generic library. Figure 3 shows our metamodel for such component libraries. Parts are grouped inside uniquely identified libraries and carry information about their physical package and pin assignment, as well as textual fields stored as key-value pairs.
These key-value pairs can convey arbitrary additional information about a part or a specific component instance in a circuit, such as manufacturer part numbers, links to supplementary documentation, or simulation models and parameters.
The EDA metamodel has been defined as an instance of the EMF Ecore meta-metamodel. An accompanying parser directly loads file-based netlist representations as created by the KiCad software and transforms the textual netlist into a proper instance of the EDA metamodel, which can be supplied as an input to the analysis framework introduced in the next section.
3.2 Description of Metamodel Elements
Part. A template element that abstractly describes a single electronic part and its basic properties (name and description, as well as alternative names [aliases]).
\(^1\)http://www.kicad-pcb.org
Library. In order to improve usability, EDA tools commonly group related parts into libraries, e.g. by function or manufacturer.
Footprint. In order to produce a printed circuit board (PCB) from a schematic, all components need to be assigned a footprint, which describes the physical packaging of the part. Since a single part may be offered in a variety of packages by its manufacturer, a single Part model element can be associated with multiple Footprint element instances, whereas only a single footprint is permitted for a concrete component in a circuit.
Pin. A single external connection point for a part, identified by its ordinal number with respect to the footprint of the part. A pin is further characterized by its mode of operation, e.g. output, input, or power supply pins of a part. Each part may contain any number of pins, although most electrical components contain at least two pins (notable exceptions are e.g. test points used for quality assurance and debugging purposes, which only contain a single connection point).
Field. Both library parts as well as components can carry arbitrary metadata in the form of key-value pairs, which may be used to convey additional information about the underlying circuit element (such as documentation references or procurement information).
Netlist. The collection of all nets in a circuit.
Net. An electric connection between one or more nodes. Nets are identified by a numeric code and may be assigned a unique name. In a hierarchical schematic, nets may also be designated as local, i.e. not visible outside the current hierarchy level.
Node. A node is the point where a single pin of a component makes connection with a net.
Component. A concrete instance of a library part on a circuit diagram. A component is identified by its reference designator on the schematic, usually composed of a single-letter prefix and a numeric counter (e.g. C12 for a capacitor; see IEEE Std 315-1975 for a comprehensive reference). Components are assigned a value, which further characterizes the component (e.g. the resistance or capacitance for passive elements, or the concrete part name for a library part with multiple aliases). A unique timestamp allows components to be matched accurately even when their designators change, e.g. when the schematic is laid out differently and designators are re-numbered.
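To make the element descriptions above concrete, the following is a minimal sketch of the metamodel as plain data classes (a Python sketch with hypothetical class and attribute names; the actual metamodel is defined in Ecore, not in Python):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Pin:
    number: int   # ordinal with respect to the part's footprint
    mode: str     # e.g. "input", "output", "power_out"

@dataclass
class Part:
    name: str
    pins: List[Pin] = field(default_factory=list)
    fields: Dict[str, str] = field(default_factory=dict)  # key-value metadata

@dataclass
class Component:
    reference: str   # designator on the schematic, e.g. "C12"
    value: str       # e.g. "100nF"
    part: Part
    fields: Dict[str, str] = field(default_factory=dict)

@dataclass
class Node:
    # the point where one pin of a component joins a net
    component: Component
    pin: Pin

@dataclass
class Net:
    code: int
    name: str
    nodes: List[Node] = field(default_factory=list)

@dataclass
class Netlist:
    components: List[Component] = field(default_factory=list)
    nets: List[Net] = field(default_factory=list)
```

The containment structure mirrors fig. 2: a netlist owns components and nets, and nets reference component pins through nodes.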
3.3 Use Cases
The EDA metamodel described in this section forms the basis for the MBDRC validation language introduced below. However, other use cases besides circuit validation are possible, for example:
BOM Generation. Since the netlist model contains information about all components in a circuit, as well as additional metadata, a bill of materials (BOM) can be generated from it. The BOM lists all parts in a circuit, their designators, values, as well as additional procurement and/or assembly information. Identical parts can be grouped together to improve readability and conserve space.
The generation of a BOM is one example of the class of model-to-text (M2T) transformations possible using the EDA model as input.
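Such a BOM generator reduces to a grouping transformation over the component list. A minimal sketch under that assumption (the dictionary keys `ref`, `part`, and `value` are hypothetical, not the framework's API):

```python
from collections import defaultdict

def generate_bom(components):
    """Group identical parts (same part name and value) and collect designators."""
    groups = defaultdict(list)
    for comp in components:
        key = (comp["part"], comp["value"])
        groups[key].append(comp["ref"])
    # One BOM line per group: quantity, part, value, sorted designators
    return [
        {"qty": len(refs), "part": part, "value": value, "refs": sorted(refs)}
        for (part, value), refs in sorted(groups.items())
    ]

bom = generate_bom([
    {"ref": "C1", "part": "Capacitor", "value": "100nF"},
    {"ref": "C2", "part": "Capacitor", "value": "100nF"},
    {"ref": "R1", "part": "Resistor", "value": "10k"},
])
# the two identical capacitors collapse into one BOM line with qty 2
```

Additional metadata fields (part numbers, documentation links) can be carried along in the same grouping step.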
Model Versioning. Given a representation of an electronic circuit as a model at different points in time, the evolution of the system can be analyzed in a semantic manner (see Selic (2003, p. 23)). Persistent storage of these models allows circuits to be compared between arbitrary revisions based on changes in the model elements, similar to approaches already found in the software modeling domain.
Furthermore, these snapshots can serve as baselines, from which different design variants can be derived; for example to evaluate different implementation options. Since our metamodel follows the Ecore meta-metamodel and is based on the set of Eclipse EMF technologies, model versioning approaches could be easily constructed using the EMF Compare feature (see also Brun and Pierantonio, 2008).
4 MBDRC — A DSL FOR MODEL-BASED DESIGN RULE CHECKING
4.1 Foundations
This section introduces a textual domain-specific language, MBDRC, for validation of electronic circuits based on the metamodel described above. Its name stems from the design rule checking (DRC) activities that form an integral part of the design of electronic circuits. Conventionally, such checks are directly integrated into EDA tool packages, with little room for customization or secondary use outside the design tool.

\(^2\)https://www.eclipse.org/emf/compare
We propose to separate the validation of design rules from the actual EDA tool used to design the system. This split enables use cases beyond the classical support of circuit designers in their daily work: For example, a standalone design validation can provide automated quality assessments of electronic designs in a similar fashion to state-of-the-art software engineering practice in the field of Continuous Integration. Analysis results can be visualized as part of a product quality dashboard, enabling a high-level overview of a system’s development progress.
Furthermore, a stand-alone validation approach based on open technologies can help to alleviate the effects of vendor lock-in caused by the use of proprietary software solutions.
4.2 Language Definition and Elements
The MBDRC language is a textual domain-specific language defined using the Eclipse Xtext language engineering toolkit. It utilizes the EDA metamodel from the previous section to express rules used to assess the validity of a given electrical circuit design.
**Overall Structure.** As the top-level entity, an MBDRC script file may contain an arbitrary number of named *rules*. These rules provide a semantic grouping for *validation expressions*, against which a given netlist should be validated.
Validation expressions comprise a number of first-order logic expressions over the elements and attributes of the EDA metamodel introduced in the previous section.
**Validation Expressions.** Rules are composed of one or multiple quantified first-order predicates, denoted by the *forall* and *exists* keywords in the MBDRC language (corresponding to the $\forall$ and $\exists$ quantifiers). If a rule contains more than one quantifier, the overall rule is considered the logical conjunction of these quantified expressions.
Every quantified expression may reference an arbitrary number of target variables, which will be bound during evaluation by the interpreter. The list of target variables immediately follows the quantifier keyword (see the following section for concrete examples of the syntax). Each variable is associated with a type, either `component`, `pin`, or `net`, and must have a unique identifier. These variable definitions are collected in the sets $C$, $N$, and $P$ of components, nets, and pins for each quantified expression.
Individual quantified expressions are evaluated by binding the quantified variables to all combinations of components, nets, and pins of the netlist. All constraints in the quantified expression are then evaluated using this valuation; if multiple constraints are specified, the overall result is obtained as their logical conjunction.
The actual constraints are propositional logic formulae (using $\lor, \land, \rightarrow, \neg$), with support for equality and inequality comparisons ($<$, $>$, $\leq$, $\geq$, $=$, $\neq$), as well as property expressions on target variables, function calls, and array types.
Validation expressions can be scoped to only specific subsets of a netlist by specifying a *where* expression. During evaluation, only variable assignments that fulfill the scoping condition will be further validated against the rule body. By selecting appropriate scope conditions, the rule body can be simplified in order to improve readability and rule evaluation performance. Besides this main function of narrowing the scope for the rule body, the *where* clause also allows for a traversal of model structure, in a similar fashion to joins in SQL. This feature is further described in the next subsection.
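The evaluation scheme just described, binding target variables to all combinations, filtering by the scope condition, and taking the conjunction of the constraints, can be sketched as a small interpreter. The sketch below is illustrative only; Python predicates stand in for parsed constraint expressions:

```python
from itertools import product

def eval_forall(domains, where, constraints):
    """Evaluate a universally quantified rule body.

    domains     -- one candidate collection per target variable
    where       -- scope predicate over a variable binding, or None
    constraints -- list of predicates; the body is their conjunction
    Returns (ok, violations), where violations lists failing bindings.
    """
    violations = []
    for binding in product(*domains):
        if where is not None and not where(*binding):
            continue  # binding is outside the rule's scope
        if not all(c(*binding) for c in constraints):
            violations.append(binding)
    return (len(violations) == 0, violations)

# Toy example: no two output pins on the same net.
pins = [{"name": "p1", "net": 1, "mode": "output"},
        {"name": "p2", "net": 1, "mode": "output"},
        {"name": "p3", "net": 2, "mode": "input"}]
ok, bad = eval_forall(
    [pins, pins],
    where=lambda a, b: a["net"] == b["net"] and a is not b,
    constraints=[lambda a, b: not (a["mode"] == "output"
                                   and b["mode"] == "output")],
)
# p1 and p2 share net 1 and are both outputs, so the rule is violated
```

As in the DSL, the `where` predicate prunes bindings before the constraints are evaluated, which is also where the performance benefit of scoping comes from.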
**Type Checking.** Expressions in the MBDRC language are strongly typed and continuously type checked during the development of the script inside the editor as well as during the interpretation of the script.
The language supports primitive types (booleans, integers, strings), complex types (as defined by the EDA metamodel), and sets of these.
Function expressions are statically typed, i.e. their parameter and return value types must be known at design time. Overloads with different return types are not currently supported in the DSL.
4.3 Rule Script Execution
The MBDRC domain-specific language forms the basis for an automated validation of a given netlist against a set of design rules. While in theory the analysis could be performed in a standalone validation tool, we envision its use in a more complex workflow. Therefore, the MBDRC analysis has been integrated into the model-based analysis framework previously described in Rumpold et al. (2017); Pröll et al. (2018). The addition of the EDA metamodel proposed in the previous section adds a new modeling domain to the set of domain-specific modeling languages supported by the framework.
The MBDRC analysis is available as one analysis unit along the execution graph to be processed by the analysis framework. It depends on a previous loader step, which transforms the textual representation of the input netlist into a proper domain model, and produces one or multiple analysis result objects. These describe the success or failure of each MBDRC rule that was validated, including detailed information about rule violations. Figure 4 illustrates the flow of information and dependencies between the steps of MBDRC validation.
The results can then be further processed, depending on the use case for the analysis: In an interactive setting, it might be desirable to highlight each circuit element that violates any design rules to aid the designer in fixing these problems. If the analysis is used to produce a quality report document, a more coarse-grained listing of all passed or failed design rules can be generated, omitting information about individual component-level elements. Generating such reports can be delegated to downstream model-to-text steps in the execution flow, to promote a proper separation of concerns between analysis modules.
4.4 Application Examples
This section illustrates the capabilities of the MBDRC language by applying it to two common tasks found in the EDA design process. The selection of examples is by no means exhaustive; rather, it seeks to provide an overview of the language elements and their possible applications.
Validation of Documentation and Manufacturing Information
Besides the structural information, the EDA metamodel introduced in the previous section also allows metadata to be attached to each component in a circuit, or to the underlying library part, in the form of arbitrary key-value fields.
One use case for this information is to verify the availability of parts from a distributor and monitor the life cycle status of critical components as part of obsolescence management (see IEC 62402:2007 as an example for a relevant international standard).
The following MBDRC code snippet enforces that each component must be annotated with a distributor or manufacturer part number, unless it is actively marked as not being a critical part for the BOM (bill of materials):
Listing 1: MBDRC code for validation of procurement metadata.
```
rule BOMValidation {
    forall (component c) {
        // Every component must declare its
        // manufacturer/distributor part number
        constraint c.bom_critical != "no" ->
            (c.distr_no != "" || c.manf_no != "");
    }
}
```
Listing 1 illustrates the basic structure of an MBDRC script: Each validation rule, identified by an arbitrary name, may contain one or more quantified expressions to specify the target model elements. Each quantified validation expression defines constraints to be verified against matching elements from the netlist model under test. Both block and single-line comments may be added to the script in a syntax familiar from the C or Java programming languages.
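For comparison, the same constraint written directly in a general-purpose language (a hypothetical sketch over components represented as field dictionaries, not the MBDRC implementation) shows the logic the DSL expresses declaratively:

```python
def bom_rule_ok(component):
    """Mirror of the BOMValidation constraint: unless a component is
    explicitly marked as non-critical for the BOM, it must carry a
    distributor or manufacturer part number."""
    if component.get("bom_critical") == "no":
        return True  # antecedent is false: the implication holds vacuously
    return bool(component.get("distr_no") or component.get("manf_no"))

violations = [c["ref"] for c in [
    {"ref": "R1", "manf_no": "ERJ-3EKF1002V"},
    {"ref": "TP1", "bom_critical": "no"},
    {"ref": "C1"},
] if not bom_rule_ok(c)]
# C1 lacks both part numbers and is not marked non-critical
```

Note how the DSL's implication `->` becomes an early return on a false antecedent in imperative code.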
Electrical Rules Checking
As a more comprehensive demonstration of the capabilities of the MBDRC language, this example shows the implementation of simple electrical rule checks (ERC) as a composite MBDRC rule. Such checks are commonly found in EDA tools to prevent logical errors during the design of an electronic circuit and serve to validate the design during the schematic capture phase of the product life cycle.
ERC rules guarantee, for example, that no two power sources are directly connected, or that components are not shorted out (which effectively renders them useless in the circuit, since they are bypassed). Most commercial EDA packages include a variant of this check; fig. 5 depicts the configuration of the pin compatibility matrix in the KiCad ERC checker as one representative example.
Listing 2 shows the implementation of a subset of these checks in the MBDRC language. The example demonstrates some of the more advanced features of the DSL:
• Multiple quantified expressions can be combined to form composite rules. The overall rule evaluation result is formed by taking the logical conjunction of each quantified expression in the rule.
• where \((p1.net == p2.net)\): Scope filters can selectively apply constraints to model elements matching a filter expression. Complex property expressions (references to other model elements; in this example the net associated to a single pin) allow for comfortable model traversal similar to joins in relational databases.
• p2.mode in ["output", "power_out", "3state"]: The language supports sets of primitive types and membership testing for element properties, as a syntactic shorthand to improve readability.
• card(n) <= 1: Function expressions allow computation of values from model properties (the example determines the number of nodes attached to a net as the cardinality of the set). The set of available functions is currently fixed; future research may add the possibility of declaring additional functions as part of the MBDRC language.
• not exists: In order to improve readability, the result of quantified expressions may be negated. This does not change the expressiveness of the language, since the negation might also be pushed inside the expression (compare first-order logic: \(\neg\exists x. P(x) \iff \forall x. \neg P(x)\))
• severity=warn and message "Pin mode unspecified": Rules may specify a custom severity level (info, warn, error) as well as an informative message to be displayed to the user if the rule is found to be violated during evaluation.
Listing 2: MBDRC code for validation of electrical connection rules.
```
rule ERC_PinTypes {
    // Connected pins must have compatible modes
    forall (pin p1, pin p2) where (p1.net == p2.net) {
        constraint p1.mode == "power_out" ->
            !(p2.mode in ["output", "power_out", "3state"]);
        constraint p1.mode == "output" -> p2.mode != "output";
        constraint p1.mode in ["openCol", "openEm"] ->
            !(p2.mode in ["output", "power_out"]);
    }

    // Pins marked as 'do not connect' must not be
    // attached to any other net in the schematic
    forall (pin p, net n) where (n == p.net) {
        constraint p.mode == "NotConnected" -> card(n) <= 1;
    }

    // Warn for unspecified pin modes
    not exists (pin p) severity=warn {
        message "Pin mode unspecified";
        constraint p.mode == "unspc";
    }
}
```
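Checks of this kind are typically driven by a pin compatibility matrix such as the one in fig. 5. A minimal sketch of a matrix-driven checker over pin-mode pairs follows; the incompatibility entries are illustrative and do not reproduce KiCad's defaults:

```python
# Pairs of pin modes considered erroneous when present on the same net.
# frozenset makes the pairs order-independent.
INCOMPATIBLE = {
    frozenset({"output", "output"}),
    frozenset({"output", "power_out"}),
    frozenset({"power_out", "power_out"}),
    frozenset({"power_out", "3state"}),
}

def erc_check(nets):
    """Return (net_name, mode_a, mode_b) for every incompatible pin pair.

    nets maps a net name to the list of pin modes attached to it.
    """
    errors = []
    for name, modes in nets.items():
        for i, a in enumerate(modes):
            for b in modes[i + 1:]:
                if frozenset({a, b}) in INCOMPATIBLE:
                    errors.append((name, a, b))
    return errors

errors = erc_check({"VCC": ["power_out", "power_out"],
                    "DATA": ["output", "input"]})
# the two power outputs tied together on VCC are flagged
```

Encoding the rules as a data table rather than as individual constraints makes the matrix easy to edit, at the cost of the per-rule severities and messages that the DSL attaches to each constraint.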
5 CONCLUSION
This paper has introduced two key results: First, we have proposed a general-purpose metamodel for electronic circuits derived from an industry-standard netlist representation. The metamodel allows capturing structural information about electronic circuits, as well as metadata and library information about the parts used in these circuits. Subsequently, we have described a textual domain-specific language for the analysis and validation of circuits represented as instances of this metamodel. Several application examples demonstrate the language features as well as the DSL’s suitability as a complement to the validation functionality found in common EDA tools.
We foresee that our approach can supplement the established design workflow for electronic circuits, by uncoupling the checking of design and electrical rules from any concrete EDA software suite. This additional freedom allows for easier tool interoperability and enables new use cases for these analyses.
Future Work
Based on our preliminary research introduced in this work, we envision a number of possible scenarios for further research, expanding the scope both towards more abstract system-level views, as well as sub-circuit level design activities.
**Generation of Validation Code.** The first prototype of the MBDRC DSL uses a separate interpreter to validate the rules defined in an MBDRC script against a concrete netlist. This approach facilitates the rapid co-evolution of the language syntax and its associated semantics, but does not fully utilize the power of the model-based approach. Instead, validation code for a given set of rules could be generated directly. The set of technologies for the implementation of the first prototype of the DSL was chosen with this extension in mind: Xtext offers rich support for generating Java code (among other languages) from a DSL script (see Bettini, 2016, ch. 5).
**Physical Layout Validation.** While the current implementation of the MBDRC language is based on the metamodel for the abstract circuit representation embodied by netlists, the same concepts hold for the validation of physical circuit layouts. Here, the validation focuses on the positioning and electrical connections between components on a printed circuit board (PCB). Common questions during PCB design revolve around minimum clearance between adjacent tracks on the board, physical dimensions of components, or violations of manufacturing process capabilities (e.g. minimum drill sizes for holes or minimum track widths that can be manufactured).
The EDA metamodel can be extended to also include the physical positioning of components on a printed circuit board, the tracks that correspond to nets, as well as the physical dimensions of the board and the components to be placed on it. By adding appropriate mathematical operations to the MBDRC language, it can then be used to answer these physical design questions.
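A clearance rule of this kind reduces to geometric predicates over board elements. As a toy illustration (pads approximated as circles, dimensions in millimetres; the names and the limit value are hypothetical, not taken from any manufacturing standard):

```python
import math

MIN_CLEARANCE_MM = 0.2  # illustrative manufacturing limit

def clearance(pad_a, pad_b):
    """Edge-to-edge distance between two circular pads given as (x, y, radius)."""
    dx = pad_a[0] - pad_b[0]
    dy = pad_a[1] - pad_b[1]
    return math.hypot(dx, dy) - pad_a[2] - pad_b[2]

def clearance_violations(pads):
    """Return index pairs of pads closer than the minimum clearance."""
    return [(i, j)
            for i in range(len(pads))
            for j in range(i + 1, len(pads))
            if clearance(pads[i], pads[j]) < MIN_CLEARANCE_MM]

pads = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (5.0, 0.0, 0.5)]
# pads 0 and 1 touch edge-to-edge (0.0 mm) and violate the rule
```

Real DRC engines use polygonal geometry and spatial indexing rather than a quadratic scan, but the rule itself remains a predicate of exactly this shape.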
**Test Plan Generation.** Our last suggestion for further research focuses on the testing of hardware components. Based on the model representation, as well as higher-level descriptions of requirements and associated test goals, we envision a strategy for planning testing activities: By establishing a model link between requirements, test goals, and the actual components under test in a single integrated model, test cases may be derived that fulfill these test goals. One example might be the goal of verifying the correct function of fault tolerance mechanisms by means of fault injection. The circuit model provides the necessary information about the available signals as well as potentially affected components, while the traceability of the model makes it possible to identify affected software components. The integrated view on both these domains allows for a clearer picture of the dependencies between components, as well as the development steps necessary to achieve certain test goals.
**REFERENCES**
A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts
Oleksiy Ostapenko$^{1,2,5}$ Lucas Caccia$^{1,3,5}$ Zhan Su$^4$ Nicolas Le Roux$^{1,2,3,5}$ Laurent Charlin$^{1,6,7}$ Alessandro Sordoni$^5$
$^1$Mila - Quebec AI Institute, $^2$Université de Montréal, $^3$McGill University, $^4$University of Copenhagen, $^5$Microsoft Research, $^6$HEC Montréal, $^7$Canada CIFAR AI Chair
Abstract
We study the applicability of mixtures of parameter-efficient experts (MoPEs) for instruction-tuning large decoder-only language models. Recent literature indicates that MoPEs might enhance performance on specific multi-task instruction-following datasets. In this paper, we extend such previous results and study the applicability of MoPEs in settings previously overlooked: a) with open-domain instruction-following datasets; b) with recent decoder-only models; and c) with downstream out-of-distribution test sets. We build on top of LLaMA1-13B/-7B and LLaMA2-13B. We study different variants of learned routing, namely per-example routing ([PE]) and a more expensive per-token ([PT]) routing. Overall, we are unable to substantiate in our setting the strong performance gains observed in related studies. We observe occasional enhancements of LLaMA2 fine-tuned on the Open Platypus dataset in 0-shot SNI evaluation, and in TruthfulQA evaluation after fine-tuning on a subset of Flan. We also shed some light on the inner workings of MoPEs by comparing different routing strategies. We find that [PE] routing tends to collapse at downstream evaluation time, reducing the importance of the router's application. Our code will be publicly released as part of this repository: https://github.com/microsoft/mttl.
1 Introduction
While fine-tuning often relies on end-to-end training of a dense base model, several recent works have demonstrated the efficacy of applying a mixture of parameter-efficient experts (MoPEs) techniques for fine-tuning a densely pretrained model on multi-task data [33, 2, 23]. Similar to mixture-of-expert approaches, MoPEs build on the intuition that each example or token should be processed by a subset of specialized experts. MoPEs differ from standard mixture-of-experts approaches in that each expert is a parameter-efficient adapter [10] and the mixture is obtained by combining the parameters of each expert. Concurrent work by Zadouri et al. [33] successfully applies MoPEs to encoder-decoder (T5) models in a multi-task dataset (T0 [21]) demonstrating performance comparable to full fine-tuning at a fraction of its cost.
In this work, we extend previous results and study MoPEs under different experimental conditions, i.e. a) with recent open-domain instruction-following datasets such as Platypus [12], Flan-100K and Evol-Instruct [32], b) with large decoder-only models such as LLaMA2-13B, and c) by testing them on out-of-distribution tasks, zero-shot and few-shot. Additionally, we study different design choices for building MoPEs on top of decoder-only transformers: we ablate per-example and per-token routing with both LoRA [11] and IA3 [15] adapters. When using LoRA adapters, we explore a more efficient type of expert consolidation, namely consolidation before the outer product of the low-rank adapters [2]. Across tasks, our results show that it is difficult to see any improvements with respect to the single-expert baseline, which boils down to standard parameter-efficient fine-tuning.
We find that the more efficient per-example routing performs similarly to the per-token routing used by [33]. We find that the more efficient way of consolidating experts, before the outer product, seems to hurt performance only when applying per-token routing. Taken together, our results indicate that the performance gains reported in the recent work by [33] do not hold in our setting, casting doubt on the relevance of MoPEs in more general scenarios.
2 Methods
2.1 Parameter-efficient fine-tuning (PEFT)
Parameter-efficient fine-tuning aims to develop methods that enable memory and compute efficient fine-tuning of LLMs. This is usually achieved by infusing a subset of trainable parameters into a large frozen backbone, these infused parameters are also known as adapters [10]. Two prominent examples of PEFT methods are LoRA [11] and IA3 [15].
LoRA infuses a set of learnable low-rank matrices $A$ and $B$ that approximate a weight matrix of the shape identical to the original layer’s matrix. The activation of a transformer layer becomes:
$$ h = W_0 x + s \ast AB^T x, \tag{1} $$
where $A \in \mathbb{R}^{d_{out} \times r}$, $B \in \mathbb{R}^{d_{in} \times r}$, $W_0$ are the frozen weights of the layer that LoRA is being applied to, and $s \geq 1$ is a tunable scaling hyperparameter. IA3 introduces a more efficient way of fine-tuning that consists of scaling the layer's output with a learnable vector $l$:
$$ h = l \odot (W_0 x). \tag{2} $$
Both types of adapters can be applied to any linear layer in the transformer model. While other PEFT methods exist [1, 18], we limit our attention to LoRA and IA3 adapters due to their suitability to MoE-based training as previously demonstrated by [2, 33].
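Eqs. (1) and (2) can be sketched in a few lines of NumPy. This is an illustrative toy (all dimensions and variable names here are ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, s = 8, 6, 2, 1.0

W0 = rng.normal(size=(d_out, d_in))   # frozen weights of the base layer
A = rng.normal(size=(d_out, r))       # LoRA low-rank factor A
B = rng.normal(size=(d_in, r))        # LoRA low-rank factor B
l = rng.normal(size=(d_out,))         # IA3 scaling vector
x = rng.normal(size=(d_in,))

# Eq. (1): frozen layer plus a scaled low-rank update s * A B^T x
h_lora = W0 @ x + s * (A @ (B.T @ x))

# Eq. (2): elementwise rescaling of the frozen layer's output
h_ia3 = l * (W0 @ x)
```

Note that computing $B^T x$ first keeps the cost at $O(r(d_{in} + d_{out}))$ per token instead of materializing the full $d_{out} \times d_{in}$ update.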
2.2 Mixture of Experts (MoE)
In the context of transformer models [26], mixture-of-experts methods usually come in two flavours: ones that use external routing information like cluster assignments [13, 8, 20], and ones that learn the routing end-to-end. In the latter category, which is particularly relevant in the context of this work, some of the feed-forward layers in the transformer are replaced with a set of $K$ experts $\{E(\cdot; \theta_1) \ldots E(\cdot; \theta_K)\}$ and a router $R$. The router $R$ is usually another feed-forward network that produces a $K$-dimensional vector of routing probabilities over the experts. Routing is usually realized through weighted averaging of the outputs of the experts, i.e. the output of an MoE layer is defined as $h = \sum_{k=1}^{K} R(x)_k E(x; \theta_k)$, where we denote the parameters of expert $k$ as $\theta_k$. Such routing can be prohibitively expensive (at least in a vanilla implementation), as each example has to be passed through each of the activated experts. To tackle this limitation, several works proposed sparse routing, where only the top-$k$ experts are used [7].
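The dense (soft) routing formula above can be sketched as follows, using toy linear experts and hypothetical variable names:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

K, d = 4, 8
rng = np.random.default_rng(1)
expert_weights = [rng.normal(size=(d, d)) for _ in range(K)]  # K toy linear experts
W_router = rng.normal(size=(d, K))                            # router R

def moe_layer(x):
    probs = softmax(x @ W_router)                      # R(x), shape (K,)
    outs = np.stack([W @ x for W in expert_weights])   # E(x; theta_k), shape (K, d)
    return probs @ outs                                # h = sum_k R(x)_k E(x; theta_k)

x = rng.normal(size=(d,))
h = moe_layer(x)
```

The cost of this dense version grows linearly with $K$, which is exactly the expense that sparse top-$k$ routing avoids.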
The majority of existing MoE methods are designed for the pre-training phase. More recently, several studies have begun to explore the application of MoEs in the fine-tuning phase, either by continuously fine-tuning pre-trained MoEs [23] or by applying MoE techniques atop dense pre-trained models [2, 33]. In this paper, we study the latter approaches, which we present in the next section.
2.3 Mixture of Parameter-Efficient Experts (MoPEs)
Recent work studied fine-tuning of LLMs using a mixture of parameter-efficient experts (MoPEs), which are MoEs that use PEFT adapters such as LoRA as experts [2, 33, 17]. MoPEs can be considered as a way of increasing capacity during fine-tuning of a dense pre-trained model and are expected to inherit some desirable properties of the MoEs, such as modularity and specialization.
**Merging** The output of a MoPE layer is computed by applying the average expert, whose parameters are a weighted combination of all experts' parameters at the MoPE layer, with weights coming from the router. This ensures scalability to a large number of experts, given that there is no need to compute the output of each expert independently, and differentiability of the expert routing operation. Given the representations at a layer in an LLM, the output of MoPEs is computed as:
$$h = E(x; \theta), \quad \theta = \sum_{k=1}^{K} R(x)_{k} \theta_{k}. \quad (3)$$
For example, in the case of LoRA experts, MoPEs compute:
$$\overline{AB} = \sum_{k=1}^{K} R(x)_{k} A_{k} B_{k}^{T}, \quad h = W_{0} x + s * \overline{AB}^{T} x, \quad (4)$$
which might be expensive given that, in the naive formulation, this requires explicitly building the $$d_{in} \times d_{out}$$ matrix $$\overline{AB}$$. We call this option MA (merging after the outer product). We can optimize this computation by first computing $$B_{k}^{T} x$$, for each $$k$$, then multiplying it with $$A_{k}$$, and then averaging the outputs. A cheaper alternative is to skip the averaging of the full LoRA matrices and average both $$A$$ and $$B$$ before taking the outer product:
$$\overline{A} = \sum_{k=1}^{K} R(x)_{k} A_{k}, \quad \overline{B} = \sum_{k=1}^{K} R(x)_{k} B_{k}, \quad h = W_{0} x + s * \overline{AB}^{T} x, \quad (5)$$
which leads to a less expressive combination given that the rank of the outer product cannot be greater than the rank of the LoRA matrices. We ablate these two variants in our experiments.
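The difference in expressiveness between the two merging orders can be checked numerically. Below is a NumPy sketch (hypothetical names; routing weights fixed rather than produced by a router, for illustration) comparing merging after the outer product (MA, Eq. 4) with merging before it (Eq. 5):

```python
import numpy as np

rng = np.random.default_rng(2)
K, d_in, d_out, r = 3, 8, 6, 2
As = [rng.normal(size=(d_out, r)) for _ in range(K)]
Bs = [rng.normal(size=(d_in, r)) for _ in range(K)]
w = np.array([0.5, 0.3, 0.2])   # stand-in for R(x), fixed for illustration

# Eq. (4), MA: average the full outer products; rank can reach min(K*r, d_out, d_in)
AB_after = sum(wk * Ak @ Bk.T for wk, Ak, Bk in zip(w, As, Bs))

# Eq. (5): average A and B first; the product's rank is capped at r
A_bar = sum(wk * Ak for wk, Ak in zip(w, As))
B_bar = sum(wk * Bk for wk, Bk in zip(w, Bs))
AB_before = A_bar @ B_bar.T

print(np.linalg.matrix_rank(AB_after), np.linalg.matrix_rank(AB_before))
```

With generic (random) factors, the MA combination attains rank $\min(Kr, d_{out}, d_{in})$ while the merge-before variant stays at rank at most $r$, which is the expressiveness gap discussed above.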
**Routing** LLMs compute a hidden representation for each token in the input sequence, i.e. $x \in \mathbb{R}^{s \times d}$, where $s$ is the sequence length. Therefore, routing can be done either per-example (PE) [17] or per-token (PT) [33]. Per-token routing computes different routing probabilities for each position $s_i$. Per-example routing conditions the router on the average of the input across the sequence dimension and applies the resulting routing to all tokens in $x$.
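The two routing granularities differ only in what the softmax is conditioned on; a sketch (illustrative names):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

s, d, K = 5, 8, 4                     # sequence length, hidden size, experts
rng = np.random.default_rng(3)
x = rng.normal(size=(s, d))           # token representations
W_router = rng.normal(size=(d, K))

# Per-token [PT]: one routing distribution per position, shape (s, K)
r_pt = softmax(x @ W_router)

# Per-example [PE]: route on the sequence-averaged token,
# then share that single distribution across all positions
r_pe = np.broadcast_to(softmax(x.mean(axis=0) @ W_router), (s, K))
```

[PT] therefore costs one router forward pass per token, while [PE] costs one per example.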
3 Experimental results
**Methods** We test the following methods. Single-expert methods, which are equivalent to applying a single low-rank adapter with rank 16, are denoted LoRA[R16]. MoPE variants are denoted with R, standing for router. Hence, R[PE, MA] (E8, R4) denotes a MoPE with 8 experts using per-example routing, where each expert is a LoRA adapter with rank 4 and experts are averaged after the outer product. Other routing variants are denoted according to this nomenclature.
**Fine-tuning** Our main experiments use LLaMA2-13B as a base model and fine-tune it on the Platypus instruction-following dataset [12]. This dataset consists of 25K input-output pairs gathered from different open datasets and curated specifically to enable fast and efficient fine-tuning of LLaMA2 models with PEFT adapters. We also experiment with other instruction datasets such as Flan-100K, a 100K-example subset of the Flan dataset [16].
In all experiments, the base model is quantized in 8-bit format [6] and we train PEFT adapters in float-32 precision. At generation time, we load the model in float-16 precision for inference. We use the hyperparameters introduced by [12] for fine-tuning: we train for one epoch with a maximum input length of 4096, cosine learning-rate annealing, a batch size of 16, a micro-batch size of 1, and a learning rate of 3e-4. We use LoRA ranks of 16 and 32 for the LoRA baselines and rank 4 for the MoPEs in order to keep the number of trainable parameters comparable. We fix the LoRA alpha to 16. In preliminary experiments, we noted that further increasing the LoRA rank results in decreased downstream performance.
**Metrics** To reliably evaluate the properties of our MoPEs, we analyze the following factors: downstream performance, routing diversity and routing entropy. The latter two factors, when considered together, provide valuable insights into the robustness of the routing mechanism, helping us detect potential routing collapses.
To evaluate the ability of the model to generalize to new tasks, we use zero- and two-shot evaluation on the Super-Natural Instructions dataset [29], which consists of 120 open-ended generation tasks; the MMLU [9] benchmark, a collection of 57 multiple-choice classification tasks formulated in natural language; ARC [5]; 0-shot TruthfulQA (TQA) [14]; and 10-shot HellaSwag [34].
We measure specialization through the lens of the average (normalized) entropy of the routing distribution:
$$H(r^{(l)}) = -\frac{1}{b} \sum_{i=1}^{b} E[\log(r^{(l)}_i)]/\log(k),$$
where $b$ is the batch size, $k$ is the number of experts and $r$ is the output of the router. We can average this quantity over the $L$ layers of the base model to gain a more aggregated view. We measure the diversity of the routing through (normalized) negative mutual information (MI) between examples and routings:
$$-\text{MI}^{(l)} = -E[\log(\bar{r}^{(l)})]/\log(k) - H(r^{(l)}),$$
where $\bar{r}^{(l)}$ is the batch-averaged routing for layer $l$.
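Both statistics can be computed directly from a batch of router outputs. The sketch below (illustrative; $0 \cdot \log 0$ treated as 0 via a small epsilon) also shows how collapsed versus diverse routing separates the two metrics:

```python
import numpy as np

def routing_stats(r, eps=1e-12):
    """r: (b, k) routing probabilities for one layer.
    Returns (normalized entropy, diversity = H(r_bar) - H(r))."""
    b, k = r.shape
    # normalized per-example entropy, averaged over the batch
    H = -np.mean(np.sum(r * np.log(r + eps), axis=1)) / np.log(k)
    # entropy of the batch-averaged routing distribution
    r_bar = r.mean(axis=0)
    H_bar = -np.sum(r_bar * np.log(r_bar + eps)) / np.log(k)
    return H, H_bar - H

# collapsed: every example routes to expert 0 -> zero entropy, zero diversity
collapsed = np.tile(np.eye(4)[0], (8, 1))
# diverse: examples spread evenly over experts -> zero entropy, maximal diversity
diverse = np.eye(4)[np.arange(8) % 4]
```

Low entropy with low diversity signals a routing collapse, while low entropy with high diversity indicates specialized but varied expert usage.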
### Results
<table>
<thead>
<tr>
<th></th>
<th>0-SNI</th>
<th>2-SNI</th>
<th>MMLU</th>
<th>0-ARC</th>
<th>25-ARC</th>
<th>TQA</th>
<th>10-HS</th>
<th>AVG.</th>
</tr>
</thead>
<tbody>
<tr>
<td>LLaMA2 13B</td>
<td>7.13</td>
<td>27.6</td>
<td>55.5</td>
<td>49.2</td>
<td>59.6</td>
<td>36.9</td>
<td>82.2</td>
<td>45.4</td>
</tr>
<tr>
<td>LoRA (R16)</td>
<td>33.4 ±1.3</td>
<td>54.5 ±0.5</td>
<td><strong>58.3 ±0.4</strong></td>
<td>52.9 ±0.1</td>
<td>59.9 ±0.4</td>
<td><strong>44.1 ±0.3</strong></td>
<td>82.3 ±0.1</td>
<td>55.1</td>
</tr>
</tbody>
</table>
Table 1: LLaMA2 13B fine-tuned with different versions of MoPEs and evaluated on 0-shot SNI, 2-shot SNI (Rouge-L, ↑) [28], 5-shot MMLU [9] (accuracy, ↑), 0-shot and 25-shot ARC [5], 0-shot TruthfulQA (TQA) [14], and 10-shot HellaSwag [34] downstream tasks. We did not run multiple seeds for the R[PT] variant due to its poor performance on 0-shot SNI.
<table>
<thead>
<tr>
<th></th>
<th>0-SNI</th>
<th>MMLU</th>
<th>0-ARC</th>
<th>25-ARC</th>
<th>TQA</th>
<th>10-HS</th>
</tr>
</thead>
<tbody>
<tr>
<td>LoRA (R16)</td>
<td>46.79</td>
<td>38.2</td>
<td>45.4</td>
<td>49.1</td>
<td>33.7</td>
<td>77.3</td>
</tr>
<tr>
<td>R[PE] (E4)</td>
<td>48.7</td>
<td>36.8</td>
<td>46.0</td>
<td>48.5</td>
<td>36.0</td>
<td>77.4</td>
</tr>
<tr>
<td>R[PE] (E8)</td>
<td>50.5</td>
<td>38.3</td>
<td>45.1</td>
<td>47.8</td>
<td>33.5</td>
<td>77.3</td>
</tr>
<tr>
<td>R[PE] (E12)</td>
<td>49.1</td>
<td>37.3</td>
<td>47.4</td>
<td>49.7</td>
<td>36.2</td>
<td>77.4</td>
</tr>
<tr>
<td>R[PT] (E4)</td>
<td>48.8</td>
<td>37.5</td>
<td>45.4</td>
<td>49.7</td>
<td>34.3</td>
<td>77.6</td>
</tr>
<tr>
<td>R[PT] (E8)</td>
<td>49.9</td>
<td>36.8</td>
<td>47.0</td>
<td>49.1</td>
<td>33.0</td>
<td>77.6</td>
</tr>
<tr>
<td>R[PT] (E12)</td>
<td>49.2</td>
<td>37.5</td>
<td>47.4</td>
<td>49.7</td>
<td>36.2</td>
<td>77.8</td>
</tr>
</tbody>
</table>
Table 2: Results on Flan-100K fine-tuned using LLaMA1 7B as a base model – this is a subset of 100K examples from Flan [16] dataset used in [30]. Here we only tested [PE] and [PT] versions without MA.
**Limited gains with MoPEs** We show our main results for the Open Platypus dataset in Table 1. We stick to 8 experts, as this performed best in preliminary experiments. Overall, across all downstream evaluation tasks, MoPEs perform very similarly to the baseline LoRA, with a slight improvement on 0-shot ARC and a slight decrease in performance on TruthfulQA as compared to the single-expert LoRA baseline. We also note that the MMLU result we obtained for LoRA is significantly higher than reported in the original Platypus paper [12] (they report 56.7%). Interestingly, one result stands out: we observe a consistent performance improvement for MoPEs on 0-shot SNI.
In order to ensure the robustness of these results on 0-shot SNI, we conducted additional evaluations using a slightly altered prompt$^1$. This resulted in further performance gains for all models, with plain LoRA now outperforming the best MoPE also on 0-shot SNI, as shown in Table 3.
$^1$We remove "Now complete the following example" from the prompt before giving the actual test example.
Table 4: Ablation. The effect of using uniform (UNIF-\(\mu\)) and training dataset average (DST-\(\mu\)) routing at downstream evaluation time.
The absence of substantial gains for MoPEs also transfers to LLaMA1 7B (c.f. Table 5). In the following, we use our standard prompt for 0-shot SNI evaluations to ensure consistency. In addition to Open Platypus, we also experiment with Evol-Instruct (c.f. Table 7) and a 4x larger dataset, Flan-100K. The results for Flan-100K are presented in Table 2. Here we observe slight improvements on 0-shot ARC and a 2.5 percentage-point improvement on TruthfulQA. This is a promising result suggesting that further investigation of MoPEs on larger datasets might lead to larger performance gains.
**No significant improvement with per-token routing** While per-token routing requires more computation, as every token is processed by the router, per-example routing only processes a single averaged token for each example.$^2$ In their recent work, [33] demonstrated that per-token routing has an advantage over per-example routing in the context of encoder-decoder architectures. In contrast to [33], we do not observe any significant gains when applying per-token routing. On the contrary, per-token routing appears to be more brittle: it only works in combination with merging after the outer product (MA) and is more sensitive to wrong routing at test time, as we discuss next.
**Importance of training and downstream routing** In this section, we examine the characteristics and importance of the learned router during downstream evaluation. To this end, we substitute the learned router at downstream evaluation time with either uniform routing, UNIF-\(\mu\), or the average routing distribution, denoted DST-\(\mu\) and derived by taking the average routing distribution at each layer over 20% of the training set. In Table 4 we observe that while uniform routing significantly decreases the downstream performance of [PE]-routed models on MMLU and 0-shot SNI, a smaller drop is present in the DST-\(\mu\) case.
To understand why DST-\(\mu\) still performs well, we examine the average entropy and diversity of the routing distribution, averaged over layers, for the validation set and the downstream datasets, which we plot in Figure 1. As evidenced by Figure 1b, there is a large drop in routing diversity between validation (in-distribution) and downstream evaluation on both MMLU and SNI (out-of-distribution). This, coupled with the low-entropy routing plotted in Figure 1a, suggests a routing collapse during downstream evaluation, which explains the large performance drop when using uniform routing and the insignificant drop when using dataset-average routing for the [PE] variant. For a full picture, we include per-layer diversity and entropy in Figures 3 and 4.
$^2$Importantly, in their version of per-example routing, a separate model was applied to produce example embeddings, while here, similarly to [17], we compute the average token.
Unlike per-example ([PE]) routing, per-token ([PT]) routing exhibits distinct characteristics. First, it has much larger entropy both in- and out-of-distribution. Second, it exhibits larger diversity, as we show in Figure 1b. Both observations point to the absence of collapse in [PT] routing, which also explains the large drop in performance when disabling routing at evaluation time (reliably seen on SNI in Table 4).
**IA3 underperforms LoRA** To evaluate the impact of different adapter architectures, we employ IA3. The results are summarized in Table 6. The validation loss curves are plotted in Figure 2. IA3 significantly underperforms LoRA in our experiments, introducing no improvement over the base model. We find that adding more experts does not result in performance gains, contrary to the observation of [33]. This can be attributed to the extreme parameter efficiency of IA3, which comes at the cost of lower versatility.
4 Related Work
**Instruction tuning** Instruction tuning is a technique for fine-tuning LLMs on input-response pairs formulated in natural language. Early progress in this field was mainly achieved by transforming classical NLP tasks, such as summarization and sentiment classification, into a unified text-to-text format [19]. This also involved scaling the number of tasks into the hundreds [21] or even thousands [4], and expanding the repertoire of instruction templates into the hundreds [31]. These works have shown that scaling the data leads to increasing performance across unseen tasks. Recently, a new thread of research has emerged: open-domain instruction fine-tuning, which seeks to fine-tune LLMs on more general-purpose instruction datasets [29, 32, 12, 24, 3]. Many of these works rely on synthetic datasets generated by superior LLMs [32, 27, 12, 24] and tune smaller decoder-only models such as LLaMA1 [25] or LLaMA2 [25]. In this work, we follow this recent trend and rely on LLaMA2 as our foundation model. We fine-tune on the recently proposed Open Platypus dataset [12] – a dataset containing 25K input-output samples curated specifically for effective and efficient PEFT-based fine-tuning of instruction-following models.
**Mixture of experts** Mixture-of-experts methods in NLP were originally developed for the pre-training phase, with the motivation of scaling the parameter count of large models while keeping the computational cost similar to a dense model by only sparsely activating a subset of experts [22, 7, 36, 35]. Several recent works focus on using MoEs in the context of fine-tuning pre-trained language models. Notably, [23] demonstrated that MoEs fine-tuned on a large corpus of instruction-following tasks extracted from classical NLP tasks outperform dense fine-tuning in terms of both 0-shot and few-shot performance on unseen tasks. Similarly to our work, [33] used parameter-efficient MoEs with experts represented by PEFT modules such as low-rank adapters [11]. They tune an encoder-decoder T5 [19] model on a dataset of 62 tasks obtained by converting classical NLP tasks into instruction-following format using a large set of prompt templates [21]. Here we follow the recent trend of fine-tuning a decoder-only model on a general but relatively small dataset for open-domain instruction following. The choice of a smaller dataset is also justified by the fact that we start from a larger but also much better base model, LLaMA2.
**Parameter-efficient fine-tuning** Parameter-efficient fine-tuning is a strategy aimed at minimizing the memory and time requirements of the fine-tuning process of language models (LMs). It achieves this by training a limited set of additional parameters within selected layers while keeping the backbone of the model frozen [10, 1]. Several prominent PEFT techniques have emerged recently: for instance, LoRA [11] introduces low-rank weights, while IA3 [15] scales the activations. These approaches are geared towards streamlining and enhancing the efficiency of the fine-tuning process.
5 Conclusion
Our investigation raises doubts about the effectiveness of mixture of parameter-efficient experts (MoPEs) for open-domain instruction fine-tuning, particularly within the constraints of the relatively small fine-tuning datasets employed in this domain. To overcome these limitations, future research
could explore the application of MoPEs in conjunction with external routing priors, such as those derived from data clustering, which may enhance routing quality.
References
Appendix A Additional Results
Figure 2: Validation loss converge for LoRA vs. IA3.
Figure 3: Routing diversity per layer for [PE] routing with 8 experts.
Figure 4: Routing entropy per layer for [PE] routing with 8 experts.
Table 5: Result of instruction tuning with different PEFT-MoE flavours and Platypus dataset on top of LLaMA1 13B and 7B models. We report 0-shot SNI performance (Rouge-L) and 5-shot MMLU performance (accuracy).
<table>
<thead>
<tr>
<th></th>
<th>LLaMA1 13B (BASE)</th>
<th>LLaMA1 7B (BASE)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>MMLU</td>
<td>0-SHOT SNI</td>
</tr>
<tr>
<td>BASE</td>
<td>42.914</td>
<td>8.966</td>
</tr>
<tr>
<td>LoRA (E4)</td>
<td>50.94 ± 0.17</td>
<td>42.57 ± 0.20</td>
</tr>
<tr>
<td>R[PE] (E8)</td>
<td>50.86 ± 1.0</td>
<td>37.29 ± 0.4</td>
</tr>
<tr>
<td>R[PE, MA] (E8)</td>
<td>50.62 ± 0.5</td>
<td>40.07 ± 0.1</td>
</tr>
<tr>
<td>R[PT] (E8)</td>
<td>51.08 ± 0.8</td>
<td>40.92 ± 0.3</td>
</tr>
<tr>
<td>R[PT, MA] (E8)</td>
<td>50.52 ± 0.7</td>
<td>40.43 ± 2.8</td>
</tr>
</tbody>
</table>
Table 6: Results with IA3 adapter and Open Platypus dataset with LLaMA2-13b as base.
<table>
<thead>
<tr>
<th></th>
<th>MMLU</th>
<th>0-SH. SNI</th>
<th>2-SH. SNI</th>
</tr>
</thead>
<tbody>
<tr>
<td>IA3</td>
<td>55.37</td>
<td>6.74</td>
<td>30.54</td>
</tr>
<tr>
<td>R[PE] (E4)</td>
<td>55.63</td>
<td>6.69</td>
<td></td>
</tr>
<tr>
<td>R[PE] (E12)</td>
<td>55.41</td>
<td>6.71</td>
<td></td>
</tr>
<tr>
<td>R[PE] (E20)</td>
<td>55.48</td>
<td>6.71</td>
<td></td>
</tr>
</tbody>
</table>
Table 7: Results on Evol-Instruct [32] fine-tuned using LLaMA2 13B as a base model. We fine-tuned for 1 epoch with a max. sequence length of 1024.
<table>
<thead>
<tr>
<th></th>
<th>0-SNI</th>
<th>MMLU</th>
<th>0-ARC</th>
<th>25-ARC</th>
<th>TQA</th>
<th>10-HS</th>
</tr>
</thead>
<tbody>
<tr>
<td>LLaMA2 13B</td>
<td>7.13</td>
<td>55.5</td>
<td>49.2</td>
<td>59.6</td>
<td>36.9</td>
<td>82.2</td>
</tr>
<tr>
<td>LoRA</td>
<td>17.2</td>
<td>55.5</td>
<td>49.9</td>
<td>59.8</td>
<td>47.1</td>
<td>82.4</td>
</tr>
<tr>
<td>R[PE] (E4, r=4)</td>
<td>20.6</td>
<td>54.3</td>
<td>50.9</td>
<td>60.3</td>
<td>45.0</td>
<td></td>
</tr>
<tr>
<td>R[PE] (E4, r=8)</td>
<td>24.2</td>
<td>54.7</td>
<td>51.4</td>
<td>60.6</td>
<td>47.0</td>
<td></td>
</tr>
<tr>
<td>R[PE] (E4, r=12)</td>
<td>21.7</td>
<td>54.9</td>
<td>50.3</td>
<td>60.7</td>
<td>47.6</td>
<td></td>
</tr>
<tr>
<td>R[PT,MA] (E4, r=4)</td>
<td>13.9</td>
<td>55.0</td>
<td>50.8</td>
<td>60.2</td>
<td>43.7</td>
<td></td>
</tr>
<tr>
<td>R[PT,MA] (E4, r=8)</td>
<td>15.9</td>
<td>55.1</td>
<td>51.7</td>
<td>60.2</td>
<td>45.0</td>
<td></td>
</tr>
<tr>
<td>R[PT,MA] (E4, r=12)</td>
<td>15.9</td>
<td>54.7</td>
<td>51.6</td>
<td>60.6</td>
<td>45.7</td>
<td></td>
</tr>
</tbody>
</table>
Problem Set 2 Solutions
MIT students: This problem set is due in lecture on Monday, September 24.
SMA students: This problem set is due after the video-conferencing session on Wednesday, September 26.
Reading: Chapters 6, 7, §5.1-5.3.
Both exercises and problems should be solved, but only the problems should be turned in. Exercises are intended to help you master the course material. Even though you should not turn in the exercise solutions, you are responsible for material covered by the exercises.
Mark the top of each sheet with your name, the course number, the problem number, your recitation instructor and time, the date, and the names of any students with whom you collaborated.
MIT students: Each problem should be done on a separate sheet (or sheets) of three-hole punched paper.
SMA students: Each problem should be done on a separate sheet (or sheets) of two-hole punched paper.
You will often be called upon to “give an algorithm” to solve a certain problem. Your write-up should take the form of a short essay. A topic paragraph should summarize the problem you are solving and what your results are. The body of your essay should provide the following:
1. A description of the algorithm in English and, if helpful, pseudocode.
2. At least one worked example or diagram to show more precisely how your algorithm works.
3. A proof (or indication) of the correctness of the algorithm.
4. An analysis of the running time of the algorithm.
Remember, your goal is to communicate. Graders will be instructed to take off points for convoluted and obtuse descriptions.
Exercise 2-1. Do Exercise 5.3-1 on page 104 of CLRS.
Solution:
```plaintext
RANDOMIZE-IN-PLACE(A)
n ← length[A]
swap A[1] ↔ A[RANDOM(1, n)]
for i ← 2 to n
    do swap A[i] ↔ A[RANDOM(i, n)]
```
For our base case, we have $i$ initialized to 2. Therefore we must show that for each possible 1-permutation, the subarray $A[1]$ contains this 1-permutation with probability $(n - i + 1)!/n! = 1/n$. Clearly this is the case, as each element has a chance of $1/n$ of being in the first position.
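A Python transcription of this solution (0-based indexing, so swapping `A[0]` with a uniformly random element plays the role of establishing the loop invariant before the first iteration):

```python
import random

def randomize_in_place(A):
    # Establish the invariant before the loop: put a uniformly
    # random element in the first position (chance 1/n each).
    n = len(A)
    if n == 0:
        return A
    j = random.randrange(n)          # RANDOM(1, n) in 1-based CLRS terms
    A[0], A[j] = A[j], A[0]
    # Then permute the remainder exactly as RANDOMIZE-IN-PLACE does.
    for i in range(1, n):
        j = random.randrange(i, n)   # RANDOM(i, n)
        A[i], A[j] = A[j], A[i]
    return A
```

Whatever the random choices, the result is always a permutation of the input.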
Exercise 2-2. Do Exercise 6.1-2 on page 129 of CLRS.
Solution:
By definition, a 1-element heap has a height of 0. Therefore $\lfloor \log n \rfloor = 0$ for $n = 1$, and our base case is correct.
Now we use induction, and assume that every tree with $n - 1$ nodes or fewer has height $\lfloor \log m \rfloor$, where $m$ is its number of nodes. Next we consider a tree with $n$ nodes. Looking at the $n$th node, we know its height is one greater than its parent's (and since we're not in the base case, all nodes have a parent). The parent of the $n$th node in the tree is the $\lfloor n/2 \rfloor$th node, so its height is $\lfloor \log \lfloor n/2 \rfloor \rfloor$. Then the $n$th node has height $1 + \lfloor \log \lfloor n/2 \rfloor \rfloor = \lfloor \log 2 + \log \lfloor n/2 \rfloor \rfloor = \lfloor \log (2\lfloor n/2 \rfloor) \rfloor = \lfloor \log n \rfloor$. Therefore, by induction, the height of an $n$-node tree is $\lfloor \log n \rfloor$.
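The claim is easy to spot-check numerically: walking parent links $i \leftarrow \lfloor i/2 \rfloor$ from the last node (1-based index $n$) up to the root counts the heap's height, which should equal $\lfloor \lg n \rfloor$:

```python
import math

def heap_height(n):
    # Depth of node n (1-based) in a binary heap, found by following
    # parent links i -> floor(i/2) up to the root. The last node is
    # the deepest one, so this depth is the heap's height.
    h, i = 0, n
    while i > 1:
        i //= 2
        h += 1
    return h
```

For example, `heap_height(7)` is 2 and `heap_height(8)` is 3, matching $\lfloor \lg 7 \rfloor$ and $\lfloor \lg 8 \rfloor$.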
Exercise 2-3. Do Exercise 6.4-3 on page 136 of CLRS.
Solution:
The running time of HEAPSORT on an array $A$ of length $n$ that is already sorted in increasing order is $\Theta(n \log n)$: even though the input is sorted, BUILD-HEAP rearranges it into a heap and the full extraction phase still runs.

The running time of HEAPSORT on an array $A$ of length $n$ that is sorted in decreasing order is also $\Theta(n \log n)$. Although the heap is built in linear time (a decreasing array is already a valid max-heap), every time the max element is removed and HEAPIFY is called, the sift-down traverses the full height of the tree.
Exercise 2-4. Do Exercise 7.2-2 on page 153 of CLRS.
Solution:
The running time will be $\Theta(n^2)$ because every time PARTITION is called, all of the remaining elements are put into the subarray of elements less than or equal to the pivot. The recurrence is $T(n) = T(n-1) + \Theta(n)$, which is clearly $\Theta(n^2)$.
**Exercise 2-5.** Do Problem 7-3 on page 161 of CLRS.
**Solution:**
(a) This sort is intuitively correct because the largest third of the elements will eventually be sorted among their peers. If they start in the first third of the array, they are moved into the middle third; if they are in the middle or last third, they are sorted into their proper positions. Similarly, the elements belonging to each of the other thirds are sorted into position by the three sub-sorts.
(b) $T(n) = 3T(2n/3) + \Theta(1)$
which solves to $\Theta(n^{\log_{3/2} 3}) \approx \Theta(n^{2.7})$.
(c) STOOGE-SORT is slower than all of the other algorithms we have studied: INSERTION SORT = $\Theta(n^2)$, MERGE SORT = $\Theta(n \log n)$, HEAPSORT = $\Theta(n \log n)$, and QUICKSORT = $\Theta(n^2)$ in the worst case. Therefore all the other sorts are faster, and these professors do not deserve tenure for this work!
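The three-recursive-call structure behind the recurrence above can be seen in a runnable Python sketch (the function name is ours):

```python
def stooge_sort(a, i=0, j=None):
    """STOOGE-SORT on a[i..j]: swap the ends if out of order, then
    recursively sort the first two-thirds, the last two-thirds, and
    the first two-thirds again."""
    if j is None:
        j = len(a) - 1
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]
    if j - i + 1 >= 3:
        k = (j - i + 1) // 3      # size of one third, rounded down
        stooge_sort(a, i, j - k)  # first two-thirds
        stooge_sort(a, i + k, j)  # last two-thirds
        stooge_sort(a, i, j - k)  # first two-thirds again
    return a
```

Three recursive calls on inputs of size $2n/3$ give $T(n) = 3T(2n/3) + \Theta(1)$, hence the $n^{\log_{3/2} 3}$ growth.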
**Problem 2-1. Average-case performance of quicksort**
We have shown that the expected time of randomized quicksort is $O(n \log n)$, but we have not yet analyzed the average-case performance of ordinary quicksort. We shall prove that, under the assumption that all input permutations are equally likely, not only is the running time of ordinary quicksort $O(n \log n)$, but it performs essentially the same comparisons and exchanges between input elements as randomized quicksort.
Consider the implementation of PARTITION given in lecture on a subarray $A[p \ldots r]$:
```
PARTITION(A, p, r)
1  x ← A[p]
2  i ← p
3  for j ← p + 1 to r
4       do if A[j] ≤ x
5             then i ← i + 1
6                  exchange A[i] ↔ A[j]
7  exchange A[p] ↔ A[i]
8  return i
```
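The pseudocode above translates directly to Python (0-based indices; the function name is ours):

```python
def partition(a, p, r):
    """Lecture PARTITION on a[p..r]: pivot x = a[p]; elements <= x are
    swapped into the prefix, then the pivot moves to its final slot i."""
    x = a[p]
    i = p
    for j in range(p + 1, r + 1):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[p], a[i] = a[i], a[p]
    return i
```

After the call, every element left of the returned index is at most the pivot and every element right of it is greater.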
Let $S$ be a set of distinct elements which are provided in random order (all orders equally likely) as the input array $A[p \ldots r]$ to PARTITION, where $n = r - p + 1$ is the size of the array. Let $x$ denote the initial value of $A[p]$.
(a) Argue that \(A[p+1 \ldots r]\) is a random permutation of \(S \setminus \{x\}\), that is, that all permutations of the input subarray \(A[p+1 \ldots r]\) are equally likely.
**Solution:**
Given that \(A[p \ldots r]\) is random (all orders are equally likely), there are \(n!\) possible permutations of the \(n = r - p + 1\) elements. Each element has a \(\frac{1}{n}\) probability of being chosen as the pivot, therefore the number of permutations of the remaining elements is \(n! \cdot \frac{1}{n} = (n-1)!\). Consequently the \((n-1)!\) permutations are equally likely.
Define \(\delta : S \to \{-1, 0, +1\}\) as follows:
\[
\delta(s) = \begin{cases}
-1 & \text{if } s < x, \\
0 & \text{if } s = x, \\
+1 & \text{if } s > x.
\end{cases}
\]
(b) Consider two input arrays \(A_1[p \ldots r]\) and \(A_2[p \ldots r]\) consisting of the elements of \(S\) such that \(\delta(A_1[i]) = \delta(A_2[i])\) for all \(i = p, p + 1, \ldots, r\). Suppose that we run \textsc{Partition} on \(A_1[p \ldots r]\) and \(A_2[p \ldots r]\) and trace the two executions to record the branches taken, indices calculated, and exchanges performed — but not the actual array values manipulated. Argue briefly that the two execution traces are identical. Argue further that \textsc{Partition} performs the same permutation on both inputs.
**Solution:**
\textsc{Partition} takes different branches based only on the comparisons made in the $\delta$ function. This is clear by observing line 4 of the procedure, as it is the only place where the sequence of instructions may differ. As the arrays have identical $\delta()$ values, they must take the same branches, calculate the same indices, and perform the same exchanges. Consequently \textsc{Partition} performs the same permutation on both arrays, which follows directly as the exchanges performed are identical.
Define a sequence \(F = \langle f_1, f_2, \ldots, f_n \rangle\) to be an \((n, k)\) **input pattern** if \(f_1 = 0, f_i \in \{-1, +1\}\) for \(i = 2, 3, \ldots, n\), and \(|\{i : f_i = -1\}| = k - 1\).
Define a sequence \(F = \langle f_1, f_2, \ldots, f_n \rangle\) to be an \((n, k)\) **output pattern** if
\[
f_i = \begin{cases}
-1 & \text{if } i < k, \\
0 & \text{if } i = k, \\
+1 & \text{if } i > k.
\end{cases}
\]
We say that a permutation \(\langle s_1, s_2, \ldots, s_n \rangle\) of \(S\) **satisfies** a pattern \(F = \langle f_1, f_2, \ldots, f_n \rangle\) if \(\delta(s_i) = f_i\) for all \(i = 1, 2, \ldots, n\).
(c) How many \((n, k)\) input patterns are there? How many \((n, k)\) output patterns are there?
**Solution:**
There are \(\binom{n-1}{k-1}\) \((n, k)\) input patterns because we can choose \(k - 1\) positions out of \(n - 1\) possible positions to have \(\delta = -1\). There is one \((n, k)\) output pattern because the pattern must be exactly \(k - 1\) negative ones followed by a 0, followed by \(n - k\) ones.
(d) How many permutations of \(S\) satisfy a particular \((n, k)\) input pattern? How many permutations of \(S\) satisfy a particular \((n, k)\) output pattern?
**Solution:**
There are \((n - k)!\,(k - 1)!\) permutations of \(S\) that satisfy a particular input pattern. This is the number of ways to rearrange the elements with a \(\delta\) value of \(-1\) amongst themselves, times the number of ways to rearrange those with a value of \(+1\) amongst themselves. There are likewise \((n - k)!\,(k - 1)!\) permutations that satisfy a particular output pattern, for the same reason.
Let \(F = \langle f_1, f_2, \ldots, f_n \rangle\) be an \((n, k)\) input pattern, and let \(F' = \langle f'_1, f'_2, \ldots, f'_n \rangle\) be an \((n, k)\) output pattern. Define \(S|_F\) to be the set of permutations of \(S\) that satisfy \(F\), and likewise define \(S|_{F'}\) to be the set of permutations of \(S\) that satisfy \(F'\).
(e) Argue that PARTITION implements a bijection from \(S|_F\) to \(S|_{F'}\). (Hint: Use the fact from group theory that composing a fixed permutation with each of the \(n!\) possible permutations yields the set of all \(n!\) permutations.)
**Solution:**
All members of \(S|_F\) satisfy \(F\) and so they all have the same result when the \(\delta\) function is applied to its elements. Therefore by part (b) when all these inputs are given to PARTITION they are subject to the same permutation. Using the hint, we then know that after all of the distinct inputs are run through PARTITION that they will produce all \((n - k)! (k - 1)!\) distinct outputs. From part (d) we know that \(S|_F\) and \(S|_{F'}\) are the same size, and also we have proven that PARTITION is onto, and therefore PARTITION must be a bijection!
(f) Suppose that before the call to PARTITION, the input subarray \(A[p+1..r]\) is a random permutation of \(S - \{x\}\), where \(x = A[p]\). Argue that after PARTITION, the two resulting subarrays are random permutations of their respective elements.
**Solution:**
Using our solution from part (e), we know that after PARTITION is run on \(S|_F\), we get all values in the set \(S|_{F'}\). Therefore we get all permutations of the \(n - k\) ones and all permutations of the \(k - 1\) negative ones. Furthermore, we get each subarray permutation an equal number of times, and so the subarrays are also random permutations.
(g) Use induction to show that, under the assumption that all input permutations are equally likely, at each recursive call of QUICKSORT($A, p, r$), every element of $S$ belonging to $A[p..r]$ is equally likely to be the pivot $x = A[p]$.
Solution:
The base case is the initial array: we know that it is randomly permuted, so by parts (a) and (f) each of its subarrays will also be randomly permuted after PARTITION. Therefore we can inductively apply (f) at each partition to prove that every subarray is also randomly permuted, and hence every element of $A[p..r]$ is equally likely to be its first element, the pivot.
(h) Use the analysis of RANDOMIZED-QUICKSORT to conclude that the average-case running time of QUICKSORT on $n$ elements is $O(n \log n)$.
Solution:
By part (g) we know that under the assumption that the input pattern is random every element is equally likely to be chosen as the pivot at each recursive call which then produces the same random distribution of quicksort traces as RANDOMIZED-QUICKSORT. Therefore as their distribution is the same, the expected-case analysis for RANDOMIZED-QUICKSORT will apply to the average case of QUICKSORT. Therefore the average case of QUICKSORT also takes $O(n \log n)$ time.
Problem 2-2. Analysis of $d$-ary heaps
A $d$-ary heap is like a binary heap, but (with one possible exception) nonleaf nodes have $d$ children instead of 2 children.
(a) How would you represent a $d$-ary heap in an array?
Solution:
The $d$-ary heap would be similar to a binary heap with the parent and child indexes calculated as follows:
$Parent[i] = \lfloor i / d \rfloor$
$j$th-Child[$i$] = $d \cdot i + j$ where $j = 0 \ldots d - 1$
The root of the tree would be at index $i = 1$.
Alternate Solution: A $d$-ary heap can be represented in a 1-dimensional array as follows. The root is kept in $A[1]$, its $d$ children are kept in order in $A[2]$ through $A[d + 1]$, their children are kept in order in $A[d + 2]$ through $A[d^2 + d + 1]$, and so on. The two procedures that map a node with index $i$ to its parent and to its $j$th child (for $1 \leq j \leq d$), respectively, are:
D-ARY-PARENT \( (i) \)
\[
\text{return } \left\lceil \frac{(i - 1)}{d} \right\rceil
\]
D-ARY-CHILD \( (i, j) \)
\[
\text{return } d(i - 1) + j + 1
\]
To convince yourself that these procedures really work, verify that
\[
\text{D-ARY-PARENT(\text{D-ARY-CHILD}(i, j))} = i,
\]
for any \( 1 \leq j \leq d \). Notice that the binary heap procedures are a special case of the above procedures when \( d = 2 \).
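The round-trip identity above can be checked mechanically; a small Python sketch (function names ours, 1-based node indices as in the handout):

```python
def d_ary_parent(i, d):
    """Parent of node i: ceil((i - 1) / d), computed in integer arithmetic."""
    return (i + d - 2) // d

def d_ary_child(i, j, d):
    """j-th child (1 <= j <= d) of node i."""
    return d * (i - 1) + j + 1
```

For $d = 2$ these reduce to the binary-heap maps $\lfloor i/2 \rfloor$, $2i$, and $2i + 1$.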
(b) What is the height of a \( d \)-ary heap of \( n \) elements in terms of \( n \) and \( d \)?
Solution:
Correction:
A \( d \)-ary heap would have a height of \( \Theta(\log_d n) \). We know that
\[
1 + d + d^2 + \ldots + d^{h-1} < n \leq 1 + d + d^2 + \ldots + d^h
\]
\[
\frac{d^h - 1}{d - 1} < n \leq \frac{d^{h+1} - 1}{d - 1}
\]
\[
d^h < n(d - 1) + 1 \leq d^{h+1}
\]
\[
h < \log_d(n(d - 1) + 1) \leq h + 1
\]
which solves to \( h = \left\lceil (\log_d(n(d - 1) + 1) - 1) \right\rceil \).
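The closed form can be cross-checked against a direct computation; a hedged Python sketch (names ours; $\lceil \log_d x \rceil$ is computed with exact integers to avoid floating-point error at exact powers of $d$):

```python
def d_ary_height(n, d):
    """Closed form h = ceil(log_d(n*(d-1) + 1)) - 1, via exact integers."""
    target = n * (d - 1) + 1
    h, power = 0, 1
    while power < target:      # find the smallest h with d^h >= target
        power *= d
        h += 1
    return h - 1

def d_ary_height_by_walking(n, d):
    """Height by walking parent pointers from the last node (index n)."""
    h, i = 0, n
    while i > 1:
        i = (i + d - 2) // d   # 1-based parent: ceil((i - 1) / d)
        h += 1
    return h
```

Both agree on all small cases, which lends confidence to the algebra above.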
(c) Give an efficient implementation of \text{EXTRACT-MAX} in a \( d \)-ary max-heap. Analyze its running time in terms of \( d \) and \( n \).
Solution:
HEAPIFY($A$, $i$, $n$, $d$)
1  $j \leftarrow i$
2  for $k \leftarrow 0$ to $d - 1$
3       do if $d \cdot i + k \leq n$ and $A[d \cdot i + k] > A[j]$
4             then $j \leftarrow d \cdot i + k$
5  if $j \neq i$
6       then exchange $A[i] \leftrightarrow A[j]$
7            HEAPIFY($A$, $j$, $n$, $d$)
The running time of HEAPIFY is $O(d \log_d n)$ because at each depth we do $d$ comparisons, and we may recurse to the full depth of the tree. In HEAPIFY we compare the $i$th node and each of its children to find the maximum value among these nodes. Then, if the maximum child is greater than the $i$th node, we exchange the two nodes and recurse on the child.
**EXTRACT-MAX**($A$, $n$)
1. $max \leftarrow A[1]$
2. $A[1] \leftarrow A[n]$
3. $n \leftarrow n - 1$
4. **HEAPIFY**($A$, 1, $n$, $d$)
5. return $max$
The running time of this algorithm is clearly constant work plus the time of **HEAPIFY**, which as shown above is $O(d \log_d n)$. **EXTRACT-MAX** works by storing the value of the maximum element, moving the last element into the root of the heap, and then calling **HEAPIFY** to restore the heap property.
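A hedged Python sketch of HEAPIFY and EXTRACT-MAX (we use 0-based indices, so node $i$ has children $d \cdot i + 1$ through $d \cdot i + d$; function names are ours):

```python
def max_heapify(a, i, n, d):
    """Sift a[i] down in a 0-based d-ary max-heap of size n."""
    largest = i
    first = d * i + 1                      # index of the first child of i
    for j in range(first, min(first + d, n)):
        if a[j] > a[largest]:
            largest = j
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n, d)      # at most one call per level

def extract_max(a, d):
    """Remove and return the maximum of a d-ary max-heap a: O(d log_d n)."""
    top = a[0]
    a[0] = a[-1]                           # move the last element to the root
    a.pop()
    max_heapify(a, 0, len(a), d)
    return top
```

A heap can be built for testing by calling `max_heapify` bottom-up over all indices.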
**d)** Give an efficient implementation of **INSERT** in a $d$-ary max-heap. Analyze its running time in terms of $d$ and $n$.
**Solution:**
See next problem part for **INCREASE-KEY** definition.
**INSERT**($A$, $k$, $n$, $d$)
1. $n \leftarrow n + 1$
2. $A[n] \leftarrow -\infty$
3. **INCREASE-KEY**($A$, $n$, $k$)
From the following problem part, we know **INCREASE-KEY** runs in $O(\log_d n)$ time; therefore, since **INSERT** only adds constant-time operations, it is also $O(\log_d n)$. It is trivially correct, as all calculations involving the number of children are performed by **INCREASE-KEY**.
**e)** Give an efficient implementation of **INCREASE-KEY**($A$, $i$, $k$), which first sets $A[i] \leftarrow \max(A[i], k)$ and then updates the $d$-ary max-heap structure appropriately. Analyze its running time in terms of $d$ and $n$.
**Solution:**
INCREASE-KEY($A, i, k$)
1  $A[i] \leftarrow \max(A[i], k)$
2  if $k = A[i]$
3       then while $i > 1$ and $A[\lfloor i/d \rfloor] < A[i]$
4                do exchange $A[i] \leftrightarrow A[\lfloor i/d \rfloor]$
5                   $i \leftarrow \lfloor i/d \rfloor$
Our implementation loops at most once per level of the tree, so it runs in $O(\log_d n)$ time. INCREASE-KEY repeatedly compares the increased node to its parent and exchanges them if the heap property is violated. Therefore, once the algorithm terminates, we know that the heap property is restored and the node has the correct value.
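A hedged Python sketch of INCREASE-KEY and INSERT (0-based indices, so the parent of node $i$ is $(i-1)\,\mathrm{div}\,d$; names are ours):

```python
def increase_key(a, i, k, d):
    """Set a[i] = max(a[i], k), then sift the node up toward the root."""
    if k > a[i]:
        a[i] = k
        while i > 0:
            p = (i - 1) // d           # 0-based parent index
            if a[p] >= a[i]:
                break                  # heap property restored
            a[i], a[p] = a[p], a[i]    # exchange with the parent
            i = p

def heap_insert(a, k, d):
    """Append a -infinity sentinel, then raise it to k: O(log_d n)."""
    a.append(float('-inf'))
    increase_key(a, len(a) - 1, k, d)
```

Both operations touch at most one node per level, matching the $O(\log_d n)$ bound.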
(f) When might it be better to use a $d$-ary heap instead of a binary heap?
Solution:
It would be better to use a $d$-ary heap when it is predicted that the heap will do many more INSERTs and INCREASE-KEYs than EXTRACT-MAXs because INSERT and INCREASE-KEY are faster algorithms as $d$ increases while EXTRACT-MAX gets slower.
Accidental Sensitive Data Leaks Prevention via Formal Verification
Madalina G. Ciobanu¹, Fausto Fasano¹, Fabio Martinelli², Francesco Mercaldo¹,² and Antonella Santone¹
¹Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
²Istituto di Informatica e Telematica, Consiglio Nazionale delle Ricerche, Pisa, Italy
Keywords: Android, Security, Model Checking, Formal Methods, Privacy.
Abstract: Our mobile devices, compared to their desktop counterparts, store a lot of sensitive and private information. Considering how easily permissions to sensitive and critical resources are granted in the mobile environment, for example in Android, the developer sometimes unwittingly causes the leakage of sensitive information, endangering the privacy of users. Starting from these considerations, in this paper we propose a method aimed at automatically localising the code where there is the possibility of an information leak. In a nutshell, we discuss a method aimed at checking whether sensitive information is processed in a way that violates specific rules. We employ code instrumentation to annotate sensitive data, exploiting the model checking technique. To show the effectiveness of the proposed method, a case study is presented.
1 INTRODUCTION
Mobile applications are widely used in different sectors, with billions of smartphone owners using mobile apps daily. The evolution of mobile software requires more attention, appropriate skills, and a better comprehension of the development, maintenance, and engineering phases of applications.

Nowadays, mobile apps need to seamlessly interact with back-end servers, which can be accomplished with numerous alterations and adjustments during the development phase (Harleen K. Flora, 2014). Smartphones, more than desktop and laptop computers, have many sensors that can increase usability but, at the same time, increase the overall complexity of the apps. With such sensors, indeed, it is possible to determine position, noise level, light, usage angle, movement, and so on. Many of these sensors and peripherals produce sensitive data that must be managed following specific regulations. The GDPR ¹ introduces penalties, including substantial administrative fines ², that can be imposed for any infringement of the Regulation, such
¹The General Data Protection Regulation (GDPR) (EU) 2016/679 is a regulation in EU law on data protection and privacy.
²Fines can be up to 20 million euros, or 4% of the firm’s worldwide annual revenue from the preceding financial year, whichever amount is higher. https://gdpr.eu/fines
tion itself, there is no ready-to-use solution to check whether an app is accidentally leaking sensitive information by unsafely saving it to the device memory or sending it to the back-end without the explicit consent of the user. Although code inspection can be very effective in identifying such situations, several developers underrate it in favor of testing (Scanniello et al., 2013), even when the former technique would be more adequate.
In this paper, we propose a method to formally check whether sensitive information is processed in a way that violates specific rules. The proposed method uses code instrumentation to annotate sensitive data and is based on the model checking technique, using temporal logic formulae to check whether the processing of sensitive data can break some defined rules. Moreover, the approach assists the software engineer in identifying the reason why a rule is not satisfied, providing the relevant traces from the simulation environment.
The rest of the paper proceeds as follows: in Section 2, background notions about model checking are provided and the proposed method to detect sensitive information leakage in the Android environment is presented. In Section 3, a case study that illustrates the proposed method applied to a real scenario is presented. Related work is discussed in Section 4, while conclusions and future work are discussed in Section 5.
2 PROPOSED METHOD
In this section we briefly introduce background notions about model checking, in the next we describe the proposed method to detect sensitive data leakage in Android environment.
2.1 Model Checking and Mu-calculus Logic
Model checking is a formal method for determining if a model of a system satisfies a correctness specification (Clarke et al., 2001; Ceccarelli et al., 2014; Santone, 2002; Santone, 2011; Barbui et al., 2005; Gradara et al., 2005). A model of a system consists of a Labelled Transition System (LTS). A specification or property is a logical formula. A model checker then accepts two inputs, an LTS and a temporal formula, and returns true if the system satisfies the formula and false otherwise.
An LTS comprises some number of states, with arcs between them labelled by activities of the system. An LTS is specified by:
- a set $S$ of states;
- a set $L$ of labels or actions;
- a set of transitions $T \subseteq S \times L \times S$.
Transitions are given as triples $(start, label, end)$.
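The triple $(S, L, T)$ maps directly onto plain data structures; a minimal Python sketch with a hypothetical two-action system (state and label names are illustrative only):

```python
# A small hypothetical LTS: states S, labels L, transitions T (triples).
S = {"s0", "s1", "s2"}
L = {"a", "b"}
T = {("s0", "a", "s1"), ("s0", "b", "s2"), ("s1", "b", "s2")}

def successors(state, label):
    """States reachable from `state` in one step via `label`."""
    return {end for (start, lab, end) in T if start == state and lab == label}
```

This successor function is the only primitive a model checker needs to evaluate the modal operators introduced below.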
With the aim of expressing properties of the system, we consider the modal mu-calculus (Stirling, 1989), which is one of the most important logics in model checking.
The syntax of the mu-calculus is the following, where $K$ ranges over sets of actions (i.e., $K \subseteq L$) and $Z$ ranges over variables:
$$\varphi ::= \top \mid \bot \mid Z \mid \varphi \land \varphi \mid \varphi \lor \varphi \mid [K] \varphi \mid (K) \varphi \mid \nu Z. \varphi \mid \mu Z. \varphi$$
A fixpoint formula may be either $\mu Z. \varphi$ or $\nu Z. \varphi$, where $\mu Z$ (resp. $\nu Z$) binds free occurrences of $Z$ in $\varphi$. An occurrence of $Z$ is free if it is not within the scope of a binder $\mu Z$ (resp. $\nu Z$). A formula is closed if it contains no free variables. $\mu Z. \varphi$ is the least fixpoint of the recursive equation $Z = \varphi$, while $\nu Z. \varphi$ is the greatest one. From now on we consider only closed formulas.
Scopes of fixpoint variables, free and bound variables, can be defined in the mu-calculus in analogy with variables of first order logic.
The satisfaction of a formula $\varphi$ by a state $s$ of a transition system is defined as follows:
- each state satisfies $\top$ and no state satisfies $\bot$;
- a state satisfies $\varphi_1 \lor \varphi_2$ ($\varphi_1 \land \varphi_2$) if it satisfies $\varphi_1$ or (and) $\varphi_2$.
$[K] \varphi$ is satisfied by a state which, for every execution of an action in $K$, evolves to a state obeying $\varphi$. $(K) \varphi$ is satisfied by a state which can evolve to a state obeying $\varphi$ by performing an action in $K$.
For example, $(a) \varphi$ denotes that there is an $a$-successor in which $\varphi$ holds, while $[a] \varphi$ denotes that for all $a$-successors $\varphi$ holds.
The precise definition of the satisfaction of a closed formula $\varphi$ by a state $s$ (written $s \models \varphi$) is given in Table 1.
We consider the CWB-NC\(^3\) (Concurrency Work-Bench of the New Century) as formal verification environment. It is one of the most popular environments for verifying systems. In the CWB-NC the verification of temporal logic formulae is based on model checking (Clarke et al., 2001).
2.2 Information Leakage Detection
In this section, we describe our approach aimed at detecting possible sensitive information leakage in Android applications.
\(^3\)https://www3.cs.stonybrook.edu/~cwb/
Table 1: Satisfaction of a closed formula by a state.
<table>
<tbody>
<tr>
<td>$p \models tt$ for every state $p$; no state satisfies $ff$</td>
</tr>
<tr>
<td>$p \models \phi \land \psi$ iff $p \models \phi$ and $p \models \psi$</td>
</tr>
<tr>
<td>$p \models \phi \lor \psi$ iff $p \models \phi$ or $p \models \psi$</td>
</tr>
<tr>
<td>$p \models [K] \phi$ iff $\forall p'. \forall \alpha \in K.\; p \xrightarrow{\alpha} p' \implies p' \models \phi$</td>
</tr>
<tr>
<td>$p \models \langle K \rangle \phi$ iff $\exists p'. \exists \alpha \in K.\; p \xrightarrow{\alpha} p' \text{ and } p' \models \phi$</td>
</tr>
<tr>
<td>$p \models \nu Z.\phi$ iff $p \models \nu Z^n.\phi$ for all $n$</td>
</tr>
<tr>
<td>$p \models \mu Z.\phi$ iff $p \models \mu Z^n.\phi$ for some $n$</td>
</tr>
</tbody>
</table>
where:
- for each $n$, $\nu Z^n.\phi$ and $\mu Z^n.\phi$ are defined as:
$$\nu Z^0.\phi = tt \quad \mu Z^0.\phi = ff$$
$$\nu Z^{n+1}.\phi = \phi[\nu Z^n.\phi/Z] \quad \mu Z^{n+1}.\phi = \phi[\mu Z^n.\phi/Z]$$
where the notation $\phi[\psi/Z]$ indicates the substitution of $\psi$ for every free occurrence of the variable $Z$ in $\phi$.
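On a finite LTS, the approximant definitions above suggest computing fixpoints by iteration. The sketch below is a toy evaluator of our own (not the CWB-NC algorithm); the nested-tuple encoding of formulas is an assumed convention:

```python
def check(states, trans, phi, env=None):
    """Evaluate a mu-calculus formula on a finite LTS, returning the set
    of satisfying states. Formulas are nested tuples, e.g.
    ('nu', 'Z', ('and', ('dia', {'a'}, 'tt'), ('box', {'a'}, 'Z')))."""
    env = env or {}
    if phi == 'tt':
        return set(states)
    if phi == 'ff':
        return set()
    if isinstance(phi, str):                    # a bound fixpoint variable
        return env[phi]
    op = phi[0]
    if op == 'and':
        return check(states, trans, phi[1], env) & check(states, trans, phi[2], env)
    if op == 'or':
        return check(states, trans, phi[1], env) | check(states, trans, phi[2], env)
    if op == 'box':                             # [K]phi: all K-successors satisfy phi
        sat = check(states, trans, phi[2], env)
        return {s for s in states
                if all(t in sat for (u, a, t) in trans if u == s and a in phi[1])}
    if op == 'dia':                             # <K>phi: some K-successor satisfies phi
        sat = check(states, trans, phi[2], env)
        return {s for s in states
                if any(t in sat for (u, a, t) in trans if u == s and a in phi[1])}
    if op in ('nu', 'mu'):                      # iterate the approximants Z^n
        cur = set(states) if op == 'nu' else set()
        while True:
            nxt = check(states, trans, phi[2], {**env, phi[1]: cur})
            if nxt == cur:
                return cur
            cur = nxt
    raise ValueError(f"unknown operator: {op!r}")
```

The iteration mirrors the table: $\nu$ starts from all states and shrinks, $\mu$ starts from the empty set and grows, and both terminate on a finite LTS because the sequence of approximants is monotone.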
It is worth noting that the approach is not limited to the Android platform; in this paper, we focus on it because of its broad diffusion, the ease and multiplicity of ways data can be shared with other applications, and the availability of numerous source code repositories to assess the approach.
The proposed method, on one hand uses code instrumentation to provide domain expert knowledge and, on the other hand, models the Android application as a labelled transition system (LTS) capturing the behaviour of the app, and evaluating temporal properties directly on this LTS. Figure 1 shows the workflow of the proposed method.
The first step of the proposed method is the *Code instrumentation*, aimed at providing the basis for the *Model checking* phase. In particular, two kinds of annotations can be specified: (i) which data should be considered sensitive, and (ii) where, in the app, the user is informed about the way her data are stored or processed. An example of the former instrumented code is shown in Figure 2. Here, the mData variable stores meta-data of a picture in the device photo gallery. The picture meta-data include information, such as the geolocation, other EXIF data, user tags, and the picture rating, that could be considered sensitive. Thus, in our example, the app developer decided to annotate the variable, informing the verification tool that the aforementioned variable must be checked against the specified rules. An example of the latter instrumented code is shown in Figure 3. In this case, the annotation is used as a checkpoint to state where the user is informed and to determine whether she agreed to the sensitive data processing.
The second phase of the method consists of the formal verification of a set of rules. We focus here on a GDPR-compliance rule stating that processing personal data without user consent, or transferring personal data to a non-GDPR-compliant recipient, is forbidden.
In order to describe mobile applications, we adopt Milner's Calculus of Communicating Systems (CCS) (Milner, 1989) language specification and express behavioural properties using the mu-calculus branching temporal logic (Stirling, 1989). Similarly to previous works that adopted this methodology (Canfora et al., 2018), we developed a tool that first generates a model in the CCS specification starting from the analyzed application source code, where, for each instruction, we define a transformation function to map source code into CCS process specifications. Afterwards, we specify the set of properties we want to guarantee. In particular, code described as CCS processes is first mapped to labelled transition systems and then verified with a model checker, i.e., the Concurrency Workbench of New Century (CWB-NC) (Cleaveland and Sims, 1996) verification environment.
In this work, we formulated logic rules to verify whether the value of a labelled variable (one of the annotated variable in the previous step of the approach) is accessed while saving data to the shared storage or sending data through the Internet without the explicit consent of the user. In
this paper, in order to simplify the rules, we suppose that whenever a variable is accessed before the use of a specific permission invocation (i.e., the android.permission.INTERNET request or the android.permission.WRITE_EXTERNAL_STORAGE request) there could be a data leakage. Moreover, the explicit consent of the user is identified by annotating the relative code (see Figure 3). For the CWB-NC model checker to verify the property, the execution of this code must precede the access to the variable in the execution stack.
If at least one property is not verified, the proposed framework will mark the corresponding variable by appending a warning code to its annotation. Moreover, a Trace Log will be created
\[^{4}\text{A Photo Manager App available on F-Droid repository at https://f-droid.org/en/packages/de.k3b.android.androFotoFinder/}\]
that contains details about the rule that was not verified and detailed information about the reason why the compliance is not guaranteed (e.g., the trace that leads to a sensitive data leakage). In case all the rules are verified, there is no action to take.
Note that, apart from the code instrumentation, which is the responsibility of the app developer, the rest of the process is automatic. In particular, the construction of the LTS is completely automatic. Furthermore, the model checker can be automated to continuously verify, e.g., in the background, the specified logic formulae on the formal model, and to interact with the development environment when a rule cannot be verified because of changes to the source code of the application.
3 THE CASE STUDY
In this section, we present the temporal logic formulae aimed to assist the developer notifying the users about possible sensitive data leaks.
As a case study, we consider the *A Photo Manager* app, an Android application to manage local photos. Below we show three code snippets belonging to the *A Photo Manager* app.

In detail, the three snippets implement behaviours that can lead to the leakage of sensitive information. We show the Java code snippets extracted by exploiting the *Bytecode Viewer* tool.
Figure 4 shows a code snippet related to the *equals* method of the *GeoLocation* class belonging to the *com.draw.lang* package. This snippet accesses the current latitude and longitude (as evidenced by lines 9 and 11 in the Figure 4 snippet).
Figure 5 shows a code snippet related to the *getCacheDirectory* method of the *StorageUtils* class in the *com.nostra13.universalimageloader.utils* package. The snippet shown in Figure 5 checks whether it is possible to access the external storage (by invoking the *hasExternalStoragePermission* method). It is worth highlighting that, once a file is stored on the Android external storage, all the installed applications can read and write the resource without the user's explicit consent (Canfora et al., 2018).
The code snippet shown in Figure 6 is from the constructor of the *NetworkAvailabilityCheck* class belonging to the *org.osmdroid.titleprovider.modules* package. This snippet requires the *android.permission.ACCESS_NETWORK_STATE* permission: since this permission allows applications to access information about networks, the request can be considered a potential
5https://github.com/Konloch/bytecode-viewer
information leak.
Table 2 shows the temporal logic formulae to detect whether the mobile developer previously informed the user before using a sensitive resource.
The temporal logic formulae are related to the detection of the following behaviours:
- with respect to the \( \varphi \) formula, related to the snippet in Figure 4, the formula is satisfied if a \emph{log} is invoked (with the aim of advising the user) \emph{before} the information about the current device localisation is used;
- with respect to the \( \chi \) formula, related to the snippet shown in Figure 5, the formula is \emph{true} if a \emph{log} is invoked \emph{before} external storage usage is requested;
- with respect to the \( \psi \) formula, related to the snippet shown in Figure 6, the \( \psi \) temporal logic property is satisfied if a \emph{log} is invoked \emph{before} the device uses information related to the network state.
Table 2: Temporal logic formulae for mobile secure programming verification.
\[ \varphi_1 = \nu X. [\text{checkcastfromdrawlangGeoLocation}] \land [\neg \text{checkcastfromdrawlangGeoLocation}, \text{pushlog}] X \]
\[ \varphi_2 = \nu X. [\text{store}] \land [\neg \text{store}, \text{pushlog}] X \]
\[ \varphi_3 = \nu X. [\text{load}] \land [\neg \text{load}, \text{pushlog}] X \]
\[ \varphi = \varphi_1 \land \varphi_2 \land \varphi_3 \]
\[ \chi_1 = \nu X. [\text{push}] \land [\neg \text{push}, \text{pushlog}] X \]
\[ \chi_2 = \nu X. [\text{invokegetCacheDirectory}] \land [\neg \text{invokegetCacheDirectory}, \text{pushlog}] X \]
\[ \chi_3 = \nu X. [\text{invokegetExternalStorageState}] \land [\neg \text{invokegetExternalStorageState}, \text{pushlog}] X \]
\[ \chi_4 = \nu X. [\text{pushmounted}] \land [\neg \text{pushmounted}, \text{pushlog}] X \]
\[ \chi_5 = \nu X. [\text{invokehasExternalStoragePermission}] \land [\neg \text{invokehasExternalStoragePermission}, \text{pushlog}] X \]
\[ \chi_6 = \nu X. [\text{invokegetExternalCacheDir}] \land [\neg \text{invokegetExternalCacheDir}, \text{pushlog}] X \]
\[ \chi_7 = \nu X. [\text{invokeCacheDir}] \land [\neg \text{invokeCacheDir}, \text{pushlog}] X \]
\[ \chi = \chi_1 \land \chi_2 \land \chi_3 \land \chi_4 \land \chi_5 \land \chi_6 \land \chi_7 \]
\[ \psi_1 = \nu X. [\text{invokegetSystemService}] \land [\neg \text{invokegetSystemService}, \text{pushlog}] X \]
\[ \psi_2 = \nu X. [\text{checkcastandroidnetConnectivityManager}] \land [\neg \text{checkcastandroidnetConnectivityManager}, \text{pushlog}] X \]
\[ \psi_3 = \nu X. [\text{invokegetPackageManager}] \land [\neg \text{invokegetPackageManager}, \text{pushlog}] X \]
\[ \psi_4 = \nu X. [\text{pushandroid permissionACCESSNETWORKSTATE}] \land [\neg \text{pushandroid permissionACCESSNETWORKSTATE}, \text{pushlog}] X \]
\[ \psi_5 = \nu X. [\text{invokePackageName}] \land [\neg \text{invokePackageName}, \text{pushlog}] X \]
\[ \psi_6 = \nu X. [\text{invokecheckPermission}] \land [\neg \text{invokecheckPermission}, \text{pushlog}] X \]
\[ \psi = \psi_1 \land \psi_2 \land \psi_3 \land \psi_4 \land \psi_5 \land \psi_6 \]
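Each conjunct above follows the same pattern: the sensitive action must not occur before a *pushlog* action. On a single finite trace this reduces to a linear scan, as in the following sketch (a deliberate simplification of what CWB-NC checks on the full LTS; action names are illustrative):

```java
import java.util.List;
import java.util.Set;

class TraceChecker {
    // Returns true iff no action from `sensitive` occurs before the first
    // "pushlog" in the trace. This mirrors the intent of the formulae on one
    // finite trace; the real verification covers every path of the LTS.
    static boolean notifiedBeforeUse(List<String> trace, Set<String> sensitive) {
        for (String action : trace) {
            if (action.equals("pushlog")) return true;    // user notified first
            if (sensitive.contains(action)) return false; // sensitive use first
        }
        return true; // no sensitive action occurred at all
    }
}
```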
In the code snippets shown in Figures 4, 5 and 6 these formulae are clearly not satisfied: there is no log invocation before the sensitive instructions are used, a symptom that the user is not advised.
To help the developer understand why a certain formula is not satisfied, Figure 7 shows the trace generated by the CWB-NC simulation environment. In detail, we consider the model generated from the code snippet in Figure 4.
In this case the $\varphi$ formula is not satisfied because no `log` action ever occurs before the `checkcastcom.drewlangGeoLocation`, `store`, and `load` actions: as shown in Figure 7, the `pushlog` action is absent, while the `checkcastcom.drewlangGeoLocation`, `store`, and `load` actions are present (highlighted in Figure 7). The $\varphi$ formula would be satisfied if a `pushlog` action appeared before the highlighted actions. In this way the developer can localise the exact point in the source code where to invoke the log action (i.e., the exact point where the user should be notified).
4 RELATED WORK
Several studies in the current literature focus mainly on mobile malware detection (Chen et al., 2016; Suarez-Tangil et al., 2017; Duc and Giang, 2018). These works mostly exploit machine learning techniques, extracting distinctive features from the samples under analysis to discriminate between malicious applications and trusted ones. By contrast, in this paper we investigate an automated method to detect possible sensitive information leakage, providing the developer with a useful tool to inform the user.
Shan et al. (Shan et al., 2018) investigate self-hiding behaviours (SHB), e.g. hiding the app, hiding app resources, blocking calls, deleting call records, or blocking and deleting text messages. The authors first provide an in-depth characterization of SHB, then present a suite of static analyses to detect such behaviour, defining a set of detection rules able to catch SHB. They test their approach against more than 9,000 Android applications.
Dynamic taint analysis is a method to analyze executable files by tracing information flow (Kim et al., 2014; Dalton et al., 2010). This approach has been applied to detect sensitive information leaking from network servers (Li et al., 2014). Differently from us, besides being focused on network servers attacked by malware or hackers, sensitive data tagging in that work is automatic, thus not exploiting the domain expert's knowledge.
Taint analysis is also considered by FlowDroid (Arzt et al., 2014), a tool focused on the Android application life-cycle and callback methods, designed to reduce missed leaks and false positives. The authors also provide an Android-specific benchmark suite, DROIDBENCH, to evaluate taint analysis tools focused on Android information leakage. The Leakminer tool (Yang and Yang, 2012) likewise considers taint analysis for detecting sensitive information exfiltration in Android.
All of the aforementioned approaches focus mainly on identifying data leaks from the user's perspective. As emerges from the state-of-the-art discussion, our proposal represents the first attempt to provide developers with a tool to avoid the (unaware) usage of sensitive resources without notifying the user.
5 CONCLUSION AND FUTURE WORK
In this paper we propose a method to assess security-related properties of Android applications. This is a timely topic, especially in the context of GDPR compliance. Indeed, the software developer may be held accountable for providing access to sensitive information without the explicit user consent, as well as for transferring personal data towards a non-GDPR-compliant recipient. In the Android environment, data can be transferred through several channels, e.g., via the network connection or by storing them in shared memory that can be accessed by other applications. In these cases there is the possibility of infringing the regulation and causing data leakage towards unauthorized subjects or non-compliant recipients. Although this could be considered a malicious behaviour typical of spyware and similar malware, the leakage may merely be due to carelessness or bad programming practices, such as using an unsafe connection or a shared data store.
To overcome this issue and support the software engineer during the development and verification phases, we propose a semi-automatic method based on code instrumentation and model checking techniques. In our approach, the software engineer is in charge of identifying the sensitive information her app processes and the methods she defined to inform the user about the sensitive data processing or transfer. This can be done using Java custom annotations or logging instructions. Once the code is instrumented, formal methods are applied to check predefined rules aimed at verifying if there exist execution threads bringing about sensitive information leaks.
We also presented how the proposed method can be applied to an existing application to identify possible sensitive information leaks and improve software security by fixing the flaw. The proposed method can easily be extended to other programming languages and platforms, provided that the related translator from the source to the CCS specification is available.
As future work, we plan to evaluate the method by applying it to several open-source Android applications to identify possible sensitive information leaks and understand how widespread similar bad programming practices are. Finally, if we observe a broad diffusion of unchecked sensitive data leaks, we intend to develop a tool that automatically fixes some of the most common regulation infringements by injecting configurable snippets directly into the application's source code. This would provide developers with a tool to significantly improve their applications' compliance with specific regulations, with limited effort.
ACKNOWLEDGMENTS
This work has been partially supported by MIUR - SecureOpenNets and EU SPARTA and CyberSANE projects, the Formal Methods for IT Security Lab6,
6https://dipbioter.unimol.it/ricerca/laboratori/metodi-formali-per-la-sicurezza-informatica/
and the MOSAIC Research Center\textsuperscript{7} at the University of Molise.
REFERENCES
\textsuperscript{7}https://dipbioter.unimol.it/ricerca/laboratori/centro-di-ricerca-mosaic/
Pragmatic Software Testing Education
Mauricio Aniche
Delft University of Technology
The Netherlands
m.f.aniche@tudelft.nl
Felienne Hermans
Delft University of Technology
The Netherlands
f.f.j.hermans@tudelft.nl
Arie van Deursen
Delft University of Technology
The Netherlands
arie.vandeursen@tudelft.nl
ABSTRACT
Software testing is an important topic in software engineering education, and yet highly challenging from an educational perspective: students are required to learn several testing techniques, to be able to distinguish the right technique to apply, to evaluate the quality of their test suites, and to write maintainable test code. In this paper, we describe how we have been adding a pragmatic perspective to our software testing course, and explore students’ common mistakes, hard topics to learn, favourite learning activities, and challenges they face. To that aim, we analyze the feedback reports that our team of Teaching Assistants gave to the 230 students of our 2016-2017 software testing course at Delft University of Technology. We also survey 84 students and seven of our teaching assistants on their perceptions. Our results help educators not only to propose pragmatic software testing courses in their faculties, but also shed light on the challenges that students face when taking software testing courses.
CCS CONCEPTS
• Applied computing → Education; • Software and its engineering → Software verification and validation;
KEYWORDS
software testing education, software engineering education, computer science education.
1 INTRODUCTION
Every software developer should be aware of the (high) impact that malfunctioning software can have on our society. We have seen huge losses in the financial market [30], and even researchers withdrawing their papers [33], all caused by software bugs. Making sure software works is perhaps the greatest responsibility of a software developer. Luckily, over the years, software testing has moved from being considered the activity that ‘less skilled’ software engineers do to one of the most important skills an engineer should have.
Inspecting large and complex code bases to find bugs is not a trivial task in the real world: engineers need a broad understanding of practices ranging from simple manual exploratory testing, where a human tries to find bugs by interacting with the system, to advanced bleeding-edge testing techniques, such as automated testing and automated test generation, where engineers program machines to test their system.
Companies such as Facebook [12], Google [41], and Microsoft [35] take testing seriously and require their engineers to master such techniques. Surveys have shown that developers understand the importance of testing-related training [15] and yet many of them still lack formal testing education [6, 34].
Indeed, educating a student in the art of software testing is challenging, for both students and educators. From the educator’s perspective, it is hard to keep a testing course up-to-date with the novelties of the field as well as to come up with exercises that are realistic [14]. Due to the importance of the topic, educators have been experimenting with the introduction of testing earlier in Computer Science programs [17, 19–21, 23, 27], introducing a test-first approach in CS courses [9, 10, 22], developing tools focused on software testing education [11, 38], and proposing more complete postgraduate courses focused on testing [39]. Educators also face the fact that some testing topics are not conceptually straightforward, not easy to demonstrate and generalize, and are not all available in a single textbook [40].
This paper has a twofold goal. First, to present how we have been teaching pragmatic software testing to the first year CS students at Delft University of Technology. Second, we explore students’ common mistakes, hard topics to learn, favourite learning activities, and challenges they face when learning pragmatic software testing.
To this aim, we analyzed the 1,993 quotes from the feedback report that we, as teachers and teaching assistants, gave to each of the 230 students of the 2017 edition of the Software Quality and Testing course, which is taught at the first year of our Computer Science bachelor. In addition, we performed a survey with 84 students, which we augmented by also surveying seven of our TAs.
The main contributions of this paper are:
• A proposal for a pragmatic software testing course based on nine key principles that can be taught for computer science students, including building a test mindset and interaction with practitioners (Section 3).
• An empirical analysis of the students’ most common mistakes (Section 6.1), their perceptions on the most difficult topics in software testing (Section 6.2), and the importance of different teaching activities (Section 6.3) when learning pragmatic software testing.
2 RELATED WORK
Software Testing is an important part of any Software Engineering program [2, 8, 26, 42], and by itself poses several challenges to educators. Unfortunately, the topic still does not receive its deserved attention in several CS programs. Wong [42] argues that many engineers are not well trained in software testing because most CS programs offer ST as an elective course. Clarke et al. [8] also point to the fact that, due to the large number of topics to be covered in a Software Engineering program, little attention is given to Software Testing. Astigarraga et al. [2] show that most CS programs tend to emphasize development at the expense of testing as a formal engineering discipline. Lemos et al. [26] show that software testing education can improve code reliability in terms of correctness; however, the authors also argue that university instructors often lack the very knowledge that would help students increase their programming skills toward more reliable code.
Educators have been suggesting different approaches on how to introduce testing in a CS curriculum: from students submitting their assignments together with test plans or sets [16, 17, 21], performing black-box testing in software seeded with errors [21, 24, 31], students testing each other’s programs [36], to suggesting students use a test-first approach at the very beginning of the program [9, 10, 22, 27]. Many of these authors even suggest that testing should be incorporated into the Computer Science and Software Engineering curricula, not only as an elective discipline, but throughout the curriculum. More specifically, Jones [23] suggests that students need to see the practice of software testing as part of the educational experience and that each core course in the curriculum should impart one or more testing experiences.
In addition, educators have proposed tools that are solely focused on software testing education. Elbaum et al. [11] propose BugHunt. BugHunt is a tool that contains four different lessons on software testing (terminology, black box, white box, efficiency in testing). 79% of the students in their experiment agreed that BugHunt added significant value to the material presented in the lecture(s) on software testing, and 61% of the students agreed that BugHunt could replace the classes on testing. Spacco and Pugh propose Marmoset [38], a tool to help incentivize students to test their software. Marmoset’s innovative element is that if a submission passes all of the public test cases, then students are given the opportunity to test their code against a test suite that is not publicly disclosed.
3 PRAGMATIC SOFTWARE TESTING EDUCATION
The Software Testing and Quality Engineering course at Delft University of Technology covers several different aspects of software testing, ranging from topics in the ISTQB industry certification [5] to software testing automation, as well as the future of testing by means of selected research papers.
The course is currently a compulsory part of the 4th quarter of the first year of the Computer Science bachelor. The course corresponds to 5 ECTS (140 hours). Students have two 1.5-hour lectures plus 4 hours of labwork per week. As a pre-requisite, students should have at least basic knowledge of the Java programming language.
The teaching team is currently composed of two teachers and several teaching assistants (TAs). The number of TAs varies, as our university has a policy of one TA per 30 students. Teachers are responsible for the course design, lectures, and creating and assessing the multiple-choice exams, and they have overall responsibility for the course. TAs are responsible for helping students, grading all labwork deliverables, and giving concrete, specific feedback on what students can improve.
Learning goals. At the end of the course, students (1) are able to create unit, integration, and system tests using currently existing tools (e.g., JUnit, Mockito) that successfully test complex software systems, (2) are able to derive test cases that deal with exceptional, corner, and bad weather cases by performing several different techniques (i.e., boundary analysis, state-based testing, decision tables), (3) are able to measure and reflect on the effectiveness of the developed test suites by means of different test adequacy metrics (e.g., line and branch code coverage, MC/DC), (4) are able to reflect on limitations of current testing techniques, on when and when not to apply them in a given context, and to design testable software systems, and (5) are able to write maintainable test code by avoiding well-known test code smells (e.g., Assertion Roulette, Slow or Obscure Tests).
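To illustrate learning goal (2), boundary analysis probes values on both sides of a condition's edge. The class under test below is invented for illustration (a 10% discount from 100 units upward); the test values target both sides of the 99/100 boundary:

```java
// Hypothetical system under test: a 10% discount applies to amounts of 100 or
// more. Using integer amounts keeps the arithmetic exact.
class Discount {
    static int price(int amount) {
        return amount >= 100 ? amount * 90 / 100 : amount;
    }
}
```

A boundary test suite would then assert the behaviour just below, on, and above the boundary (e.g., 99, 100, 101), rather than only sampling arbitrary values.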
Program. The course covers software quality attributes, maintainability and testability, manual and exploratory testing, automated testing, devops, test adequacy, model-based testing, state-based testing, decision tables, reviews and inspections, design-by-contract, embedded system testing, test-driven design, unit versus integration testing, mocks and stubs. More specifically:
- **Week 1**: Introduction to software testing, fault vs failure, principles of testing, (un)decidability, introduction to JUnit, introduction to labwork.
- **Week 2**: Life cycle, validation vs verification, V-model, code reviews. Functional testing, partition testing, boundary testing, and domain testing.
- **Week 3**: Structural testing, adequacy criteria, code coverage. Unit vs integration vs system testing, mock objects, and test-driven development.
- **Week 4**: State-based testing, model-based testing, and decision tables.
- **Week 5**: Test code quality, test code smells. Design for testability. Design-by-contracts.
- **Week 6**: Security testing. Search-based software testing.
- **Week 7**: Guest lectures from industry.
Key elements. To achieve a pragmatic software testing course, we have devised and currently follow some key elements:
*Theory applied in the lecture.* We put our efforts into developing lectures where students can see theory being applied to practice. Our lectures often have the following structure: we present a (buggy) code implementation (initially on slides, and later in the IDE), we discuss where the bug is, we explore, at a conceptual level, a systematic approach to detect the bug, and we apply the approach to a set of concrete examples. In other words, we do not only focus on explaining abstract ideas, but on concretely showing how to apply them to different real-world problems, using real-world tools like JUnit, Mockito, and Cucumber.
Real-world pragmatic discussions. Software testing is a challenging activity to carry out in practice. This means that developers often make trade-offs in deciding what and how much to test. Engineering questions that arise when complex software systems are being tested, such as “how much should I test?”, “how should I test a mobile application that communicates with a web server?”, and “should I use mocks to test this application?” are often discussed in the classroom so that students see how to extrapolate from our often small exercises to their future real lives as developers.
Build a testing mindset. Software testing is not seen as an important task by many students. A software testing course should inspire students to think about testing whenever they implement any piece of code. In our testing course, we aim to achieve such a testing mindset by (1) showing how testing can be a creative activity, requiring strong developers, by means of several live coding sessions and rich pragmatic discussions, (2) demonstrating not only the usefulness of every testing technique we teach, but also how it is applied and what trade-offs it has in the real world, and (3) bringing in guest lecturers who talk about the importance of software testing for their companies.
Software testing automation. The software engineering industry has long been advocating the automation of any software testing activity [12, 35, 41]. However, some software testing courses still focus on writing test case specifications solely as documents, and do not discuss how to automate them. In our course, for all the theoretical and systematic test design techniques we present, from functional testing to structural testing, from unit to system-level tests, students later write them in the form of automated tests. Mastering tools such as JUnit and Mockito, standard tools for test automation in Java, is a clear learning goal of our course. The importance of automation also strongly appears in our labwork, which we discuss next.
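To sketch the idea behind Mockito-style test doubles without the library, a hand-rolled recording stub can stand in for a collaborator; all names below are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// The collaborator the unit under test depends on.
interface MailServer {
    void send(String to, String body);
}

// A hand-rolled test double: records calls instead of sending real mail,
// so the test can later inspect what happened.
class RecordingMailServer implements MailServer {
    final List<String> sent = new ArrayList<>();
    public void send(String to, String body) { sent.add(to + ": " + body); }
}

// Unit under test: it depends on MailServer only through the interface,
// which is what lets the test substitute the recording double.
class InvoiceService {
    private final MailServer mail;
    InvoiceService(MailServer mail) { this.mail = mail; }
    void bill(String customer) { mail.send(customer, "invoice"); }
}
```

A Mockito-based test would replace `RecordingMailServer` with `mock(MailServer.class)` and a `verify(...)` call, but the design lesson is the same: testability follows from depending on interfaces.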
A hands-on labwork. We see the labwork as an important learning method. In our course, by means of a practical labwork assignment, students apply a selection of techniques to a 3k-lines-of-code game written in Java, namely JPacMan. The labwork contains a set of 50 exercises in which students exercise all the techniques we teach. It is important to note that students not only design test cases on paper, but also automate them. A great deal of their work goes into actually producing automated JUnit test cases.
In the following, we present the main deliverables of our labwork. The complete assignment can be found in our online appendix [1].
• Part 0 (Pre-requisites). Clone the project from Github, configure the project in your IDE, write your first JUnit test, run coverage analysis.
• Part 1. Write a smoke test, functional black-box testing, boundary tests, reflect on test understandability and best practices.
• Part 2. White-box testing, mock objects, calculate code coverage and apply structural testing, use decision tables for complex scenarios, reflect on how to reduce test complexity and how to avoid flaky tests.
• Part 3. Apply state-based testing, test reusability, refactor and reflect on test smells.
Test code quality matters. Due to the importance of automated testing activities, software testers will deal with large test codebases. Empirical research has indeed shown that test code smells are common in software systems, and that their presence has a strong negative impact on the maintainability of the affected classes [3]. We often reinforce the importance of refactoring test code and making sure it is free of smells. For any test code we write during live coding sessions, we make sure it is as free of smells as possible. Test smell catalogues such as the one proposed by Meszaros [32] are deeply discussed in a dedicated lecture.
Design systems for testability. Designing software in such a way that it eases testability is a common practice among practitioners [13, 18, 29]. This requires us to discuss not only software testing in our course, but also software architecture and design principles of testable software systems, such as dependency inversion [28], observability, and controllability, in an entire lecture dedicated to the topic. Questions like “Do I need to test this behavior via a unit or a system test?” and “How can I test my mobile application?” are extensively discussed not only through the eyes of software testing, but also through the eyes of software design.
Mixture of pragmatic and theoretical books. The two books we use as textbooks in the course are the “Foundations of software testing: ISTQB certification” [5], which gives students a solid foundation about testing theory, and the “Pragmatic Unit Testing in Java 8 with JUnit” [25], which gives students concrete and practical examples on how to use testing tools, like JUnit. We believe both complement each other and both are important for students who will soon become a software tester.
Interaction with practitioners. We strongly encourage interaction with practitioners throughout our course. Having guest lectures from industry practitioners helps us show the pragmatic side of software testing. Guests focus their lectures on how they apply software testing at their companies, the tools they use with their pros and cons, and the mistakes and challenges they face. In the 2017 edition, we also experimented with Ask-Me-Anything (AMA) sessions, where we called experts from all over the world via Skype and students had 15 minutes to ask any software-testing-related questions.
Grading. We currently use the following formula to grade our students: 0.25 * labwork + 0.75 * exam. The labwork (as we explain below) is composed of 4 deliverables, each graded by our TAs in the range [0..10]. We then average the grades of the four deliverables, which compose the labwork component of the grade. At the end of the course, we give a 40-question multiple-choice exam. Students may take a resit 6 weeks later if they did not pass the first time. We also offer an optional midterm exam for students who want to practice beforehand.
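The grading scheme described above can be written down directly; the sketch below simply encodes the stated weights (25% labwork, 75% exam) and the averaging of the four deliverables:

```java
class Grading {
    // Weights taken from the course description: the labwork component is the
    // average of the deliverable grades (each in [0..10]), weighted at 25%;
    // the exam grade is weighted at 75%.
    static double finalGrade(double[] deliverables, double exam) {
        double sum = 0.0;
        for (double d : deliverables) sum += d;
        double labwork = sum / deliverables.length;
        return 0.25 * labwork + 0.75 * exam;
    }
}
```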
4 RESEARCH METHODOLOGY
The goal of this study is to provide a better understanding of the difficulties and challenges that students face when learning pragmatic software testing.
To that aim, we analyze the data from 230 students of the 2016-2017 edition of our software testing course. We propose three research questions:
RQ1: What common mistakes do students make when learning software testing?
RQ2: Which software testing topics do students find hardest to learn?
RQ3: Which teaching methods do students find most helpful?
To answer our research questions, we collect and analyze data from three different sources: the feedback reports that TAs give to students throughout the course, a survey with students, and a survey with the TAs, both performed after the course. We characterize the participants in Section 5. In the following, we detail the three parts of our methodology.
Manual content analysis on the feedback. As we explain in Section 3, students work on and produce four deliverables during the course. After each deliverable, our team of TAs manually reads students’ reports, source code, and tests, and with the help of a rubric, provides them with rich qualitative feedback.
This feedback usually contains several quotes that touch on a mix of different topics, such as mistakes they made in the exercises, tips on how to improve their existing work, issues in the written report, and even compliments for their good work. The language of such feedback reports is usually informal, as we do not constrain how TAs should write their feedback.
We analyze the content of all feedback reports. To that aim, we first filter out any feedback that is not directly related to software testing (e.g., comments on exercises that were not done, or compliments). We then follow an iterative process, derived from standard qualitative data analysis procedures [37]: (1) we assign a code for each quote in the feedback; the code summarizes the essence of the quote, (2) if a quote does not belong to any existing codes, we introduce a new code, (3) each quote has just a single code; if a quote tackles two different problems, we split the original quote into two quotes, (4) to assign the correct code to a quote, we used our knowledge of the testing course, labwork, and the existing rubrics. We assigned 40 different codes to a total of 1,993 quotes. As a next step, we started an iterative merging process to derive the final themes, by grouping similar codes into higher-level themes, e.g., the theme “maintainability of test code” contains quotes from the “test quality”, and “test duplication” codes. We ended up with eight themes that we present in the Results (Section 6).
Survey with students. With the goal of capturing their perceptions on learning software testing, we asked students to answer a questionnaire that contained both open and closed questions at the end of the course.
The survey contains a total of 18 questions, none of which are required. The two closed questions of the survey asked students about the difficulty of learning and putting into practice the concepts and techniques we taught, and about the importance of the different activities we used throughout the course. In these questions, students had to choose from a five-point Likert scale, ranging from strongly disagree to strongly agree (see Figures 2 and 3). The open questions were mostly focused on understanding the students' main challenges, difficulties, and suggestions of improvements for our testing course. We apply qualitative techniques to analyze the results of each open question individually, similarly to our analysis of the feedback reports. The full survey as well as the full code book can be found in our online appendix [1].
We did not make answering the survey compulsory for the students. We received 84 complete answers out of the 230 students.
Survey with Teaching Assistants. Our TAs support students throughout the course, by answering their questions, supporting their work during the lab, and by grading their assignments. As a consequence of such intense contact with students, TAs obtain a good perspective on the challenges of teaching software testing.
We also performed a similar survey with TAs, focusing on what they perceive as challenges for students. The survey contained the same two closed questions from the students’ survey (challenges when applying software testing, and the importance of the different activities). In the open questions, we focused on asking about the common mistakes students do during the lab, as well as their perceptions on the challenges that students face.
We shared the survey internally at the end of our course. We also did not make answering the survey compulsory for TAs. At the end, we received 7 complete answers out of the 10 TAs.
5 CHARACTERIZATION OF THE PARTICIPANTS
Students. 66 students identify themselves as male, 8 as female, and 10 preferred not to answer. 89.3% of the students are between 18 and 24 years old, five are between 25 and 34, and four are 17 or younger. Only three students were international students. In terms of Java knowledge, on a scale from 1 to 10, 9.5% of students rate their knowledge between 9 and 10, and 72% of them between 7 and 8. Only 4 students consider themselves 5 or below.
Thanks to the introduction to JUnit that students receive during their very first course on programming, most of them already had some knowledge of software testing prior to our course. In fact, as we show in Figure 1, before the course starts, on a scale from 1 to 10, 39% of them consider themselves between 6 and 8, 44% between 4 and 5, and only 16% between 1 and 3. No student considered herself a 9 or 10. Students considered that their knowledge increased after the course. All of them considered their knowledge after the course as 6 or greater; 39% of them ranked themselves with an 8, and 14.6% with a 9. Two students ranked themselves with a 10.
6 RESULTS
6.1 RQ1: What common mistakes do students make when learning software testing?
We characterize the labwork feedback in eight different themes (ordered by their frequency): test coverage, maintainability of test code, understanding testing concepts, boundary testing, state-based testing, assertions, mock objects, and tools.
Test coverage (416 times, 20.87%). Students commonly either miss tests, i.e., they do not provide all the expected tests for a given piece of code, or they write tests that are not totally correct, e.g., the test does not actually test the piece of code, or the test exercises the wrong class. In addition, we also observed cases (14) where the student actually "overtested" (i.e., wrote tests for more cases than required).
Maintainability of test code (407 times, 20.42%). Students often need advice on how to write maintainable test code: test quality in general, such as better naming and excessive complexity (247), code duplication and lack of reusability (69), tests that could be split in two (31), and better usage of test cleanup features, such as JUnit's Before and After (47).
Understanding testing concepts (366 times, 15.35%). Students provide incomplete answers or struggle with questions that involve testing concepts and ideas, such as what flaky tests are, the advantages and disadvantages of unit and system tests, and the importance of removing test smells.
Boundary testing (258 times, 12.95%). Students often miss tests required to cover a boundary (142). As we also ask them to first build a decision table and then derive the tests, we see that they often miss elements in the table (50) and generate tables that are not fully correct (46).
State-based testing (247 times, 12.39%). When it comes to state-based testing, students often miss or create wrong states or events (56) and transitions (72), or draw state machines that are unclear or illegible (68).
Assertions (158 times, 7.93%). Most feedback related to assertions focuses on missing assertions, i.e., the student forgot to assert one or more expected results, and on assertions that are wrong or should not exist in that test.
Mock Objects (117 times, 5.87%). Students required some feedback on how to use mock objects; more specifically, on how to properly verify interactions with mock objects (i.e., Mockito's 'verify' method) and on when one should mock an object.
Tools (84 times, 4.21%). Students sometimes do not use the tools properly. More specifically to our course, students commonly use JUnit 4 features instead of JUnit 5, do not correctly use AssertJ's fluent API, and make wrong use of Cucumber features.
TAs' perspective. Overall, the observations of TAs match what we observed in the labwork analysis. In terms of testing best practices, TAs mentioned helping students write maintainable test code. According to one TA, students often write tests that contain unnecessary code and odd interactions with the class under test. In addition, according to another TA, students do not clearly see how to reuse test code. Another TA mentioned that a common question is how to properly test exceptions. Finally, a TA also observed that students often write tests that do not actually exercise any production code (in this case, JUnit still shows a green bar, giving the student a false impression of success).
6.2 RQ2: Which software testing topics do students find hardest to learn?
50% of the students are neutral, and 21% perceive it as a hard topic (Q3). Not a single TA perceived this topic as easy. We believe these findings further highlight the importance of discussing the pragmatic side of software testing.
When it comes to test code best practices, students had contradicting perceptions. The usage of mocks to simulate a dependency (Q4) and writing fast, reproducible, and non-flaky tests (Q17) were considered easy topics to learn by 42% and 56% of students, respectively. TAs agree that students learn these topics with less difficulty. However, when it comes to following testing best practices (Q9), 46% of students perceive it as an easy topic, while 71% of TAs perceive it as a hard topic for students. The students' perceptions also contradict the results of RQ1, where we observed a large amount of feedback focused on best practices in their assignments.
Finally, testability seems less challenging for students than for TAs. While students perceive optimizing code for testability (Q10) as just somewhat challenging (35% find it easy, 41% are neutral, and 25% find it hard), 67% of TAs believe that testability is a hard topic for students. As we conjecture that TAs have a better understanding of testability than the students, these findings suggest that the students are not sufficiently aware of the difficulty of testability.
6.3 RQ3: Which teaching methods do students find most helpful?
In Figure 3, we show how students perceive the importance of each learning activity we have in our software testing course.
Students perceive activities that involve practitioners as highly important. More specifically, guest lectures from industry (Q2) were considered important by 72% of participants. The Ask-me-Anything sessions (Q10), on the other hand, were considered important by only 32% of participants; 38% are neutral, and 30% do not consider them important.
Moreover, different interactions during the lecture are also considered important by students. Live coding by the teacher (Q3) and discussions and interactions during the lecture (Q4) are considered important by 75% and 65% of students, respectively. We conjecture that discussions and live coding are moments in which students have the opportunity to discuss the topics they consider hard, such as how much testing is enough, which test level to use, and test code best practices (as seen in RQ1 and RQ2).
On the other hand, the two books we use as textbooks in the course are not considered fundamental for students. More specifically, 31% of students find the ISTQB [5] not important and 36% are neutral (Q6), whereas 29% of them find the PragProg [25] not important and 51% are neutral (Q5). Reading related papers (Q9) is also considered not important for 35% of them.
6.4 Limitations of our study
The qualitative analysis of the open questions in the survey was manually conducted by the first author of this paper. The analysis, therefore, could be biased towards the views of the authors. To mitigate the threat, we make all the data available for inspection in our online appendix [1].
TAs were responsible for giving feedback to students throughout the study. Although we instruct all TAs on how to grade and what kind of feedback to give (they all follow the same rubrics), different TAs have different personalities. In practice, we observed that some TAs provided more feedback than others. While we believe this could have had some impact on the percentages of each theme in RQ1, we do not expect that any other theme would have emerged.
In terms of generalizability, although we analyzed the behavior of 230 students, we do not claim that our results are complete and/or generalizable. Furthermore, most students were Dutch (we only had 3 international students answering our survey), which may introduce cultural bias to our results. We urge researchers to perform replications of this study in different countries and universities.
7 CONCLUSIONS
Software testing is a vital discipline in any Software Engineering curriculum. However, the topic poses several challenges to educators and to students. In this paper, we proposed a pragmatic software testing curriculum and explored students’ common mistakes, hard topics to learn, favourite learning activities, important learning outcomes, and challenges they face when studying software testing.
Researchers and educators agree that software testing education is fundamental not only to industry, but also to research. We hope this paper helps the community to improve even more the quality of their software testing courses. As Bertolino [4] states in her paper on the achievements, challenges, and dreams on software testing research: “While it is research that can advance the state of the art, it is only by awareness and adoption of those results by the next-coming generation of testers that we can also advance the state of practice. Education must be continuing, to keep the pace with the advances in testing technology”.
ACKNOWLEDGMENTS
We thank all the students and teaching assistants that followed our course in the last years.
REFERENCES
# Table of Contents
**LPAR 2002: Short Contributions**
Direct Resolution for Modal-like Logics ........................................ 3
*Carlos Areces and Juan Heguiabehere*
A New $O(n \log n)$-SPACE Decision Procedure for Propositional Intuitionistic Logic ........................................ 17
*Alessandro Avellone, Guido Fiorino, and Ugo Moscato*
GLORIA: An Agent’s Executable Specification ................................. 35
*Jacinto Dávila and Mayerlin Uzcátegui*
Pre- and Postcondition Reasoning in Dynamic First Order Logic ........ 45
*Juan Heguiabehere*
TRP++: A temporal resolution prover ............................................. 65
*Ulrich Hustadt and Boris Konev*
**CSL 2003: Extended Posters** .................................................. 81
A Combinator and Presheaf Topos Model for Primitive Recursion over Higher Order Abstract Syntax ......................................... 83
*S. J. Ambler, R. L. Crole, and A. Momigliano*
Derivability and Admissibility of Inference Rules in Abstract Hilbert Systems 91
*Clemens Grabmayer*
A comparison between two techniques of program extraction from classical proofs ........ 99
*Mircea-Dan Hernest*
Continuous approximations of MV-algebras with product and product residuation: a category-theoretic equivalence ........................................ 103
*Luca Spada*
GLORIA: An Agent’s Executable Specification
Jacinto Dávila\textsuperscript{1} and Mayerlin Uzcátegui\textsuperscript{2}
\textsuperscript{1} CESIMO,
\textsuperscript{2} SUMA,
Universidad de Los Andes.
Mérida, 5101. Venezuela
{jacinto,maye}@ula.ve
Abstract. This paper presents a specification for an intelligent agent in classical logic. We argue that this type of specification can be systematically translated into executable code that implements the agent on top of some computing platform. It therefore deserves the name of executable specification, as suggested in [17]. We illustrate how to translate our agent's specification into a running implementation and, then, how to migrate that specification from the logical language into an industrial OO specification device (UML) and into an OO programming language (JAVA), in order to build an actual agent.
1 Introduction
In an introductory paper [18], Michael Wooldridge and Nick Jennings divided the issues associated with the design and construction of intelligent agents into three groups: Agent theories, Agent architectures and Agent languages. Agent theories provide the answer to the questions: What is an agent? How should it be characterized? Agent architectures are engineering models of agents: generic blueprints that could guide the implementation of particular agents (in software or hardware). And Agent languages include all the software engineering tools for programming and experimenting with agents [17].
In this paper, we present a work that aims to relate the aforementioned categories in a coherent whole. In the proposed framework, an agent theory is seen as a specification that is systematically converted into an architecture with a built-in agent programming language. Our proposal is to use the same language to state the theory, to implement the architecture and to program the agents. The language is classical logic.
2 Agents in Logic
A specification states what a system is and the properties it has. Logic is widely used as a specification language. Agent specifications are normally stated in some version of modal logic [2], [9], [13]. Apparently this is the case because agents are normally regarded as intentional systems [6], [12]. An intentional system is one
characterized by so-called attitudes: beliefs, desires, intentions, among others. It is, by definition, an autonomous system, with an internal state upon which it registers inputs and from which it produces outputs. "Intentional system" is a useful abstraction in Artificial Intelligence, as it allows the description of entities that have their own *agendas* [12]. It is widely believed that the representation of autonomous systems requires the alleged added expressiveness of the modal logics, with their possible-world semantics. Typically, a number of new modal operators (such as $K$ for knowledge, $B$ for beliefs and $G$ for goals) are introduced. In this work, we depart from that trend and refrain from using modal logics. Instead, we use classical logic and employ a set of meta-logic predicates to model, not the agent's beliefs and goals separately, but some processes associated with them.
2.1 The proposal by Kowalski
The work presented here started with a proposal by Bob Kowalski which, basically, prescribed the use of logic programs to specify an agent that is both reactive and rational [10], [11]. We developed that proposal into a complete specification by including the specification of a proof procedure [8] that is used as the reasoning mechanism of the agent [3]. This specification is completely translatable into logic programs and is, therefore, an executable specification. Our agent has been code-named GLORIA$^3$.
2.2 Our version of the cycle predicate
The cycle predicate, [GLOCYC], describes a process by which the agent's internal state changes, while the agent assimilates inputs from and posts outputs to the environment (the act predicate, [GLOACT] and [GLOEXE]), after time-periods devoted to reasoning (with the demo predicate, shown below). The things being posted are the influences, just as prescribed in the situated multi-agent theory by Ferber and Müller [7] and in our multi-agent simulation theory [4, 5].
2.3 The demo predicate: an abductive reasoner
Observe the arguments of the demo predicate. This predicate reduces goals to new goals by means of definitions stored in the knowledge base. "demo" is, precisely, the embodiment of the definition of the "believes" relationship between an agent and its beliefs. An agent believes what she/he/it can DEMOnstrate. This explains the first three arguments of the demo predicate. The fourth argument is an important device to count the amount of resources or the time available for reasoning. At each cycle, the agent reasons for a bounded amount of time and so it is prevented from jumping into infinite regress or total alienation
$^3$ GLORIA stands for a General-purpose, Logic-based, Open, Reactive and Intelligent Agent.
from its environment. This is a legitimate resource in the specification language that allows us, the agent’s modellers, to encode an important dimension of the bounded rationality found in realistic agents, as we have been arguing.
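The role of that fourth argument can be mimicked in a few lines of Java; the names and numbers below are ours, not part of the specification:

```java
public class BoundedReasoner {

    // One reasoning episode: perform at most `resource` reduction steps,
    // then return control to the cycle, mirroring the "NR is R - 1"
    // countdown performed by each clause of the demo predicate.
    static int reason(int pendingReductions, int resource) {
        int steps = 0;
        while (pendingReductions > 0 && resource > 0) {
            pendingReductions--; // stand-in for applying one reduction rule
            resource--;          // NR is R - 1
            steps++;
        }
        return steps;            // reasoning stops even if goals remain
    }

    public static void main(String[] args) {
        // 10 pending reductions but only 4 resource units: the agent stops
        // after 4 steps instead of reasoning to exhaustion.
        System.out.println(reason(10, 4));
        // With enough resources, reasoning finishes early.
        System.out.println(reason(2, 4));
    }
}
```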
Fig. 2. The demo predicate: basic reduction and abductive rules.
apply(X,Y,RP,NewRP), NR is R - 1,
demop(Obs,goals(NewRP, RG), NGoals, Influences, InfOut, NR).
% Negation
demop(Obs,goals(sp(not(A), RP), RG), NGoals, Influences, InfOut, R ):-
allowed(A), NR is R-1, demop(Obs,goals(sp(if(sp(A,true),false),
RP), RG), NGoals, Influences, InfOut, NR).
% Cleaning if false,.. then ..
demop(Obs,goals(sp(if(sp(false,B),C),RP), RG), NGoals, Influences,
InfOut, R):- NR is R - 1,
demop(Obs,goals(RP, RG), NGoals, Influences, InfOut, NR).
% Distribution of or within an if
demop(Obs,goals(sp(if(sp(or(A,Rest),B),C),RP), RG), NGoals,
Influences, InfOut, R):- NR is R - 1,
agregar_plan(A, B, NewA), demop(Obs,goals(sp(if(NewA,C),
sp(if(sp(Rest,B),C),RP)), RG), NGoals, Influences, InfOut, NR).
% Propagation.
demop(Obs,goals(sp(if(sp(A,B),C),RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), (in(A,RP); member(A,Obs)), NR is R - 1,
demop(Obs,goals(sp(if(B,C),RP), RG), NGoals, Influences, InfOut, NR).
% Equality within an if (notice we are not considering splitting as yet)
demop(Obs,goals(sp(if(sp(X=Y,B),C),RP), RG), NGoals, Inf, InfOut, R):-
apply(X,Y,B,NewB), apply(X,Y,C,NewC), apply(X,Y,RP,NewRP), NR is R-1,
demop(Obs,goals(sp(if(NewB,NewC),NewRP), RG), NGoals, Inf, InfOut, NR).
% Unfolding within an if
demop(Obs,goals(sp(if(sp(A,B),C),RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), unfoldable(A), definition(A,Def), NR is R - 1,
demop(Obs,goals(sp(if(sp(Def,B),C),RP), RG), NGoals, Influences, InfOut, NR).
% Negation in the body of an implication
demop(Obs,goals(sp(if(sp(not(A),B),C),RP), RG), NGoals, Influences,
InfOut, R):- allowed(A), NR is R-1, demop(Obs,goals(sp(if(sp(A,true),
sp(sp(if(B,C),RD),true)),RP),RG), NGoals, Influences, InfOut, NR).
% true -> C is equivalent to C
demop(Obs,goals(sp(if(true,C),RP), RG), NGoals, Influences, InfOut, R):-
NR is R - 1, agregar_plan(C,RP,NP),
demop(Obs,goals(NP,RG), NGoals, Influences, InfOut, NR).
% (A or B) --> C is equivalent to A --> C and B --> C
demop(Obs,goals(sp(if(sp(or(A,B),C),D),RP), RG), NGoals, Influences,
InfOut, R):- NR is R - 1, demop(Obs,goals(sp(if(sp(A,C),D),
sp(sp(sp(B,C),D),RP)),RG), NGoals, Influences, InfOut, NR).
% (A or B) and C is equivalent to A and C or B and C
demop(Obs,goals(sp(or(A,B),RP), RG), NGoals, Influences, InfOut, R):-
NR is R - 1, agregar_plan(A,RP,NA),
demop(Obs,goals(NA,goals(sp(B,RP),RG)), NGoals, Influences, InfOut, NR).
% Simplification: A and A is equivalent to A unification permitting
demop(Obs,goals(sp(A,RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), in(A,RP), % unification will be performed
NR is R - 1,
demop(Obs,goals(RP,RG), NGoals, Influences, InfOut, NR).
% Unfolding: A <= Def, A and RP is equivalent to Def and RP
demop(Obs,goals(sp(A,RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), unfoldable(A), definition(A,Def), NR is R - 1,
demop(Obs,goals(sp(Def,RP), RG), NGoals, Influences, InfOut, NR).
% Abduction to produce influences..
demop(Obs,goals(sp(A,RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), executable(A), NR is R - 1,
demop(Obs,goals(RP,RG), NGoals, [A|Influences], InfOut, NR).
% Abduction to consume observations. Experimental.
demop(Obs,goals(sp(A,RP), RG), NGoals, Influences, InfOut, R):-
allowed(A), observable(A), member(A,Obs), NR is R - 1,
demop(Obs,goals(RP,RG), NGoals, Influences, InfOut, NR).
% Removal of plans for not observing on time.
demop(Obs, goals(sp(A,RP), RG), NGoals, _, InOut, R):-
allowed(A), observable(A), not(member(A, Obs)), NR is R - 1,
demop(Obs, RG, NGoals, [], InOut, NR).
% Can't do anything.. but shuffle the first plan..
demop(Obs, goals(sp(A,RP), RG), NGoals, Influences, InOut, R):-
agregar_plan(RP, sp(A,true), NP), NR is R - 1,
demop(Obs, goals(NP,RG), NGoals, Influences, InOut, NR).
% Add plans with the structure sp(First Action,Rest of Plan)
agregar_plan(true,X,X).
agregar_plan(sp(A,X),Y,sp(A,Z)):- agregar_plan(X,Y,Z).
in(A,sp(A,_)).
in(A,sp(_,R)):- in(A,R).
2.4 An example
A full description of the demo predicate is in [11], [3]. However, to emphasize the executable nature of the demo predicate, we show, in figures 2 and 3, a simplified version written in PROLOG, which we are using to test our simulation platform [16]. Observe that goals are represented as a list (i.e. the term goals) of alternative plans. Plans, in turn, are represented as a list (i.e. the term sp) of sub-plans. We treat terms that represent actions, sent from the agent to the environment, and terms that represent observations, sent from the environment into the agent, as abducibles, in the sense explained in [8,3].
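As a companion to the figures, the nested term structure (goals as a list of alternative plans, each plan a list of sub-plans) can be sketched as cons cells in Java; the class and field names are ours, not from the paper:

```java
public class GoalTerms {
    /** A plan: sp(FirstStep, RestOfPlan); null plays the role of `true`. */
    static final class Plan {
        final String step; final Plan rest;
        Plan(String step, Plan rest) { this.step = step; this.rest = rest; }
    }
    /** Goals: goals(FirstPlan, RestOfAlternatives). */
    static final class Goals {
        final Plan plan; final Goals rest;
        Goals(Plan plan, Goals rest) { this.plan = plan; this.rest = rest; }
    }

    /** Analogue of in/2 from the figure: is the step somewhere in the plan? */
    static boolean in(String step, Plan p) {
        for (Plan q = p; q != null; q = q.rest)
            if (q.step.equals(step)) return true;
        return false;
    }

    public static void main(String[] args) {
        // Two alternative plans for protect_yourself(X) from the example.
        Plan p1 = new Plan("get_umbrella", null);
        Plan p2 = new Plan("go_home", null);
        Goals goals = new Goals(p1, new Goals(p2, null));
        // Membership is checked within a single plan: go_home lives in the
        // other alternative, so the second lookup fails.
        System.out.println(in("get_umbrella", goals.plan));
        System.out.println(in("go_home", goals.plan));
    }
}
```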
Fig. 3. The demo predicate customized for an example and its invocation.
build_eq(T,S,T_eq_S),!.
unpack_all_eq(_,[],[],[]).
unpack_all_eq(G,[Gs|Rest],[T_eq_S|D],FinalAll):-
unpack_eq(G,Gs,T_eq_S), (Gs == true -> FinalAll = RestA ;
FinalAll = [Gs|RestA]), unpack_all_eq(G,Rest,D,RestA).
build_eq([],[],true).
build_eq([T1|RT],[S1|RS],(T1 = S1,Rest)):- build_eq(RT,RS,Rest).
% and_append/3 and and_compress/2: flatten conjunctions, dropping 'true'.
and_append(F,S,(F,R)):-
(var(F);(functor(F,Funct,_),Funct \== ',')),!,and_compress(S,R).
and_append((F,Rf),S,(F,Ro)):- F \== true,!,and_append(Rf,S,Ro).
and_append((F,Rf),S,Ro):- F == true,and_append(Rf,S,Ro).
and_compress(L,L):- (var(L);(functor(L,Funct,_),Funct \== ',')),!.
and_compress((F,Rf),(F,Ro)):- F \== true,!,and_compress(Rf,Ro).
and_compress((F,Rf),Ro):- F == true,and_compress(Rf,Ro).
ic(Restricciones):- findall(if(BBody,HHead),((if Body then Head),
arregla(Body,BBody),arregla(Head,HHead)),L), aplana(L,Restricciones).
obs(L):- findall(Obs,observe Obs,L).
aplana([],true).  aplana([C|R],sp(C,RR)):- aplana(R,RR).
arregla(true,true):- !.  arregla((A,B),sp(A,BB)):- !,arregla(B,BB).
arregla(A,sp(A,true)).
% Invoking demo
demo(Gin,Gout,Influences):- ic(IC), obs(Obs), (Gin = goals(Plan,Rest);
(Plan = true, Rest = true)), agregar_plan(IC,Plan,NPlan),
demop(Obs, goals(NPlan,Rest), Gout, [], Influences, 300).
demod(Obs,Gin,Gout,Influences):- ic(IC), (Gin = goals(Plan,Rest);
(Plan = true, Rest = true)), agregar_plan(IC,Plan,NPlan),
demop(Obs, goals(NPlan,Rest), Gout, [], Influences, 200).
unfoldable(A):- \+ executable(A), \+ observable(A).
apply(X,Y,C,NC):- var(X),nonvar(Y),apply_conj([X/Y],C,NC),!.
apply(X,Y,C,NC):- var(Y),nonvar(X),apply_conj([Y/X],C,NC),!.
apply(_,_,C,C):- !.
apply_conj(_,true,true).
apply_conj([V/T|Rest],Conjunct,NewConj):- substitute_conj(V,T,
Conjunct,NextConj),apply_conj(Rest,NextConj,NewConj).
substitute_conj(_,_,true,true):- !.
substitute_conj(V,T,(G,Rest),(NewG,NewRest)):-
substitute(V,T,G,NewG),substitute_conj(V,T,Rest,NewRest).
substitute(V,T,Pred,NewPred):- Pred =.. [Name|Args],
substitute_args(V,T,Args,NewArgs),NewPred =.. [Name|NewArgs].
substitute_args(_,_,[],[]).
substitute_args(V,T,[A|Rest],[T|NRest]):-
V == A,substitute_args(V,T,Rest,NRest).
substitute_args(V,T,[A|Rest],[NewA|NRest]):-  % complex atoms
compound(A),!,substitute(V,T,A,NewA),substitute_args(V,T,Rest,NRest).
substitute_args(V,T,[A|Rest],[A|NRest]):-
V \== A,substitute_args(V,T,Rest,NRest).
executable(get_umbrella(_)).
executable(go_home(_)).
observable(it_rains(_)).
%
if it_rains(X) then protect_yourself(X).
%
to protect_yourself(X) do get_umbrella(X).
to protect_yourself(X) do go_home(X).
Fig. 4. Agent specification in UML
3 From logic to UML
The cycle predicate and the structures it uses (mainly to store the knowledge base and the goals) lead to a general Unified Modelling Language specification for an agent similar to the one shown in figure 4.
In this OO framework, the class Ag contains the methods that implement the agent. It includes the methods that correspond to the cycle predicate (cycle()), the act predicate (observe() and execute()) and the demo predicate (reason()).
3.1 From logic to JAVA
The method execute() communicates the agent’s intention to the environment. observe(), on the contrary, communicates a description of the environment to
the agent. reason() implements the reasoning engine that mediates between perceptions and actions.
The accompanying classes implement the data structures (and associated methods) required to store and manage the agent's goals and beliefs, among other things. Note that a basic data structure, a list, is an essential part of the implementation. We only require a list with accessible head and tail components. For the sake of space, figure 4 shows only the Goals class's attributes and methods.
Thus, using the cycle predicate as the specification, one can produce a JAVA implementation of an agent like the one in figure 5. For the sake of simplicity, in this implementation a plan contains only one action. The Goals class implements the set of alternative plans for the agent.
Fig. 5. Ag Class
```java
/**
 * This is a preliminary implementation of the specification in GLORIA.
 */

/** Ag is the class that implements the agent. */
public class Ag {
    List observations;
    List influences;
    Goals goals;
    Beliefs beliefs;
    Goal permanentGoal;

    /** The agent constructor initiates all the structures. */
    public Ag() {
        observations = null;
        influences = null;
        goals = new Goals();
        beliefs = null;
        /* As a test, consider this "permanent goal"; */
        List rains = new List("it rains");
        /* it corresponds to "if it rains then carry an umbrella". */
        permanentGoal = new Goal("carry umbrella", rains);
    }

    /** It is used to try to execute the intentions. */
    public List execute() {
        return influences;
    }

    /** It is used to update the knowledge of the environment. */
    public void observe() {
        // Get inputs from the environment; here the environment is
        // simulated with a single report of "it rains".
        List obs = new List("it rains");
        // Update the agent's records.
        observations = obs;
    }

    /** This is a very simple implementation of the reasoning engine. */
    public void reason() {
        /* Every goal must be checked against observations. */
        if (permanentGoal.fired(observations)) {
            goals.activateGoal(permanentGoal);
            influences = new List(goals.allGoals[goals.intention]);
        }
    }

    /** The main cycle/locus of control of the agent. */
    public void cycle() {
        observe();
        reason();
        execute();
    }
}
```
4 Conclusions
This paper presented the formal specification of an agent. The language used to state the specification is a form of classical logic, as opposed to modal logic. We have shown that this specification can be systematically translated into other formalisms and, more importantly, into executable code. It is, therefore, a mechanism for the “agentification process” described by Shoham [15] by means of which a specification produces an agent. We believe this agentification process can greatly benefit from using executable specifications, where specification/implementation trade-offs can be more easily understood.
This work has been applied to the construction of a multi-agent simulation platform called GALATEA. We previously presented the specification of the simulation platform [4] and described the multi-agent and OO simulation platform [5]. The next steps are: 1) to assemble the platform with the interface between agents and the simulation engine; 2) to develop alternative inference engines for the agent (a different embodiment of the demo predicate); and 3) to perform the first multi-agent simulation experiments [1], [14].
Acknowledgements
This work has been partially funded by CDCHT-University of Los Andes projects I-666-99-02-E and I-667-99-02-B and FONACIT project S1-2000000819.
References
Kurt Gödel Society
Collegium Logicum
The Kurt Gödel Society is an international organization that aims to promote research in Logic, Philosophy, and the History of Mathematics, in particular in connection with the life and work of Kurt Gödel and the areas to which he has made scientific contributions.
Detailed information on the Kurt Gödel Society can be found at the web page of the society: http://kgs.logic.at/
Collegium Logicum is the publication series of the Kurt Gödel Society. It is a continuation of the Annals of the Kurt Gödel Society. The Collegium Logicum contains publications in Logic, Philosophy, and the History of Mathematics.
Detailed information on Collegium Logicum can be found at the series web page: http://kgs.logic.at/cl/
ISBN 3-901546-03-0
On the Implementation of Single-Query Sampling-Based Motion Planners
Ioan A. Șucan and Lydia E. Kavraki
Abstract—Single-query sampling-based motion planners are an efficient class of algorithms widely used today to solve challenging motion planning problems. This paper exposes the common core of these planners and presents a tutorial for their implementation. A set of ideas extracted from algorithms existing in the literature is presented. In addition, lower level implementation details that are often skipped in papers due to space limitations are discussed. The purpose of the paper is to improve our understanding of single-query sampling-based motion planners and motivate our community to explore avenues of research that lead to significant improvements of such algorithms.
I. INTRODUCTION
This work aims to contribute to our understanding of single-query sampling-based planners [1], [2] and to promote the advancement of research towards truly substantial improvements of these planners as a whole. This paper systematizes and clarifies a large body of work by articulating (a) the common core of single-query sampling-based planners, (b) some of the existing heuristics that have been shown to work well in practice, and (c) some of the implementation details that are often left out in the corresponding papers but can strongly influence the performance of an algorithm. These implementation details are derived from the authors’ implementation and/or use of existing single-query motion planning software libraries. This paper also reveals the breadth and wealth of research on this topic and can serve as a reference for future work.
Single-query sampling-based planners have become very popular due to their ability to quickly solve the motion planning problem: finding a continuous valid path from a given start state to a goal state, for a robotic system under a set of constraints [3]. Examples include Rapidly-exploring Random Trees (RRT) [4], [5], Expansive Space Trees (EST) [6], [7], Single-query Bi-directional probabilistic roadmap planner with Lazy collision checking (SBL) [8], and many more (e.g., [9]–[24]). We will generically refer to this class of algorithms as tree planners, due to the main data structure they employ. These planners iteratively grow a tree of motions in the state space of the robotic system (see Figure 1), using different heuristics. The tree is rooted at the starting state of the robotic system. At every iteration, an attempt is made to extend this tree with a new path segment (motion), towards a new state. An advantage of using trees is that they naturally encode the notion of order of states along a path and can be used to provide a time parametrization of states on paths. This becomes important when we are interested in specifying solution paths in terms of control inputs to the robotic system rather than a sequence of states between which a controller could interpolate. The tree data structure is a particular case of the roadmap used in the Probabilistic RoadMap (PRM) algorithm [25], hence some of our discussions apply to PRM as well. Vice-versa, some of the observations made for PRM in earlier work [1], [26] are relevant here.
From a theoretical standpoint, the objective of a tree planner is to grow the tree of motions in such a manner that the entire state space can be eventually covered. However, coverage does not always need to be achieved before finding a solution. In general, it is considered a good property if such a planner is probabilistically complete [1] – if a solution exists, it will eventually be found. From a practical point of view, the performance of tree planners – the amount of time spent to find a solution – is very important. This is in fact a key motivation for the development of tree planners. Many algorithms for guiding tree growth have been introduced over the years (e.g., [4]–[24]) with the purpose of improving performance. A number of software libraries for motion planning containing such algorithms have also been developed: MSL (Motion Strategy Library) [27], MPK (Motion Planning Kit) [28], OpenRAVE [29], OOPSMP (Object-Oriented Programming System for Motion Planning) [30], ompl (Open Motion Planning Library) [31].
Even though tree planners are conceptually simple, correct and efficient implementations are not trivial. In this work, we isolate some of the more prominent ideas used by tree planners and discuss a series of details that arise during their implementation, details that often get left out of papers due to space constraints. While this text is intended primarily for readers interested in implementing their own single-query sampling-based motion planner, we believe readers interested in simply using existing implementations will find this text helpful for better understanding and tuning the implementations they use.
This paper is structured as follows. We first present the interface and the typical execution of a tree planner in Section II. The state space and related primitives are described in Section III. In Section IV we give an overview of some of the ideas introduced by previous work. We then continue with lower level details that often get left out in Section V and some tips on debugging tree planners in Section VI. Finally, we conclude in Section VII.
II. INPUT, OUTPUT AND EXECUTION OF A TREE PLANNER
Let $\mathcal{X}$ be the state space in which the tree planner operates. For every motion planning query, the following input should be specified:
- Specification of a starting state $s \in \mathcal{X}$. This is where the robotic system is considered to start at.
- Specification of a goal region $\mathcal{G} \subseteq \mathcal{X}$, $\mathcal{G} \neq \emptyset$. In the simplest case, this can be a single state in $\mathcal{X}$ ($\mathcal{G} = \{g\}$, $g \in \mathcal{X}$). Some algorithms are only applicable if an explicit representation of the goal state is available. More generally, $\mathcal{G}$ is specified implicitly through an indicator function that decides whether a given state is in the goal region or not ($\mathcal{G} = \{x \in \mathcal{X} \mid g(x) = true\}$ for some $g : \mathcal{X} \rightarrow \{true, false\}$).
- Allowed time $t \in \mathbb{R}^+$. This is the amount of time the planner is allowed to search the state space before reporting failure.
The output of a tree planner is a valid solution path. In case of failure, the solution path is empty. The path can be represented as:
- A sequence of states, i.e., a kinematic path
- A sequence of inputs, i.e., a control path. In case we are planning with controls, the path is discretized with respect to time. Every element of the path will consist of the state at that time, the control applied when in that state and the amount of time the control is applied for.
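For concreteness, the control-path representation above can be written as a small record type. This is an illustrative sketch only; the name `PathSegment` and its fields are our own, not part of any of the cited libraries.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathSegment:
    """One element of a control path: the state at that time, the
    control applied when in that state, and how long it is applied."""
    state: Tuple[float, ...]     # element of the state space X
    control: Tuple[float, ...]   # element of the control space U
    duration: float              # time the control is applied for

# A control path is a time-ordered sequence of such segments;
# an empty list signals planner failure.
ControlPath = List[PathSegment]
```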
The execution of a typical tree planner proceeds as follows:
```plaintext
Algorithm BuildTree($\mathcal{X}, s, \mathcal{G}, t$)
INIT($\mathcal{T}, \mathcal{X}, s$) // unless $\mathcal{T}$ already initialized
while ElapsedTime() $< t$ and NoGoalFound($\mathcal{G}$) do
$x_{tree} \leftarrow$ StateToExpandFrom($\mathcal{T}$)
$p_{add} \leftarrow$ PathToConsider($x_{tree}$)
if ChooseToAdd($p_{add}$) then
INSERT($\mathcal{T}, p_{add}$)
end if
end while
return $\mathcal{T}$
```
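The generic loop above can be fleshed out as follows. This is a hedged sketch: the uniform node selection and the fixed half-step towards a random sample are placeholders for the planner-specific heuristics of Section IV, and all function names are our own.

```python
import random
import time

def build_tree(sample, valid, in_goal, start, allowed_time):
    """Grow a tree of motions rooted at `start` until a goal state is
    reached or `allowed_time` (seconds) elapses.  The tree is a list of
    states plus a parent index per state."""
    nodes, parents = [start], [None]
    deadline = time.monotonic() + allowed_time
    while time.monotonic() < deadline:
        # StateToExpandFrom: uniform over tree nodes (a placeholder)
        i = random.randrange(len(nodes))
        # PathToConsider: step halfway from nodes[i] towards a sample
        target = sample()
        new = tuple(x + 0.5 * (t - x) for x, t in zip(nodes[i], target))
        # ChooseToAdd: keep the motion only if the new state is valid
        if valid(new):
            nodes.append(new)
            parents.append(i)
            if in_goal(new):
                # Recover the solution by walking parent links
                path, j = [], len(nodes) - 1
                while j is not None:
                    path.append(nodes[j])
                    j = parents[j]
                return list(reversed(path))
    return []  # failure: empty path
```

As a usage example, planning in the unit square with an always-valid state validator reduces the loop to pure exploration towards the goal region.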
III. STATE SPACE AND RELATED PRIMITIVES
The state space $\mathcal{X}$ is a manifold consisting of all the states a robotic system could potentially attain. The following represents a minimal list of state space related primitives that tree planners depend on:
- A bounding box for an ambient space $\subset \mathbb{R}^d$ surrounding $\mathcal{X}$. Note that the dimension of this ambient space can sometimes be larger than that of $\mathcal{X}$. We use the bounding box of an ambient space instead of that of the state space to avoid the complexities that arise from the topology of $\mathcal{X}$.
- A bounding box for the control space $\mathcal{U} \subset \mathbb{R}^k$. This is only needed if we are interested in obtaining control paths. Each component of an element in $\mathcal{U}$ represents an input for the robotic system.
- State validator $valid : \mathcal{X} \rightarrow \{true, false\}$, $valid(x) = true$ for $x \in \mathcal{X}$ implies $x$ is a valid state. This usually means $x$ is at least collision free. Often, additional constraints need to be satisfied by $x$. A more advanced definition of $valid$ is $valid : \mathcal{X} \rightarrow \mathbb{R}$, where $valid(x)$ represents the distance to the nearest invalid state. This latter definition can be used for exact collision checking [32] and the so-called “continuous collision detection” [33].
- Metric $dist : \mathcal{X} \times \mathcal{X} \rightarrow [0, \infty)$, this is optional, but many algorithms need to evaluate distance between states. Depending on the state space, defining this metric may be difficult.
- A low-level function for sampling states. This is usually uniform sampling based on pseudo-random number generators (see [2] for other generators, such as quasi-random). It is very important that the topology of the state space is accounted for in this routine, as this is a common source of error. For certain spaces, uniform sampling of states can be implemented by uniform sampling in each dimension of the state space. However, this is not generally the case. For instance, spaces such as SE(3) need special attention to make sure the sampling is uniform [34]. Based on this low level functionality, the planning algorithm can implement different sampling distributions (e.g., [9]–[11] and many more).
- A function for state expansion. The purpose of this function is to move away from a given state, so that the tree expansion can be continued. In practice, this function is often in the form of a local planner or a model of motion:
- A local planner. A function of the form $local : \mathcal{X} \times \mathcal{X} \times [0, 1] \rightarrow \mathcal{X}$ generates the states that lie on a path segment between two given end-points. The topology needs to be considered here as well [34]. Note that when planning with controls such a function can be defined only for specific robotic systems [1].
- Propagation of a control from a given state (forward propagation). A function $propagate : \mathcal{X} \times \mathcal{U} \times [0, \infty) \rightarrow \mathcal{X}$ generates the states the robot passes through when
applying a given control for a given amount of time starting at a given state. This represents the model of motion for the robot.
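As an example of topology-aware sampling, the rotation component of SE(3) can be sampled uniformly with Shoemake's subgroup algorithm; sampling rotation angles per axis would not be uniform over SO(3). A sketch (quaternions represented as 4-tuples; the function names are ours):

```python
import math
import random

def uniform_quaternion(rng=random):
    """Uniform random unit quaternion via Shoemake's method."""
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    a, b = math.sqrt(1.0 - u1), math.sqrt(u1)
    return (a * math.sin(2.0 * math.pi * u2),
            a * math.cos(2.0 * math.pi * u2),
            b * math.sin(2.0 * math.pi * u3),
            b * math.cos(2.0 * math.pi * u3))

def sample_se3(bounds, rng=random):
    """Uniform sample in SE(3): translation uniform in an axis-aligned
    bounding box, rotation uniform over SO(3)."""
    lo, hi = bounds
    pos = tuple(rng.uniform(l, h) for l, h in zip(lo, hi))
    return pos + uniform_quaternion(rng)
```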
IV. TREE PLANNER HEURISTICS
Many of the algorithms introduced over the years present ideas that can be combined and reused. Earlier on, the heuristics a tree planner employed to expand its tree data structure were what defined the planner (see for instance, EST and RRT). As the research progressed, many other ideas were introduced, combinations of existing ideas were proposed, blurring the distinction among different tree planners. In this section we aim to provide a series of ideas extracted from algorithms that have been shown to work well in practice. Many of these ideas (but not all) are compatible, meaning that they can be combined to produce different algorithms. Depending on the task, one could create an algorithm with increased performance. To evaluate the performance of individual ideas we show relevant experimental results. When such results exist in the literature we provide a summary of those results. For the cases where no experimental data was found, we present our own experiments.
A. Selecting States for Further Expansion
Deciding which parts of the tree of motions merit further exploration is a fundamental step in the execution of a tree planner and it weighs heavily on the planner’s overall performance. This decision is problem-dependent and at this time it is unclear whether an optimal approach to making this decision exists. In this section we present some of the better-known techniques for selecting nodes to be expanded, but this list is by no means comprehensive. The research on this topic is so extensive that presenting it entirely is simply not feasible. A small sample of this research is referenced in this work [4]–[24], [35]–[38].
1) Using Voronoi bias: One of the most successful ideas is to extend the tree of motions towards a random state, starting from the state in the tree that is nearest to that random one [4]. Choosing states to expand from in this fashion guides the tree of motions towards the largest Voronoi regions. This approach does not guarantee the tree never grows onto itself but seems to work well as long as a good distance metric is available [35], [39].
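The Voronoi-bias selection rule fits in a few lines. The sketch below uses a linear scan for the nearest neighbor for clarity; practical implementations use kd-trees or similar structures, and the function name is our own:

```python
import math

def select_voronoi_biased(tree, sample, dist):
    """RRT-style selection: draw a random state and return the tree
    node nearest to it.  Nodes owning large Voronoi regions are more
    likely to be nearest to a uniform sample, so the tree is pulled
    towards unexplored space."""
    x_rand = sample()
    x_near = min(tree, key=lambda x: dist(x, x_rand))
    return x_near, x_rand
```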
2) Using the out-degree of the nodes in the tree: Focusing on continuing the tree growth from nodes that have a lower out-degree is likely to take the search into unexplored space. This approach defines a probability distribution over the nodes in the tree of motions and selects nodes for expansion according to this distribution [6].
3) Using a decomposition of the state space: A similar technique is to split the state space into cells. These cells can be defined by grids imposed on the space [8]. Every new node added to the tree is also placed in one of the defined cells. When continuing the tree expansion, nodes from emptier cells are preferred. This implies a probability distribution is defined over the cells in the grid. Since there are typically fewer cells than nodes, selecting a node to expand from is a more efficient process. This approach has been shown to work well for difficult problems [8].
Using such decompositions can become problematic if the number of cells is large. The number of cells can become very large for high-dimensional spaces. As the tree increases in size, it is likely more will be gained from continuing the tree expansion from the cells corresponding to the boundary of the explored space. This can be achieved by keeping track of the number of neighbors for each cell [21]. Another possible improvement is to use multiple levels of decomposition: we can define larger cells that are themselves split into smaller cells [21]. This combination of ideas can lead to one and even two orders of magnitude speedup, depending on the model of the robot and the environment, as experiments in [21] show.
Another method of decomposing the state space is hierarchical decomposition [19]. The state space is assumed to be bounded and considered to be one large cell at the beginning of the exploration. As the tree grows, the cells being expanded from get split in half. This approach proceeds deterministically by maintaining a queue of cells that need to be explored, prioritized by their volume and the iteration number of their last exploration step [19].
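The grid-based cell selection described in this subsection can be sketched as follows (an illustration with names of our choosing; cells holding fewer tree nodes are preferred when choosing where to continue the expansion):

```python
import random
from collections import defaultdict

def cell_of(state, cell_size):
    """Map a state to its grid cell (a tuple of integer coordinates)."""
    return tuple(int(c // cell_size) for c in state)

def select_from_grid(grid, rng=random):
    """Pick a cell with probability inversely proportional to how many
    tree nodes it holds (emptier cells preferred), then pick a node
    from that cell.  `grid` maps cell -> list of tree nodes."""
    cells = list(grid)
    weights = [1.0 / len(grid[c]) for c in cells]
    cell = rng.choices(cells, weights=weights)[0]
    return rng.choice(grid[cell])
```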
4) Projecting the state space to lower dimensional spaces: More recently, a number of algorithms employ projections from the state space to a lower dimensional space to be used in conjunction with decompositions. The intention is to approximate the coverage of the state space by evaluating the coverage in the projected space. This approach was introduced since evaluating coverage in lower dimensional spaces is easier, as such spaces can often be decomposed into a manageable number of pieces. Although not explicitly mentioned, orthogonal projections are suggested in [8]. Using projections to the workspace has been shown to be useful for mobile robots [20]. To the authors’ knowledge, the first explicit use of generic projections is in [40]. It is often the case that even simple, intuitive projections perform well [21], but it is unclear whether this can be done in general [41].
B. Using a Notion of Direction
Accounting for the direction of expansion is another important idea that helps in guiding the tree expansion. Keeping track of previously used directions increases the chance of using a better direction of expansion [18], [35]. In the case of narrow passages, Principal Component Analysis (PCA) can be used locally to find a good direction of growth. This use of PCA can lead to a speedup of up to one order of magnitude, depending on the environment, as reported in [37].
A related idea is that of computing discrete paths that lead to the goal [20]. These discrete paths are a sequence of cells in a decomposition of the workspace, one that connects the starting state to the goal region. Even though these discrete plans are in the workspace, and as such, cannot be directly converted into state space plans, they can serve as a guide, a means to lead the state space exploration. It is typically the case that computing these discrete motion plans is much easier and much faster. As the state space exploration
proceeds, gained information can be used to recompute the discrete plan being used as a guide. This interplay of discrete and continuous search speeds up exploration. It has been shown that in certain mobile robotics applications, a speedup of up to two orders of magnitude can be obtained [20].
C. Bi-directional Search
A very successful technique for improving the performance of tree planners is bi-directional search. This is a search technique borrowed from artificial intelligence that has also been used successfully in the context of tree-based planning [6], [8], [12] – speedup factors of 3 to 4 are reported in [12]. Bi-directional search means that instead of growing a single tree from the start state towards the goal region, two trees are grown: one from the start state towards the goal region and one from the goal region towards the start state. Note this method requires that we have a means of sampling the goal region, or we know the actual goal state [42]. The two trees can take turns at being grown or can be grown in parallel. After each iteration that adds a motion to a tree, an attempt is made to connect to the other tree [8], [12]. If this attempt is successful, a solution path has been found: the path from the start state to the connection state concatenated with the reversed path from the goal to the connection state. Note this is a second requirement for this technique to be applied: the paths that are added to the trees need to be reversible. This is usually the case when computing kinematic paths or when systems of differential equations are used to model the motion along a path segment. However, when using physics-based simulation, the paths can no longer be reversed. An additional problem with bi-directional search is that when planning with controls, even if paths can be reversed, gaps between the two trees need to be closed, and this may be non-trivial [13], [43].
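The connection attempt at the heart of bi-directional search can be sketched as follows (an illustrative fragment; the names and the simple distance threshold are our own simplifications):

```python
import math

def try_connect(x_new, other_tree, dist, local_valid, threshold):
    """After adding x_new to one tree, attempt to connect it to the
    nearest node of the other tree.  Returns that node if the motion
    between them is valid and short enough, else None."""
    x_near = min(other_tree, key=lambda x: dist(x, x_new))
    if dist(x_near, x_new) <= threshold and local_valid(x_new, x_near):
        return x_near  # solution: concatenate the two partial paths
    return None
```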
D. Lazy Collision Evaluation
Another very successful idea is that of lazy collision checking [8], [44]. Since collision checking usually takes more than 90% of a sampling-based planner’s execution time, it is desired to minimize the number of collision evaluations. A method to do this is through the use of lazy collision evaluation. This means that all states on all paths are assumed valid until a solution is found. The path segments that make up the found solution are then checked for collision. If a segment is found to be valid, it is marked as such. If it is not valid, it and its descendants are removed from the tree. If the entire path was found to be valid, the algorithm completes successfully. If the solution was found to be invalid, the tree continues to be grown, remembering the parts that were marked as valid. This technique allows the planner to check collisions only for the path segments it tries to use as part of the solution, leaving other ones unchecked, thus reducing the number of total collision evaluations.
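A minimal sketch of lazily validating a candidate solution path might look as follows (the names are ours; a real planner would also prune the invalid segment and its descendants from the tree, as described above):

```python
def lazily_validate(path, segment_valid, checked):
    """Check only the segments of a candidate solution path.
    `checked` memoizes segments already found valid, so repeated
    candidate paths re-check only their unverified parts.  Returns the
    index of the first invalid segment, or None if the path is valid."""
    for i, seg in enumerate(zip(path, path[1:])):
        if seg in checked:
            continue
        if segment_valid(*seg):
            checked.add(seg)
        else:
            return i  # caller prunes this segment and its descendants
    return None
```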
As an example of the speedup that can be obtained with lazy collision evaluation we present a comparison of two algorithms from the ompl library: EST and its bi-directional implementation with lazy collision checking, SBL. In our own experiments[1] with the problem of moving a 7 degree-of-freedom manipulator in the presence of obstacles, from above to underneath a dining table, a speedup factor of 6.3 in favor of SBL can be observed.
E. Goal Biasing
If states in the goal region can be sampled (or are known a priori), the tree growth can be biased to grow towards these states. Biasing can be done for example by attempting to connect to goal states periodically (e.g., [14]) or by growing the tree from states that are closer to the goal (e.g., [45]). For the latter approach we need either a distance metric or a heuristic to evaluate the distance to the goal. While this method can lead to significant speedup (we report speedup by a factor of 3 to 90 in [45]), it can also degrade the performance of an algorithm when the solution path first needs to go farther from the goal and then back towards it (RRT slowed down by a factor of up to 3 in [21]). Using this approach in conjunction with a learning technique that limits growing towards the goal from specific regions, in case of repeated unsuccessful attempts, may alleviate the problem [38].
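In its simplest form, goal biasing by periodic connection attempts reduces to mixing a goal sampler into the state sampler (an illustrative sketch; the 5% default is a common choice of ours, not prescribed by the cited work):

```python
import random

def biased_sampler(sample_uniform, sample_goal, goal_bias=0.05, rng=random):
    """Return a sampling function that, with probability `goal_bias`,
    draws from the goal region instead of the whole state space."""
    def sample():
        if rng.random() < goal_bias:
            return sample_goal()
        return sample_uniform()
    return sample
```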
F. Projection onto the Constraint Space
If the space of valid samples has a small volume with respect to the state space, most of the sampled states will be invalid. This can lead to significant performance degradation. For instance, if a robot arm is asked to manipulate an open container, we most likely want the arm not to spill the content. This means we will constrain at least one degree of freedom for the arm to a very small range, which effectively makes the volume of the manifold of valid states in the state space be 0. The chance of sampling valid states is then practically null. In such cases, techniques that project samples onto the lower dimensional constraint manifold can be employed [46]. For a review of some methods of sampling in such lower dimensional manifolds, the reader is directed to [47], where three methods are experimentally evaluated: Randomized Gradient Descent, Tangent Space Sampling and First-Order Retraction, with the conclusion that First-Order Retraction is the preferred option.
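As an illustration of the projection idea, a first-order retraction onto a single scalar constraint can be sketched with a finite-difference gradient (a toy version of the methods compared in [47]; all names and parameter values are ours):

```python
def project_to_constraint(x, f, step=1e-4, tol=1e-6, max_iter=200):
    """Move x along the (numerical) gradient of the scalar constraint
    f until |f(x)| < tol, i.e. until x lies on the manifold f(x) = 0."""
    x = list(x)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return tuple(x)
        # forward-difference gradient of f at x
        g = []
        for i in range(len(x)):
            x[i] += step
            g.append((f(x) - fx) / step)
            x[i] -= step
        norm2 = sum(gi * gi for gi in g) or 1e-12
        # Newton step along the gradient direction
        x = [xi - fx * gi / norm2 for xi, gi in zip(x, g)]
    return None  # failed to project
```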
G. Using Motion Primitives
In certain cases we may be interested in limiting the set of motions a robot is allowed to make. This can be done for instance by discretizing the control space \( U \) [16]. Such an approach is reported to achieve speedup factors of 3 to 20, for some problems [16]. The notion of a maneuver automaton [48] can be used to define a formal language on motion segments that can be used to form valid paths. Such techniques help with a more systematic exploration in the control space: one can guarantee that two controls that are very similar to one another are not both evaluated. The disadvantage is however that selecting a finite set of controls
[1] The data for all experiments conducted for this paper is available at http://kavrakilab.org/data/ICRA2010TP/index.html
from an infinite control space may prevent finding solutions when they exist.
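Discretizing a box-shaped control space into a finite set of controls, as described above, can be sketched as follows (an illustration with our own names):

```python
import itertools

def discretize_controls(bounds, steps):
    """Replace the continuous control space U (an axis-aligned box) by
    a finite grid of controls, `steps` values per dimension."""
    axes = []
    for (lo, hi), n in zip(bounds, steps):
        if n == 1:
            axes.append([0.5 * (lo + hi)])  # single midpoint value
        else:
            axes.append([lo + i * (hi - lo) / (n - 1) for i in range(n)])
    return [tuple(c) for c in itertools.product(*axes)]
```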
H. Parallel Execution
It has been shown that sampling-based planners perform very well when using parallelization, be that an embarrassingly parallel setup [49] (running multiple instances of the planner until one of them finds a solution) or using shared memory parallelism [21]. Due to the randomized nature of the algorithms, super-linear speedup can be observed with respect to computation time, as shown in [21].
V. THE LITTLE DETAILS
This section is a list of details the authors feel are important to have in mind when implementing a sampling-based motion planner. The order in which these details are presented roughly follows the implementation of a typical tree planner. For the experiments we conducted, the OOPSMP [30] library was used and all reported values are averaged over 100 runs, on a 2.83 GHz CPU with 8 GB RAM running Ubuntu Linux.
A. How Far to Grow a New Motion
Based on the set of heuristics used (from Section IV), the algorithm has chosen a state in the tree it is about to expand from and a direction of expansion. The question that remains is how far to extend this motion. In general, a good approach is to grow the tree until an obstacle is hit or some maximum length is reached. Defining minimum and maximum lengths for motions added to the tree prevents us from having too many short motions around a single state and from bouncing from one side of the state space to the other. Depending on the space we are planning in, the parameters for minimum and maximum motion lengths likely need to be adjusted in order to get reasonable progress (more on how to evaluate progress in Section VI). Keeping roughly 90% of the valid part of the motion is potentially better than keeping the entire valid part, since the last valid state may be too close to a collision and further expansions from there would be unsuccessful.
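Keeping only a fraction of the valid prefix of a candidate motion can be sketched as follows (an illustration; representing a motion as a sequence of intermediate states, and all names, are our own choices):

```python
def truncate_motion(states, valid, keep_fraction=0.9):
    """Keep only `keep_fraction` of the valid prefix of a candidate
    motion (a sequence of intermediate states), so the tree does not
    end right next to an obstacle."""
    n_valid = 0
    for s in states:
        if not valid(s):
            break
        n_valid += 1
    if n_valid == 0:
        return []
    return states[:max(1, int(n_valid * keep_fraction))]
```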
To demonstrate the influence of the length of added motions on the runtime of a sampling-based planning algorithm, we show running times of EST with different motion lengths for a free-flying robot in 3D.
New motions in our EST implementation are started from some state \( s \), already existing in the tree, and extend to a state \( d \), sampled around \( s \) using a Gaussian probability distribution. We thus control the length of the new motions by changing the standard deviation of the sampling distribution for \( d \). We show some experiments\(^2\) in Table 1. For low standard deviation, we have slower progress, so higher runtimes, and for standard deviation that is too high, we are bouncing from one side of the state space to the other, again increasing runtime.
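This expansion mechanism can be sketched as follows (the function and parameter names are our own, not OOPSMP's): an expansion target is drawn from a Gaussian around the selected state, then clamped to the minimum/maximum motion lengths discussed above.

```python
import math
import random

def sample_expansion_target(s, stddev, rng):
    """Sample a target state d around s from a Gaussian distribution.

    The standard deviation controls the expected length of new motions:
    too small gives slow progress, too large makes the tree 'bounce'
    across the state space.
    """
    return [rng.gauss(coord, stddev) for coord in s]

def clamp_motion(s, d, min_len, max_len):
    """Enforce minimum/maximum motion length by rejecting or rescaling."""
    length = math.dist(s, d)
    if length < min_len:
        return None  # too short: would clutter the tree around s
    if length > max_len:
        scale = max_len / length
        d = [si + scale * (di - si) for si, di in zip(s, d)]
    return d

rng = random.Random(0)
s = [0.0, 0.0, 0.0]
d = clamp_motion(s, sample_expansion_target(s, 0.5, rng), 0.05, 1.0)
```

In a full planner, the clamped motion would additionally be truncated at the first colliding state before being added to the tree.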
When expansion is attempted in narrow passages, it is quite possible that even short motions will cause collisions. One means of addressing this issue is to allow a small penetration into obstacles. This raises the chances of finding samples in narrow passages, but introduces erroneous samples. These erroneous samples can, however, be replaced by valid ones through a re-sampling process in a small vicinity of the penetrating sample [50].
A similar idea is to “retract” the robot and compute valid states along the surface of the colliding obstacle, thus generating a set of candidate motions that take the robot through the narrow passage [51]. This increases the chances of traversing the narrow passage, as more paths inside it are being evaluated. Speedup of more than two orders of magnitude is reported in [51] for particularly difficult narrow passage problems.
B. Intermediate States on Motions
Adding intermediate states along motions can lead to computational speedup, if the number of added states is not too large. In Table 2 we show the benefits of running RRT while adding intermediate states at varying resolutions along the generated motions, for a free-flying robot in 3D\(^2\). The resolution specifies the distance between added intermediate states. We report the speedup achieved with respect to the algorithm running without adding intermediate states (\( \infty \) resolution). We observe that adding too many states can slow us down, but we can also obtain speedup of up to 20% for appropriate values of the resolution.
\[
\begin{array}{ccc}
\text{Resolution} & \text{Runtime(s)} & \text{Speedup} \\
\infty & 0.367 & 1.00 \\
0.005 & 0.492 & 0.75 \\
0.010 & 0.310 & 1.19 \\
0.050 & 0.334 & 1.10 \\
0.100 & 0.339 & 1.08 \\
0.500 & 0.317 & 1.16 \\
1.000 & 0.307 & 1.20 \\
5.000 & 0.369 & 1.00 \\
\end{array}
\]
Table 2. Runtime of RRT with intermediate states at varying resolutions.
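A minimal sketch of how such intermediate states can be generated along a straight-line motion (the interface is illustrative, not OOPSMP's):

```python
import math

def intermediate_states(s, d, resolution):
    """Return states spaced at most `resolution` apart along the motion
    from s to d, endpoints excluded.  With resolution = infinity no
    intermediate states are added, matching the first table row."""
    length = math.dist(s, d)
    if not math.isfinite(resolution) or length <= resolution:
        return []
    n = math.ceil(length / resolution) - 1  # states needed in between
    return [[si + (k / (n + 1)) * (di - si) for si, di in zip(s, d)]
            for k in range(1, n + 1)]
```

For example, a unit-length motion at resolution 0.25 yields three evenly spaced intermediate states.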
C. Continuation of Exploration
Since tree planners cannot decide that a solution does not exist, they will simply fail after the amount of time allowed for computation elapses. In some cases, a little additional computation time can lead to finding a solution. For this reason it is best to organize the tree planner in such a way that a subsequent call without clearing the data structures in the meantime continues the exploration using the previous tree of motions. Of course, this assumes the environment remains unchanged in between calls.
\(^2\)The data for all experiments conducted for this paper is available at http://kavrakilab.org/data/ICRA2010TP/index.html
When the environment does change in between calls to the motion planner, parts of the tree of motions become invalidated. Instead of starting with a new tree, parts of the tree that remain valid can be kept [36], [52].
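One way to organize this (a hypothetical skeleton, not OOPSMP's API) is to keep the tree as planner state, so a repeated solve() call resumes where the previous one stopped, and to clear it only when the environment changes. Collision checking and the real expansion heuristics are omitted here.

```python
import random
import time

class TreePlanner:
    """Skeleton showing tree reuse across solve() calls."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.tree = []          # states found so far; kept between calls

    def solve(self, start, goal_test, time_budget):
        if not self.tree:       # first call: seed the tree with the start
            self.tree.append(start)
        deadline = time.monotonic() + time_budget
        while time.monotonic() < deadline:
            s = self.rng.choice(self.tree)          # expand from the tree
            d = [c + self.rng.gauss(0.0, 0.5) for c in s]
            self.tree.append(d)
            if goal_test(d):
                return d
        return None             # failed, but the tree survives for a retry

    def clear(self):
        """Discard the tree; call only when the environment changed."""
        self.tree.clear()
```

A second call to solve() with the same data structures intact continues the exploration instead of restarting from scratch.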
D. When Using Physics Simulation
When using physics-based simulators (such as ODE [53]), the simulation can become unstable during planning, especially with random selection of controls. This needs to be detected, to avoid obtaining erroneous solutions. In general, sanity checks such as verifying that joints have not been broken and the positions of bodies are valid floating point numbers are sufficient.
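The sanity checks above can be as simple as the following sketch. The state representation here is hypothetical; a real check would query the simulator (e.g. ODE) for body positions and joint feedback.

```python
import math

def simulation_state_valid(body_positions, joint_errors, max_joint_error=1e-3):
    """Detect an unstable physics step: every body position coordinate must
    be a finite number and no joint may have drifted apart ('broken')."""
    positions_ok = all(math.isfinite(c) for p in body_positions for c in p)
    joints_ok = all(err <= max_joint_error for err in joint_errors)
    return positions_ok and joints_ok
```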
Another potential problem with physics simulation is that the results of forward propagation may not seem deterministic if different time steps are used during the planning process (e.g., ODE). For this reason, it is recommended that a constant time step be used throughout the planning process.
E. Path Shortening and Smoothing
Due to the randomized nature of the planners discussed in this work, the obtained solution paths usually contain unnecessary and awkward maneuvers. However, post-processing solution paths using shortening and smoothing algorithms [54], [55] is possible and is encouraged.
VI. DEBUGGING A TREE PLANNER
This section consists of advice on how to debug and test a tree planner, and a list of suggestions that may help with the development. Once it is implemented, a tree planner is difficult to debug, due to its randomized nature. In addition to typical software engineering approaches, it is recommended that the amount of randomness in the execution of the planner is minimized (fixing the random seed, running in a single thread). Typical things to test are:
1) Running the algorithm on toy problems, where solutions are known to exist. Visualizing the solutions and on-screen projections of the tree of motions is actually one of the best ways to check whether they are correct. Checking whether the states along obtained solution paths are inside the bounding box of the state space is also a good idea. This test should be repeated with different random seeds and different number of threads, if applicable.
2) Assuming the solutions of toy problems seem correct, the next step is to run on more complex problems and compute statistics such as number of iterations, number of created states, average path segment length, average runtime until a solution is found. To make sure the algorithm is doing what it is supposed to, counting events is very useful. What this means is that various counters should be added to the code so that we can check how often certain pieces of code are executed, on average (e.g., to check whether goal biasing is used as often as we would like).
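Event counting can be as lightweight as a shared Counter. The sketch below verifies that goal biasing fires as often as intended; the names and the 5% bias value are illustrative, not from any particular planner.

```python
import random
from collections import Counter

events = Counter()  # how often each code path is exercised

def select_target(rng, goal, sampler, goal_bias=0.05):
    """Pick the goal with probability goal_bias, else a uniform sample."""
    if rng.random() < goal_bias:
        events["goal_biased_sample"] += 1
        return goal
    events["uniform_sample"] += 1
    return sampler(rng)

rng = random.Random(42)
for _ in range(10_000):
    select_target(rng, goal=[1.0, 1.0],
                  sampler=lambda r: [r.random(), r.random()])
ratio = events["goal_biased_sample"] / 10_000
print(f"goal biasing used in {ratio:.1%} of iterations")
```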
If we are interested in comparing a newly implemented motion planner to existing ones, a set of benchmark problems is needed. Unfortunately, to the authors' knowledge, there is no established set of good benchmark problems; in fact, the notion of a "good benchmark" is as of yet unclear. On the selected problems, looking at the runtime is important. Due to the random nature of the algorithms, averages need to be taken over multiple runs. In addition to the mean runtime, its variance matters as well: a planner with lower runtime variance tends to be more reliable. The average number of states in the tree, the memory used, and the number of calls to the collision checker (or physics simulator) are other values to look at.
VII. CONCLUSIONS
In this paper we assumed motion planning was performed for robotic systems. However, it should be noted that with small adjustments, motion planning algorithms can be applied to protein folding, digital actors and other problems [1]. As a general rule of thumb, bi-directional search is preferred if a means to sample the goal region is available and the paths of the robotic system are reversible. Lazy collision checking is preferred, if it can be applied. In terms of guiding the tree exploration, the idea of leading the exploration using discrete paths in a projection of the state space provides significant computational advantages. If narrow passages are prevalent, techniques such as the ones based on PCA or retraction should be used. Depending on the task, more of the ideas presented in previous sections can prove beneficial. Furthermore, shared-memory parallelization is advised, if multiple compute cores are available.
The material in this paper is by no means exhaustive and the reader is encouraged to see the referenced literature for more details. However, we made an effort to present high-level decisions and low-level aspects that can lead to the implementation of a good single-query sampling-based motion planner. The high-level decisions are in fact a set of ideas extracted from algorithms that have been shown to perform well in practice. The low-level aspects are details that usually get left out from papers in the interest of space. The purpose of collecting this information in a single paper is to improve our understanding of sampling-based tree planners and to motivate our community to seek truly substantial improvements to these planners as a whole.
ACKNOWLEDGEMENTS
The authors would like to thank the reviewers, J.-C. Latombe, D. Hsu, K. Bekris, R. Rusu, M. Ciocârlie and the members of the Kavraki Lab for providing valuable comments. Furthermore, the authors thank Marius Şucan for drawing the image in this document.
REFERENCES
\(^3\)Some problems that are known to be more difficult are available at http://parasol-www.cs.tamu.edu/dsmft/benchmarks/
Front-End Performance Testing and Optimization
Abstract
Today, web users begin to abandon a page once response time exceeds 3 seconds. This demands performance optimization at all application levels. Client-side resources contribute more to response time than the back-end does. Optimizing front-end performance from a single user's point of view is good practice before testing an application under high user loads. In this paper we discuss the importance of optimizing web front-end resources, tools for testing front-end performance, techniques for optimizing it, and how these activities complement load testing.
Introduction
The arrival of Web 2.0 has put great emphasis on the look and feel of web applications, which puts more and more complexity on the front-end. Today, bad user experiences occur not only because of application, database, server, and infrastructure tuning, but also because of the time it takes to load a web page and display its contents on the end user's screen.
Therefore, identifying and resolving all client-side performance issues without losing the look and feel of the web application is of utmost importance for a good user experience. Another factor that makes front-end performance even more important is that Google now ranks websites in search results based on page speed.
What is a website Front-end?
The front-end, or client side, is the user interface: the part of an application (website or software) that the user views on screen. This interface lets the user interact directly with the application by entering commands, and provides access to other application areas as well.
Importance of Front-end Performance Optimization
A few years back, website performance optimization meant optimizing the server side only, since websites were mostly static and almost all processing was done on the server. With the advent of Web 2.0 technologies, web applications are now dynamic, and the client side deserves due attention besides server-side processing. This shift in web application architecture has forced performance engineers to rethink their performance testing and optimization strategies.
A web application performance can be improved at two levels.
- Back-end/Server side
- Front-end/Client side
Typically, application stakeholders (especially developers) strongly believe that back-end optimization matters most for performance. Server-side bottlenecks are certainly important, because they can make a web application unusable, but they are not the whole story. Client-side performance issues are even more critical because they have a larger impact on user experience: improving back-end performance by 50% improves overall application performance by only about 10%, whereas cutting front-end time in half can improve overall performance by 40% or more. Moreover, front-end optimization is relatively simple and cost-effective compared to back-end optimization, which may require redesigning application architecture and code, code profiling, adding or modifying hardware, distributing the database, and so on. A study at Yahoo found that on average only 10-20% of total page loading time is spent on the back-end; the other 80-90% is spent on the front-end.
Difference between Front-end performance testing and load testing?
Front-end performance testing asks "How fast does this page load?" from a single user's point of view; we also call this activity "web performance optimization". Load testing asks "How fast does this page load when 1000 users are working on the application?", that is, from a multiple-user point of view where resources are used concurrently.
To know how many users your application can handle, to validate your business requirements, or to test the overall performance of your application under a specific worst-case scenario, you need to load test your application using a load testing tool like AgileLoad.
To optimize the rendering speed on the front-end for a single user, AgileLoad can easily be used. Other specialized tools also exist on the market.
Front-end Performance Testing Tools
A few years ago, it wasn't easy for a web developer to figure out what actually happened after a user submitted a request in the browser. These days, various online tools can identify all the activity triggered when the user hits Enter in the address bar. These tools:
- Grade a web page against a predefined or user-defined rule set
- Offer suggestions for improving the page's performance
- Summarize the page's components
- Display statistics about the page
Some well-known front-end performance testing tools are:
- Page Speed
- Y-Slow
- Firebug
- Web page test
- Yottaa.com
Page Speed
Page Speed is an open-source Firefox/Firebug add-on launched by Google that evaluates a web page and provides suggestions to minimize its loading time. Through the associated service, web pages are fed through a Google server and various algorithms are applied to make them more efficient and fast, so page retrieval is quicker when users reach those pages through the Google search engine.
Y-Slow
Yahoo Y-Slow is a browser plug-in that tests a web page against various optimization rules defined by the Yahoo performance team and recommends suggestions to optimize the page.
Firebug
Firebug is another browser plug-in that provides various services, including debugging of front-end development, tracking of all network requests, and profiling of JavaScript function calls. Firebug is a favorite tool among developers for client-side performance evaluation and profiling.
Web Page Test
Web Page Test is a free online service for testing website front-end speed. Website speed can be tested on all popular web and mobile browsers from different geographical locations. It provides detailed information on all application components, which can be very helpful for optimization.
Yottaa
Yottaa is a web optimization solution that provides a performance score for a web application and identifies the areas that can contribute most to its performance.
Front end Performance testing with AgileLoad
AgileLoad Script Editor captures and analyzes all the requests made between the user and the application to build a test scenario.
The Replay function validates the script generated by replaying and comparing each request with the initial scenario.
The Replay tab contains, for each page of your test scenario, a graphical bar chart showing the time spent on the primary request (in blue) and the overall response time (in orange). It also details all the resources loaded, the time spent on each resource, and the detailed HTTP response (body, client HTTP header, server HTTP header) associated with each HTTP request.
For each page, you also get the HTML view, source view, structure view, HTML tree view, and HTML server headers view.
A web page performance summary gives you details on DNS time, TCP connection time, SSL handshake time, the send time, server time, receive time, response HTTP status, response size in bytes etc.
This page speed waterfall highlights problematic resources to be optimized for each page of your test scenario.
Front-end Implementation
The front-end of a web application is generally based on a thin-client architecture. A thin client doesn't process any data; it only presents the user interface (UI) for interacting with the application. In a typical web application the client side contains only the web browser. Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, and Opera are the most common browsers in use today. The browser's responsibility is to communicate with the web server over the HTTP protocol, render the UI of the web application, and accept user input.
In web applications the user interface is generally rendered as an HTML document. This document contains text, input fields, links to other resources, embedded objects, and references to images, scripts, and style sheets. The web browser retrieves the root HTML document, parses its text, and resolves the referenced resources.
Afterward, scripts are executed, style sheets are processed, and the remaining content is rendered to the user. Depending on the browser, these steps may be executed concurrently or one after another.
User input is delivered through HTML elements called forms. Forms collect user input through input elements such as text fields and check boxes. On submission, the form data is sent to the server in an HTTP request and the corresponding response is rendered to the user.
Front-end Performance Optimization Techniques
In recent times a lot of work has been done on client-side optimization, with Yahoo and Google as pioneers. They not only provide services for measuring front-end performance but also define sets of rules for optimizing client-side performance. These rules are now followed widely to make applications faster. The full list is long and not easy to cover here, so we will discuss the rules that help most in optimizing client-side performance.
1. Minimize HTTP Requests
An HTTP request fetches the root HTML document, which may refer to other page resources such as images, scripts, and style sheets. Each of these resources must be fetched with its own HTTP request, and every HTTP request adds overhead because it creates network traffic between the client and the server. Reducing the number of resources decreases the HTTP requests required to render the page and improves performance.
One approach to reducing page components is to simplify the design, but that can hurt the look and feel. The better approach is to keep the needed resources but combine them to limit user response time. Combining all scripts into a single script, and all style sheets into a single style sheet, is a challenging task but greatly helps achieve the desired performance goal.
Similarly, web page images can be combined using techniques such as CSS sprites, image maps, and inline images.
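Combining text resources can be as simple as concatenation at build time. The sketch below merges several CSS files into one so the page needs a single request instead of many (filenames are illustrative; for JavaScript, make sure each file's statements are properly terminated before concatenating).

```python
from pathlib import Path

def combine_files(paths, out_path):
    """Concatenate several text resources (e.g. CSS files) into one,
    so the page needs a single HTTP request instead of many."""
    merged = "\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(merged)
    return merged
```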
2. Use a Content Delivery Network
Web applications are used all around the world. If an application is deployed in a single place, users accessing it from far away suffer network delays. Applications can be deployed across different geographical locations to serve users all over the globe, and user response time can be greatly improved simply by distributing static web content across various locations, instead of starting with the difficult task of redesigning the application to distribute dynamic content.
A content delivery network (CDN) is a collection of web servers distributed across various locations that serves web content efficiently. A user's request should be served by the closest web server, typically the one reachable in the fewest network hops.
Some large internet companies have developed their own CDNs, but that may not be cost-effective for smaller companies; various CDN service providers on the market offer services that can be used to optimize end-user response time.
3. Add an Expire or Cache Control Header
The browser cache is another source of client-side performance optimization. Applications today are getting richer, using many page components such as images, style sheets, and scripts. The first time a user visits a page, the browser makes many HTTP requests to download all the page resources, but those resources don't all need to be downloaded again on subsequent visits. Modern browsers can cache page components and reuse them instead of requesting the same resources from the web server every time, which reduces both the downloaded resources and the HTTP requests. For static components, use a far-future Expires header (effectively "never expires"); for dynamic components, the web server should use a cache-control header to tell the client how long the component may be cached.
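As a sketch of the two policies in Python (the helper name is ours; the header value formats follow the HTTP specification):

```python
import time
from email.utils import formatdate

ONE_YEAR = 365 * 24 * 3600

def cache_headers(static, max_age=ONE_YEAR):
    """Far-future Expires for static components; a bounded
    Cache-Control lifetime for dynamic ones."""
    if static:
        return {
            "Cache-Control": f"public, max-age={max_age}",
            "Expires": formatdate(time.time() + max_age, usegmt=True),
        }
    return {"Cache-Control": f"private, max-age={max_age}"}
```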
4. Minify JavaScript and CSS
Removing unnecessary characters from code is called minification. Page load time can be reduced by stripping comments, metadata, white space, and newlines; this shrinks the page and therefore its download time. Obfuscation is another optimization applied to source code, which can produce even better results than minification.
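A toy illustration of the idea (real minifiers do much more, and do it safely):

```python
import re

def minify_css(css):
    """Strip comments and collapse whitespace in a CSS string."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # trim around punctuation
    return css.strip()

src = "/* theme */\nbody {\n  color : red ;\n}\n"
print(len(src), "->", len(minify_css(src)))
```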
5. GZip Components
Today all modern browsers support compressed components. Plain-text documents such as HTML, JS, CSS, and XML can be compressed on the server side before being transferred to the browser, which decompresses them before displaying them to the end user. An important point is that binary files such as images, PDF, and SWF should not be compressed again, because they are already compressed.
Compressing already-compressed elements wastes CPU and can even increase file size. You don't need to change any code to compress web page components; compression can easily be enabled on most web servers through basic configuration. The following table shows the impact of minification and compression on web page size.
<table>
<thead>
<tr>
<th>Source</th>
<th>Original Size</th>
<th>Minified Size</th>
<th>Compressed Size</th>
<th>Minified + Compressed size</th>
</tr>
</thead>
<tbody>
<tr>
<td>HTML</td>
<td>101 KB</td>
<td>97 KB</td>
<td>17 KB</td>
<td>16 KB</td>
</tr>
<tr>
<td>JS</td>
<td>243 KB</td>
<td>195 KB</td>
<td>73 KB</td>
<td>63 KB</td>
</tr>
<tr>
<td>CSS</td>
<td>90 KB</td>
<td>68 KB</td>
<td>19 KB</td>
<td>14 KB</td>
</tr>
<tr>
<td>Total</td>
<td>434 KB</td>
<td>360 KB</td>
<td>109 KB</td>
<td>93 KB</td>
</tr>
</tbody>
</table>
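The effect shown in the table is easy to reproduce with Python's standard gzip module, including why re-compressing already-compressed data backfires:

```python
import gzip

text = ("body { color: red; }\n" * 200).encode()  # repetitive, like real CSS
once = gzip.compress(text)
twice = gzip.compress(once)   # compressing already-compressed data

print(f"original {len(text)} B, gzipped {len(once)} B, "
      f"double-gzipped {len(twice)} B")
```

The second pass only adds container overhead, because the gzipped input is already high-entropy and cannot be compressed further.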
6. Put Style Sheets at the Top
Research at Yahoo discovered that putting style sheets in the document HEAD allows the browser to render progressively, making the page appear to load faster. This is even more useful for larger pages: instead of making the user wait on a blank white screen while every element loads and then seeing all the components appear at once, it gives a better experience to display page components gradually.
Some browsers, including IE, do not perform progressive rendering when style sheets are placed at the bottom, and frustrate the user with a blank page instead.
7. Put Scripts at the Bottom
Downloading multiple resources concurrently is called parallel downloading, and it improves the user experience by fetching the required resources in less time. According to the HTTP specification, a browser downloads no more than two components in parallel per hostname, so serving resources from multiple hostnames lets more components download concurrently. Scripts, however, block parallel downloading. The best solution is to put scripts at the bottom of the page. The DEFER attribute can also allow the browser to continue parallel downloading, but Firefox does not support DEFER, and even in IE, which does, the desired results may not materialize. So the safest policy is to put scripts at the bottom, allowing all browsers to download in parallel.
8. Avoid CSS Expression
CSS expressions are used to set style sheet properties dynamically; an example is alternating the background color every 30 minutes. CSS expressions are evaluated very frequently: whenever the page is rendered or resized, on page scroll, and even on mouse hover. Simply moving the mouse around a page can trigger more than 10,000 evaluations.
One way to handle this is to use one-time expressions: the first time the expression is evaluated, set the computed values as explicit style values. If dynamic style values are unavoidable, use event handlers instead of CSS expressions.
9. Make JavaScript and CSS External
Because JavaScript and CSS files are cached by the browser, serving them as external files makes subsequent page loads faster. When both are inlined in the HTML document, they are downloaded every time the document is downloaded, increasing its size. The better approach is to make JavaScript and CSS external files that the browser caches: the HTML document shrinks, and although the number of HTTP requests rises on the first visit, later visits are served from the cache.
10. Image Optimization
Web pages today are made attractive with lots of images, and page load time can be greatly improved by optimizing those images alone. Choosing the appropriate file format helps greatly: JPEG is normally used for images with many colors, while PNG is best for rendered text and for images with alpha transparency.
11. Reduce Domain Name System (DNS) Lookups
The DNS maps domain names to IP addresses. A DNS lookup normally takes 20-120 milliseconds, and the browser can't download anything from a hostname until its DNS lookup has completed.
DNS lookups are cached for better performance in the operating system's DNS cache, but most browsers have their own cache as well. IE caches DNS lookups for 30 minutes by default, and the operating system cache is not consulted while the record exists in the browser cache.
With an empty browser cache, the number of DNS lookups equals the number of unique hostnames on the page, so reducing the number of unique hostnames reduces page load time.
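Counting the unique hostnames among a page's resources gives the number of cold-cache DNS lookups; a quick sketch (the URLs are illustrative):

```python
from urllib.parse import urlparse

resources = [
    "http://www.example.com/index.html",
    "http://static.example.com/app.js",
    "http://static.example.com/app.css",
    "http://img1.example.com/logo.png",
    "http://img2.example.com/hero.png",
]

unique_hosts = {urlparse(u).hostname for u in resources}
print(len(unique_hosts), "DNS lookups on an empty cache")
```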
12. Avoid Redirects
Redirects are implemented with HTTP 3xx status codes, especially 301 (Moved Permanently) and 302 (Found). A redirect indicates that the client must take an additional action to complete the request, and its main problem is that it slows down page loading. Redirects occur in various situations, for example when the trailing slash (/) is missing at the end of a URL, or when an old website forwards to a new one. Trailing-slash redirects can be fixed in Apache by using Alias.
13. Remove Duplicate Scripts
Usually multiple developers work on a web application, so there is a chance of script duplication on web pages. Duplicate scripts really hurt page performance: they cost extra execution, resources, and HTTP requests for code that adds nothing in the end. Duplicate scripts may not have a great effect when the application is accessed in Firefox, but they really affect IE. Duplicate script insertion can be avoided by using a script management module.
14. Turn off Entity Tags
Entity tags (ETags) are used to validate that the browser's cached data is up to date: the ETag of the browser's cached copy is compared with the one on the server. ETags have a limitation, however: an ETag is only valid against the specific server that generated it. The technique works well with a single hosting server, but not when the application is hosted on multiple servers and the browser fetches a component from one server and validates it against another. In particular, the ETags generated by IIS and Apache for the same component won't match from one server to another, so the user receives a full 200 response instead of the small, fast 304 response an ETag match would allow. The best approach is therefore to turn off ETags when your application is hosted on multiple servers.
15. Make Ajax Cacheable and Small
One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling their thumbs waiting for those asynchronous JavaScript and XML responses to return. To improve performance, it is important to optimize these Ajax responses, and the most important way to do that is to make the responses cacheable.
Conclusion
Web applications are becoming richer and richer in design and content, and at the same time a good user experience has become the most desirable attribute. There is a misconception that the desired application response time can be achieved by optimizing the server side only. Research has shown that 80-90% of page load time is spent on the client side, and that 40-50% of page load time can be optimized by focusing on the front end of the application, compared to roughly 20% for server-side optimization.
Front-end performance optimization is also not the same as back-end optimization. One is about improving performance from a single user's point of view; the other is focused on improving performance from a multiple-user point of view, when resources are used concurrently.
Both tasks are complementary and can be tested with Agileload.
On the Effectiveness of Mann-Kendall Test for Detection of Software Aging
Fumio Machida
Knowledge Discovery Research Lab.
NEC Corporation, Japan
f-machida@ab.jp.nec.com
Artur Andrzejak
Institute of Computer Science
Heidelberg University, Germany
artur@uni-hd.de
Rivalino Matias, Elder Vicente
School of Computer Science
Federal University of Uberlandia, Brazil
rivalino@fc.ufu.br, elder@mestrado.ufu.br
Abstract—Software aging (i.e. progressive performance degradation of long-running software systems) is difficult to detect due to the long latency until it manifests during program execution. Fast and accurate detection of aging is important for eliminating the underlying defects already during software development and testing. Also in a deployment scenario, aging detection is needed to plan mitigation methods like software rejuvenation. The goal of this paper is to evaluate whether the Mann-Kendall test is an effective approach for detecting software aging from traces of computer system metrics. This technique tests for existence of monotonic trends in time series, and studies of software aging often consider existence of trends in certain metrics as indication of software aging. Through an experimental study we show that the Mann-Kendall test is highly vulnerable to creating false positives in context of aging detection. By increasing the amount of data considered in the test, the false positive rate can be reduced; however, time to detect aging increases considerably. Our findings indicate that aging detection using the Mann-Kendall test alone is in general unreliable, or may require long measurement times.
Keywords—Software Aging, Trend detection, Mann-Kendall test
I. INTRODUCTION
Software aging and rejuvenation research field aims to understand the phenomenology of software aging and develop effective countermeasures such as software rejuvenation. At the base of this process, a major requirement is the accurate detection of software aging effects. The variety and complex nature of software aging sources, make the correct detection of their effects a real challenge. Despite the importance of accurate aging detection, we observe a lack of preliminary studies evaluating the effectiveness of techniques used for this purpose.
Surveying the literature reveals that a common approach adopted in previous studies is based on the following general two steps. First, selected computer system metrics (e.g., free or used memory), which are assumed to capture aging effects, are monitored under controlled workload conditions. Next, based on the collected data set and supporting assumptions (e.g., aging is present if memory usage increase monotonically under constant workload), a statistical technique is used to detect trends that are considered as a sign of software aging. We highlight two aspects in applying this approach.
First, we observe that the monitored system variables are not always selected taking into account the specifics of the trend test used. This is a considerable risk, given that every trend analysis technique makes assumptions that need to be met; if assumptions are violated, test results can be erroneous. Specifically, in the presence of serial correlation, we know from the literature that many trend detection techniques become less accurate: the larger the correlation, the larger the uncertainty [1], [2]. Note that different system variables may demonstrate varying levels of autocorrelation in their time series. Hence, not all system variables will provide time series with the properties required by all trend tests, making both decisions strongly dependent on each other.
Second, we observe a lack of theoretical or experimental validation of the supporting assumptions considered. For example, several combinations of events not related to software aging may promote a monotonic increase of system memory usage under constant workload; in [3] an example of such a situation in a real system is provided for this specific case. If the supporting aging assumptions are not verified, that is, grounded in a solid theory or in consistent observations from previous experiments, the results can be misleading.
Based on the above findings from surveying the literature, in this paper we present the results of a study regarding the effectiveness of applying the Mann-Kendall trend test to software aging detection. We selected this test for our study given its broad citation in the software aging and rejuvenation literature (e.g., [4], [5], [6], [7], [8]). We also demonstrate how different system variables influence the quality of the test results.
The rest of this paper is structured as follows. Section II describes the problem context. Section III contains the description of our experimental plan, shows and discusses the results of our study. Finally, we state our conclusions in Section IV.
II. MANN-KENDALL TEST APPLIED TO SOFTWARE AGING DETECTION
A. Detecting aging via trend discovery
The Mann-Kendall test [9], [10] is a non-parametric test for detection of monotonic trends in time series data. In context of aging detection, such time series are traces of relevant computer system metrics (so-called aging indicators) such as Resident Set Size or Heap Usage (see Section III-A). As many types of aging (e.g., memory leaks) manifest via increased resource usage, positive (upward) slope of such indicators is commonly interpreted as a sign of software aging. Hence,
experimental studies of software aging have applied the Mann-Kendall test along with such indicators for the purpose of aging detection.
Our main criticism with respect to this approach is the observation that detected trend may not be caused by software aging. For example, if an aging indicator has large variance by nature, this test might produce multiple false alarms. Another cause for existence of a trend in a metric might be increased yet legitimate resource usage. This can have multiple causes, e.g., specific application design patterns or interaction of underlying system components, which may or may not be dependent on the workload.
The variance of the aging indicator values might be mitigated by increasing the amount of data (i.e., length of traces/time series) subjected to the test. To this aim, one can measure the system for a prolonged amount of time and apply tests repeatedly on all data since the beginning of measurements. We also experimentally evaluate this alternative approach against using Mann-Kendall test on a moving window of a maximal size. Results for both approaches are contrasted in Section III.
B. Mann-Kendall test
The Mann-Kendall test [9], [10] verifies the null hypothesis, $H_0$, that there is no trend over time, against the alternative hypothesis, $H_1$, that there is an upward or a downward monotonic trend (i.e., a nonzero slope). Let $Y = (t_i, y_i)$ be a time series, where $y_i$ is the series value at time point $t_i$ ($i$ is an integer index). Given $n$ consecutive data points, the Mann-Kendall statistic $S$ is computed by
$$S = \sum_{k=1}^{n-1} \sum_{i=k+1}^{n} \text{sgn}(y_i - y_k), \quad (1)$$
where $\text{sgn}$ denotes the signum function
$$\text{sgn}(x) = \begin{cases} -1, & \text{if } x < 0 \\ 0, & \text{if } x = 0 \\ 1, & \text{if } x > 0. \end{cases} \quad (2)$$
The value of the $S$-statistic, computed via Equation (1), is compared against a critical value with respect to a given significance level $\alpha$, where the critical values are drawn from standard tables (see e.g., [11]).
According to Kendall [10], a Normal approximation test could be used for data sets with more than ten values, providing that there are not many ties in $Y$ (i.e., we have $y_l \neq y_k$ for any $l > k$). A standardized test statistic $Z$ is used in this case and is compared to the Normal distribution.
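Equations (1) and (2) translate directly into code. A minimal Python sketch (the paper does not specify an implementation language):

```python
def mk_s_statistic(y):
    """Mann-Kendall S statistic, Equation (1): the sum of sgn(y_i - y_k)
    over all ordered pairs k < i of the n data points."""
    n = len(y)
    s = 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            diff = y[i] - y[k]
            s += (diff > 0) - (diff < 0)  # signum, Equation (2)
    return s

# A strictly increasing series of n points attains the maximum S = n(n-1)/2.
print(mk_s_statistic([1, 2, 3, 4, 5]))  # 10
```

The resulting $S$ is then compared against the tabulated critical value for the chosen significance level $\alpha$.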
C. Varying the amount of input data for $S$-statistic
While the Mann-Kendall test is robust against missing values, the amount $n$ of data contributing to the $S$-statistic might be significant for reducing the impact of “noise” (e.g. variance) of aging indicators. We evaluate this aspect by contrasting the two approaches explained below. Their respective performance for trend detection is evaluated in Sections III-B and III-C.
1) Sliding window of limited size - MaxWinSize: The first approach, namely MaxWinSize, uses a limited number of consecutive values of the time series. In online monitoring of aging indicators, the number of measurements increases over time. All observed values are taken into account for the test as long as this number is less than or equal to $n_{\text{max}}$. When the number of observations exceeds $n_{\text{max}}$, a moving window is used to sample the most recent $n_{\text{max}}$ values. Since the Mann-Kendall test is used with at most 40 samples in [11], we also implement this approach with $n_{\text{max}} = 40$. Note that we could switch to the Normal approximation test once we have enough sample points, but here we focus on the Mann-Kendall test using a sliding window with a limited number of samples.
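The MaxWinSize policy can be sketched as follows (the S computation from Equation (1) is inlined, and the critical value `s_crit` is an illustrative placeholder rather than the tabulated value for a given significance level):

```python
def detect_trend_maxwin(series, n_max=40, s_crit=96):
    """Apply the Mann-Kendall S test to at most the n_max most recent samples;
    returns True when an upward trend is confirmed (H0 rejected)."""
    window = series[-n_max:]  # moving window of limited size
    s = sum((b > a) - (b < a)
            for idx, a in enumerate(window)
            for b in window[idx + 1:])
    return s > s_crit

print(detect_trend_maxwin(list(range(100))))  # True: last 40 samples rise monotonically
print(detect_trend_maxwin([7] * 100))         # False: S = 0 for a constant series
```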
2) Sliding window of unlimited size - UnlimWinSize: The alternative approach, namely UnlimWinSize, removes the size limitation of the first approach and uses all values from the initial data point (i.e., the beginning of the measurement). This approach might be useful to detect a long-term trend that is not clearly observed in a short time period. In addition, the accuracy of trend detection might be improved by averaging out “noise” in the system metric. However, the test might become computationally more expensive than the MaxWinSize approach, since now each data value needs to be compared to all subsequent data values.
III. EXPERIMENTAL STUDY AND EVALUATION
A. Experimental scenario
Our experiments focus on detecting software aging caused by memory leaks. For the controlled experiments, we create a synthetic workload generator (SWG) that emulates the behavior of a general-purpose application by requesting and releasing memory blocks, repeatedly, at random intervals (up to 30 seconds). Figure 1 presents the pseudo-code of SWG. The size of memory blocks is randomly sampled from a range of values according to the selected workload intensity; low, normal or high.
The SWG is programmed in the C language under Linux, and uses the standard functions malloc() / free() for dynamic memory allocation. We injected a fault in free() that leaks memory chunks at a given rate, $p$, such that $p\%$ of free() calls fail to release the allocated memory. The leak rate $p$ is also a controlled variable in SWG. If $p$ is greater than $0\%$, the memory consumption increases over time, resulting in a memory leak.
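The fault-injection scheme can be sketched as follows (a Python analogue of the C generator; the block-size range is illustrative):

```python
import random

def swg_run(steps, p_leak, seed=0):
    """Synthetic workload: each step allocates a block and then frees it,
    except that a fraction p_leak of the free operations is silently skipped."""
    rng = random.Random(seed)
    allocated = []
    for _ in range(steps):
        allocated.append(bytearray(rng.randint(1024, 65536)))  # allocate a block
        if rng.random() >= p_leak:
            allocated.pop()  # normal free; skipped with probability p_leak
    return len(allocated)    # number of leaked blocks still held

print(swg_run(1000, p_leak=0.0))  # 0: no leak injected
print(swg_run(1000, p_leak=1.0))  # 1000: every free fails
```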
Choosing a relevant aging indicator is key to robust software aging detection. As discussed in previous works (e.g., [12], [13]), Free Memory (FM) is a system-wide aging indicator, and hence its values might superimpose noise on the actual aging trend. Application-specific indicators such as Resident Set Size (RSS) and Heap Usage (HUS) trace the actual memory demand of a target application more precisely. The values of FM and RSS are obtained by monitoring the Linux /proc directory, while the value of HUS is collected by instrumenting the memory allocator [14].
Figure 2. Observed time series data of the aging indicators Free Memory (FM), Resident Set Size (RSS), and Heap Usage (HUS) under workload generated by SWG for all combinations of leak rates (p = 0%, 0.5%, or 1.0%) and workload intensity (low, normal, or high). The scale for RSS and HUS is on the left side of each chart, while the scale of FM is shown on the right side.
By changing the leak rate (0%, 0.5% or 1.0%) and workload intensity (low, normal or high), we obtained a set of time series data for FM, RSS, and HUS, shown in Figure 2. In each experiment, the values of aging indicators are collected about every nine seconds and we use the data observed until 5000 seconds after the start of the software execution. The FM values decrease over time regardless of the leak rate. On the other hand, the values of RSS and HUS gradually increase for leak cases (i.e. p = 0.5% or p = 1.0%).
B. Results for MaxWinSize
First we evaluate the Mann-Kendall test with inputs from moving windows of maximum size 40 (i.e. MaxWinSize-approach, see Section II-C). The results are summarized in Figure 3. Plotted dot at a specific time point shows that the Mann-Kendall test has confirmed the trend existence (i.e. $H_0$ is rejected) with a confidence level of 95%. In every chart, results for each of the aging indicators (FM, RSS, and HUS) are plotted. All combinations of leak rates (p = 0%, 0.5% and 1.0%) and workload intensities (low, normal, high) are shown.
For the aging indicator FM (topmost in the charts), trends are detected almost all the time, regardless of leak rates and workload intensities. As a result, we conclude that FM is hardly suitable for aging detection using the MaxWinSize-approach.
For the non-leak case (p = 0%), trend detection is less frequent for RSS and HUS than for FM, although there are still some false alarms. The workload intensity also affects the number of false alarms: higher workloads produce more false alarms for both RSS and HUS. HUS produces fewer false alarms than RSS, hence HUS is more robust against false alarms of memory-related aging.
Figure 3. Time charts plotting the aging detection time points for the MaxWinSize-approach; a plotted dot shows that the Mann-Kendall test has confirmed a trend at this time point (plotted separately for FM, RSS, and HUS). Panels are arranged in three rows (p = 0%, 0.5%, and 1.0%) and three columns (Low, Normal, and High workload).
For the leak cases (p = 0.5% and p = 1.0%), the test should
reject the null hypothesis as early as possible. However, both RSS and HUS cause a lot of false negatives (i.e., no trend detection), especially in the case \( p = 0.5\% \). The number of false negatives is reduced for \( p = 1.0\% \), but it is still quite likely that no leak is indicated, even after the measurement has lasted for more than 3000 seconds. For low and normal workloads, HUS has fewer false negatives than RSS. The difference between HUS and RSS becomes blurred for the high workload cases.
We observe that if the aging rate is constant, the trend detection performance of the Mann-Kendall test does not improve with longer measurement duration. This can be attributed to the fact that in the MaxWinSize-approach the tests consider only a small fragment of recent values of the aging indicator, and the local profile of the aging indicators does not evolve over time.
**C. Results for UnlimWinSize**
Next, we apply the Mann-Kendall test combined with UnlimWinSize-approach (i.e., test input is all available data since beginning of measurement, see Section II-C) to the same data set. The results are shown in Figure 4 (presentation conventions are the same as in Figure 3).
Also in this case FM is not useful to distinguish leak cases from the non-leak case, since the trend is detected almost all the time. Results for HUS in cases of low and normal workloads are satisfactory. The tests are reliable after 2000 seconds, as they can distinguish the leak cases from the non-leak case accurately. For high workload case, however, the test using HUS indicates many false alarms (i.e., for \( p = 0\% \)) after about 3000 seconds. Except for this specific scenario, the combination of HUS with the UnlimWinSize-approach achieves good performance of aging detection.
RSS as an aging indicator can also detect the trend with high confidence in normal and high workload cases. In high workload, however, it faces many false alarms in the non-leak case. Moreover, it almost completely fails to detect the trend in low workload case with leak rate \( p = 0.5\% \).
**D. Comparative analysis**
To evaluate the performance of the aging indicators for software aging detection, we focus here on the results for RSS and HUS, comparing the sensitivity and specificity of the experimental results. Sensitivity is defined as the fraction of test points that correctly report a trend when a leak is truly present. Figure 5 shows these sensitivities for the case \( p = 1.0\% \). The UnlimWinSize-approach achieves higher sensitivity than the MaxWinSize-approach in all cases (i.e., for all combinations of workloads and aging indicators). The better result for the UnlimWinSize-approach can be explained by the larger amount of input data analyzed by the test.
Conversely, specificity is defined as the fraction of test points that correctly report no trend when there is no leak. Figure 6 summarizes the results for both approaches (MaxWinSize vs. UnlimWinSize). There are no significant differences between the two approaches; only in selected cases is the MaxWinSize-approach better (higher specificity).
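With the per-time-point detections of Figures 3 and 4 encoded as booleans, both measures reduce to simple fractions; a sketch (the paper's exact aggregation over traces may differ):

```python
def sensitivity(detections_under_leak):
    """Fraction of test points that flag a trend when a leak is truly present."""
    return sum(detections_under_leak) / len(detections_under_leak)

def specificity(detections_without_leak):
    """Fraction of test points that correctly stay silent when there is no leak."""
    return 1 - sum(detections_without_leak) / len(detections_without_leak)

print(sensitivity([True, True, True, False]))    # 0.75
print(specificity([False, False, True, False]))  # 0.75
```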
However, if we take Figure 3 and Figure 4 into consideration, we see that most of the false positives appear only in an early phase of the experiments using the UnlimWinSize-approach (especially in the low and normal workload cases). We conclude that the UnlimWinSize-approach becomes more stable with longer measurement runtimes, i.e., its ratio of false positives might be reduced over time.
**E. Lessons learned**
As a brief summary of this section, we list the lessons learned from this study.
First, we compare the appropriateness of the aging indicators. As observed in the results of both approaches, the test using FM cannot distinguish the leak cases from the non-leak case. The detection performance is significantly improved by using RSS or HUS. The difference between these two metrics is not large; in detail, RSS has slightly lower sensitivity and specificity, but this is also case-dependent.
The Mann-Kendall test with a moving window limiting the number of samples (MaxWinSize-approach) is not a reliable technique for aging detection without knowledge of the appropriate window size. In our experiments with 40 samples as the window size, even with RSS or HUS as the aging indicator, the MaxWinSize-approach produces a lot of false alarms in the non-leak case and false negatives in the leak cases. While the detection performance can be improved by increasing the window size, determining an appropriate window size is not a trivial issue. The primary reason is that the optimal size depends on various factors, including the type of software aging, the monitoring interval, the aging indicator, and the application workload.
The Mann-Kendall test without any limitation on the number of sample points (UnlimWinSize-approach) is more suitable for aging detection, but it needs more data to achieve high confidence and depends on the quality of the aging indicators used. As the number of samples increases, false alarms and false negatives are reduced. To distinguish the leak cases from the non-leak case, a sufficiently long period of observation is required.
Our experimental results clarify that applying trend tests alone is not enough to have accurate aging detection. Thus, in order to consider all details involved, we conclude that it is necessary to follow a comprehensive protocol for aging
detection. This protocol should provide clear procedures to answer questions such as:
- How to calculate the sample size with respect to the selected aging indicators and trend test technique?
- Which trend test is more appropriate for a given class of aging indicators?
- How to compute the risk of false-positives or false-negatives occurrences?
These are among others necessary questions to help the experimenter to conduct a more educated and reliable decision making.
IV. CONCLUSION
In this paper we studied the effectiveness of the Mann-Kendall test applied to software aging detection, as it has been used in previous works. We showed through an experimental study that the Mann-Kendall test suffers from high rates of false positives, commonly indicating software aging even where there is no aging. This can be explained by the fact that the variability (or “noise”) of some underlying system metrics (aging indicators) creates short-term trends which are detected by the test.
To mitigate the latter effect, we contrasted the results of applying the Mann-Kendall test to time series data from a moving window of maximum size 40 (MaxWinSize-approach) versus applying it to all data from the beginning of the measurement until the current time point (UnlimWinSize-approach). Indeed, the UnlimWinSize-approach is more accurate and produces, in most cases, fewer false positives once a certain amount of metric data is available (in our case, after about 60% of the complete trace data). Thus, the UnlimWinSize-approach trades running time for accuracy, which limits its utility for detecting aging during standardized software testing as suggested in [15], [16].
Our study has also uncovered that the choice of underlying system metrics as aging indicators has a significant impact on the aging detection capability. For example, the Free Memory metric turned out to be useless (in our experimental setting), as it exhibits strong trends (see Figure 2, top row) and so always indicates the presence of aging. On the other hand, the metrics Resident Set Size and Heap Usage turned out to be more useful.
Our future work will refine these results in several directions. We will broaden the study including other system metrics and experimental environment. Further explorations will consider the confidence in trend presence, not only a binary trend confirmation. Finally, we will further study the efficiency of aging detection by version comparison method [15].
V. ACKNOWLEDGMENTS
This work is supported in part by the grant AN 405/2-1 entitled Automated, minimal-invasive Identification and Elimination of Defects in Complex Software Systems financed by the Deutsche Forschungsgemeinschaft (DFG), and Brazilian research agencies CNPq, CAPES, and FAPEMIG.
Parallelizing RRT on Distributed-Memory Architectures
Didier Devaurs, Thierry Siméon and Juan Cortés
Abstract—This paper addresses the problem of improving the performance of the Rapidly-exploring Random Tree (RRT) algorithm by parallelizing it. For scalability reasons we do so on a distributed-memory architecture, using the message-passing paradigm. We present three parallel versions of RRT along with the technicalities involved in their implementation. We also evaluate the algorithms and study how they behave on different motion planning problems.
I. INTRODUCTION
Due to a wide range of applications, sampling-based path planning has benefited from a considerable research effort [1], [2]. It has proven to be an effective framework suitable for a large class of problems in domains such as autonomous robotics, manufacturing, virtual prototyping, computer graphics, structural biology, and medicine. These application fields yield increasingly difficult, high-dimensional problems with complex geometric and kinodynamic constraints.
The Rapidly-exploring Random Tree (RRT) has become a popular algorithm for solving single-query motion planning problems [3]. It is suited to solve robot motion planning problems involving holonomic, non-holonomic, kinodynamic, or kinematic closure constraints [3]–[5]. It is also applied to the validation and control of hybrid systems [6], [7]. In biology, it is used to analyze genetic network dynamics [8] or protein-ligand interactions [9], [10]. However, when applied to complex problems, the incremental growth of an RRT can become computationally expensive [9], [11], [12]. Some techniques have been proposed to improve the efficiency of RRT, by controlling the sampling domain [12], reducing the complexity of the nearest neighbor search [13], or employing gap reduction techniques [11].
Our objective is to further investigate RRT improvement by exploiting speedup from parallel computation. Some results have been obtained in that sense (Section II). However, existing work considers mainly shared-memory architectures and thus small-scale parallelism, up to 16 processors [14]–[17]. In this work, we are interested in what can be achieved by larger-scale parallelism. We focus on parallelizing RRT on distributed-memory architectures, using the message-passing paradigm. Our contribution is three-fold. First, we propose three parallel versions of RRT, based on classical parallelization schemes: OR parallel RRT, Distributed RRT and Manager-worker RRT (Section III). Second, beside the abstract view provided by the algorithms themselves, we present the main technicalities involved in their development (Section III). Third, we evaluate the algorithms on several motion planning problems and show their differences in behavior, depending on the problem type (Section IV).
II. RELATED WORK
A. Parallel Motion Planning
The idea of improving motion planning performance by using parallel computation was raised in prior work. In a survey of some early work [18], a classification scheme was proposed to review different motion planning approaches and some related parallel processing methods. A more recent trend is to exploit the current multi-core technology available on many of today’s PCs, that easily allows having multiple threads collaboratively solving a problem [19].
Among the most classical approaches, the embarrassingly parallel paradigm exploits the fact that some randomized algorithms, such as the Probabilistic Road-Map (PRM), are what is termed “embarrassingly parallel” [20]. The massive inherent parallelism of the basic PRM algorithm enables a significant speedup, even with relatively simplistic parallelizing strategies, especially on shared-memory architectures. In this approach, computation time is minimized by having several processes cooperatively building the road-map.
Another simple approach is known as the OR parallel paradigm. It was first applied to theorem proving, before being used to provide a parallel formulation of the Randomized Path Planner (RPP) [21]. Its principle is to have several processes running the same sequential randomized algorithm, each one trying to build its own solution. The first process to reach a solution reports it and broadcasts a termination message. The idea here is to minimize computing time by finding a small-sized solution. Despite its simplicity, the OR parallel paradigm has been successfully applied to other algorithms, such as in [22].
A more sophisticated approach is a master-slave scheme developed to distribute the computation of the Sampling-based Roadmap of Trees (SRT) algorithm [23]. In a first step, several trees, which can be RRTs or Expansive Space Trees (ESTs), are computed in parallel by all processes. In a second step, several master processes cooperate to distribute the computation of edges linking these trees, evenly among their respective slave processes.
An approach based on growing several independent trees can lead to a straightforward parallelization. This is the case for RRTLocTrees [24] and for the Rapidly exploring Random Forest of Trees (RRFT) [7], [8]. However, the focus of this paper lies elsewhere, our aim being to provide a parallel version of the basic (single-tree) RRT algorithm.
B. Parallel RRT
There is relatively little work related to parallelizing RRT [14]–[17]. The first one [14] applies the simple OR parallel and embarrassingly parallel paradigms, and a combination of both. To benefit from the simplicity of the shared-memory case, the embarrassingly parallel algorithm is run on a single SMP (symmetrical multiprocessor) node of a multi-nodes parallel computer. The only communication involved is a termination message that is broadcast when a solution is reached, but some coordination is required to avoid concurrent modifications of the tree. This scheme does not make use of the full computational power of the parallel platform, contrary to the OR parallel algorithm, which is run on all processors of all nodes. The same paradigms are also applied on a dual-core CPU in [15], where they are renamed OR and AND implementations. In the Open Motion Planning Library (OMPL) of the ROS framework, the AND paradigm is implemented via multi-threading, thus for shared memory. In [16], the OR paradigm is used on shared memory.
To the best of our knowledge, there has been only one attempt to develop a parallel version of RRT on a distributed-memory architecture. In [17], the construction of the tree is distributed among several autonomous agents, using a message passing model. However, no explanation is given on how the computation is distributed, and how the tree is reconstructed from the parts built by the agents.
III. PARALLELIZING RRT
For scalability purposes, we will parallelize RRT on a distributed-memory architecture, using the message-passing paradigm, one of the most widespread approaches for programming parallel computers. Since this paradigm imposes no requirement on the underlying hardware and requires an explicit parallelization of the algorithms, it enables wide portability. Any algorithm developed following this approach can also be run on a shared-memory architecture, even though this would mean not making optimal use of this architecture. Besides, scalable distributed-memory architectures are rather commonly available, in the form of networks of personal computers, clustered workstations or grid computers. To develop our parallel algorithms, we have chosen to comply with the standard and widely-used Message Passing Interface (MPI). Its logical view of the hardware architecture consists of p processes, each with its own exclusive address space. Our message-passing programs are based on the Single Program Multiple Data (SPMD) paradigm and follow a loosely synchronous approach: all processes execute the same code, containing mainly asynchronous tasks, but also a few tasks that synchronize to perform interactions.
A. OR Parallel RRT
The simplest way to parallelize RRT is to apply the OR parallel paradigm. Algorithm 1 presents our version of an OR parallel RRT, which is similar to the one in [14].
```
Algorithm 1: OR parallel RRT
input : the configuration space C, the root qinit
output: the tree T
1  T ← initTree(qinit)
2  while not stopCondition(T) and not received(endMsg) do
3    qrand ← sampleRandomConfiguration(C)
4    qnear ← findBestNeighbor(T, qrand)
5    qnew ← extend(qnear, qrand)
6    if not tooSimilar(qnear, qnew) then
7      addNewNodeAndEdge(T, qnear, qnew)
8  if stopCondition(T) then
9    broadcast(endMsg)
```
Each process computes its own RRT (lines 1-7) and the first to reach a stopping condition broadcasts a termination message (lines 8-9). This broadcast operation cannot actually be implemented as a regular MPI broadcast routine, for it is a collective operation that would require all processes to synchronize. Rather, the first process to finish sends a termination message to all others, using MPI_Send routines matched with MPI_Recv routines. As we do not know beforehand when these interactions should happen, a non-blocking receiving operation that will "catch" the termination message is initiated before entering the while loop.
The `received(endMsg)` operation is implemented as an MPI_Test routine checking the status (completed or pending) of the request generated by the non-blocking receiving operation. Finally, in case several processes reach a solution at the same time, the program ends with a collective operation for these processes to synchronize and agree on which one should report its solution.
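Lines 1-7 of Algorithm 1 are the standard sequential RRT core that each process runs independently. The following is a minimal runnable sketch of that core in Python, for a 2-D Euclidean space with no obstacles; the function and parameter names are hypothetical (the authors' actual implementation is in C++), and the termination message passing is omitted:

```python
import math
import random

def rrt(q_init, sample, extend_step=0.5, too_similar=1e-3,
        max_iters=2000, goal_test=None):
    """Minimal sequential RRT core: sample, nearest neighbor, extend, add."""
    tree = {0: (q_init, None)}  # node id -> (configuration, parent id)
    for _ in range(max_iters):
        q_rand = sample()
        # findBestNeighbor: brute-force nearest node in Euclidean distance
        near_id, (q_near, _) = min(tree.items(),
                                   key=lambda kv: math.dist(kv[1][0], q_rand))
        d = math.dist(q_near, q_rand)
        if d < too_similar:          # tooSimilar: skip negligible extensions
            continue
        # extend: move from q_near toward q_rand by at most extend_step
        t = min(1.0, extend_step / d)
        q_new = tuple(a + t * (b - a) for a, b in zip(q_near, q_rand))
        tree[len(tree)] = (q_new, near_id)   # addNewNodeAndEdge
        if goal_test and goal_test(q_new):   # stopCondition
            break
    return tree

random.seed(0)
tree = rrt((0.0, 0.0),
           sample=lambda: (random.uniform(-5, 5), random.uniform(-5, 5)),
           goal_test=lambda q: math.dist(q, (4.0, 4.0)) < 0.5)
```

In an OR parallel run, p copies of this loop would execute with different random seeds, and the first to satisfy `goal_test` would broadcast the termination message.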
B. Collaborative Building of a Single RRT
Instead of constructing several RRTs concurrently, another possibility is to have all processes working collaboratively on building a single RRT. Parallelization is then achieved by partitioning the task of building an RRT into sub-tasks, assigned to the various processes. We propose two ways of doing so, based on different decomposition techniques.
(1) Since constructing an RRT consists in exploring a search space, we can use an exploratory decomposition [25]. Each process performs its own sampling of the search space and maintains its own copy of the tree, exchanging with the others the newly constructed nodes. This leads to a distributed (or decentralized) scheme where no task scheduling is required, aside from a termination detection mechanism.
(2) Another classical approach is to perform a functional decomposition of the task [26]. In the RRT algorithm, two kinds of sub-tasks can be distinguished: the ones that require knowledge of the tree (initializing it, adding new nodes and edges, finding the best neighbor of `qrand` and evaluating the stopping conditions) and those that do not (sampling a random configuration and performing the extension step).
\(^1\)http://www.ros.org/doc/api/ompl/html
\(^2\)http://www.mpi-forum.org
\(^3\)Two configurations are deemed too similar if the distance between them is less than the minimum validation step-size along the path.
\(^4\)Note that no space partitioning is involved here.
```
Algorithm 2: Distributed RRT
input : the configuration space C, the root qinit
output: the tree T
1  T ← initTree(qinit)
2  while not stopCondition(T) and not received(endMsg) do
3    while received(nodeData(qnew, qnear)) do
4      addNewNodeAndEdge(T, qnear, qnew)
5    qrand ← sampleRandomConfiguration(C)
6    qnear ← findBestNeighbor(T, qrand)
7    qnew ← extend(qnear, qrand)
8    if not tooSimilar(qnear, qnew) then
9      addNewNodeAndEdge(T, qnear, qnew)
10     broadcast(nodeData(qnew, qnear))
11 if stopCondition(T) then
12   broadcast(endMsg)
```
This leads to the choice of a manager-worker (or master-slave) scheme as the dynamic and centralized task-scheduling strategy, the manager being in charge of maintaining the tree, and the workers having no knowledge of it. We now present both schemes in greater detail.
1) Distributed RRT: Our version of a Distributed RRT is given by Algorithm 2. In each iteration of the tree construction loop (lines 2-10), each process first checks whether it has received new nodes from other processes (line 3). If this is the case, the process adds them to its local copy of the tree (line 4). Then, it performs its own expansion attempt (lines 5-10). If it is successful (line 8), the process adds the new node to its local copy of the tree (line 9) and broadcasts it (line 10). Adding all the received nodes before attempting an expansion ensures that every process works with the most up-to-date state of the tree. At the end, the first process to reach a stopping condition broadcasts a termination message (lines 11-12). This broadcast operation is implemented in the same way as for the OR parallel RRT. Similarly, the broadcast of new nodes (line 10) is not implemented as a regular MPI broadcast routine, which would cause all processes to wait for each other. As a classical way to overlap computation with interactions, we again use MPI_Send routines matched with non-blocking MPI_Recv routines. That way, the received(nodeData) test (line 3) is performed by checking the status of the request associated with a non-blocking receiving operation initiated beforehand, the first one being triggered before entering the while loop, and the subsequent ones being triggered each time a new node is received and processed. Note also that a Universally Unique Identifier (UUID) is associated with each node, in order to provide processes with a homogeneous way of referring to the nodes. Finally, the case of several processes reaching a solution at the same time has to be dealt with, as in the OR parallel RRT.
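The drain-inbox-before-expand discipline and the UUID-based node referencing can be sketched as follows, with plain Python deques standing in for the MPI point-to-point messages (all class and method names are hypothetical):

```python
import uuid
from collections import deque

class Process:
    """Emulates one Distributed-RRT process; deques stand in for MPI messages."""
    def __init__(self, root_id, root_cfg):
        self.tree = {root_id: (root_cfg, None)}  # uuid -> (config, parent uuid)
        self.inbox = deque()
        self.peers = []

    def broadcast(self, msg):
        # Deliver a nodeData message to every other process's inbox.
        for p in self.peers:
            p.inbox.append(msg)

    def step(self, q_new, parent_id):
        # Drain all received nodes BEFORE expanding, so the local copy of
        # the tree is as up to date as possible (Algorithm 2, lines 3-4).
        while self.inbox:
            nid, cfg, parent = self.inbox.popleft()
            self.tree[nid] = (cfg, parent)
        # Local expansion attempt (the new configuration is supplied by
        # the caller here; a real process would sample and extend).
        nid = uuid.uuid4().hex   # UUIDs give a homogeneous node reference
        self.tree[nid] = (q_new, parent_id)
        self.broadcast((nid, q_new, parent_id))
        return nid

root = uuid.uuid4().hex
a = Process(root, (0.0, 0.0))
b = Process(root, (0.0, 0.0))
a.peers, b.peers = [b], [a]

n1 = a.step((1.0, 0.0), root)   # a expands; its node lands in b's inbox
b.step((0.0, 1.0), root)        # b drains the inbox (gets n1), then expands
```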
2) Manager-Worker RRT: Algorithm 3 presents our version of a Manager-worker RRT. The program contains the code executed by the manager (lines 2-10) and the workers (lines 12-16). The manager is the only process having access to the tree. It performs the operations related to its construction, and delegates the expansion attempts to workers. In general, the expansion is the most computationally expensive stage in the RRT construction, since it involves motion simulation and validation. The manager could also delegate the sampling step, but this would not be worthwhile because of the low computational cost of this operation in our settings (i.e. in the standard case of a uniform random sampling in the whole search space); the communication cost would then outweigh any potential benefit. At each iteration of the tree building (lines 3-9) the manager first checks whether it has received new nodes from workers (line 4). If so, it adds them to the tree (line 5). Then, it samples a random configuration (line 6) and identifies its best neighbor in the tree (line 7). Next, it looks for an idle worker (line 8), which means potentially going through a waiting phase, and sends it the data necessary to perform an expansion attempt (line 9). Finally, when a stopping condition is reached, it broadcasts a termination message (line 10). On the other hand, workers are active as long as they have not received this message (line 12), though they can go through waiting phases. During each computing phase, a worker receives some data from the manager (line 13) and performs an expansion attempt (line 14). If it is successful (line 15), it sends the newly constructed node to the manager (line 16).
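The listing of Algorithm 3 is not reproduced above. Following the line numbers cited in this paragraph, it can be reconstructed approximately as follows (a sketch consistent with the description, not necessarily the authors' exact listing):

```
Algorithm 3: Manager-worker RRT (reconstruction)
input : the configuration space C, the root qinit
output: the tree T
1  if manager then
2    T ← initTree(qinit)
3    while not stopCondition(T) do
4      while received(nodeData(qnew, qnear)) do
5        addNewNodeAndEdge(T, qnear, qnew)
6      qrand ← sampleRandomConfiguration(C)
7      qnear ← findBestNeighbor(T, qrand)
8      w ← chooseWorker()
9      send(w, expansionData(qnear, qrand))
10   broadcast(endMsg)
11 else
12   while not received(endMsg) do
13     (qnear, qrand) ← receive(expansionData)
14     qnew ← extend(qnear, qrand)
15     if not tooSimilar(qnear, qnew) then
16       send(manager, nodeData(qnew))
```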
Contrary to the previous algorithms, this one does not require non-blocking receiving operations for broadcasting the termination message. Workers being idle if they receive no data, there is no need to overlap computation with interactions. Before entering a computing phase, a worker waits on a blocking MPI_Recv routine implementing both the received(expansionData) operation and the received(endMsg) test. The type of message received determines its next action: stopping or attempting an expansion. On the manager side, blocking MPI_Send routines implement the broadcast(endMsg) and send(expansionData) operations. The remaining question about the latter is to which worker the data should be sent. An important task of the manager is to perform load-balancing among workers, through the chooseWorker() function. For that, it keeps track of the status (busy or idle) of all workers and sends one sub-task at a time to an idle worker, choosing it in a round-robin fashion. If all workers are busy, the manager waits until it receives a message from one of them, which then becomes idle. This has two consequences. First, on the worker side, the send(nodeData) operation covers two MPI_Send routines: one, invoked when the expansion attempt succeeds, sends the new node; the other, containing no data, is used otherwise. Second, on the manager side, the two matching receiving operations are implemented via non-blocking MPI_Recv routines, allowing for the use of MPI_Wait routines if necessary. This also makes it possible to implement the received(nodeData) test with an MPI_Test routine. These non-blocking receiving operations are initiated before entering the while loop, and re-initiated each time the manager receives and processes a message. Finally, to reduce the communication costs of the send(nodeData) operation, workers do not send back the configuration qnear. Rather, the manager keeps track of the data it sends to each worker, which also spares us from having to use UUIDs.
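The manager's bookkeeping for round-robin load balancing can be sketched as below; the class and method names are hypothetical, and a deque of idle worker ranks replaces the MPI status tracking:

```python
from collections import deque

class WorkerPool:
    """Round-robin load balancing over idle workers, as done by the manager."""
    def __init__(self, n_workers):
        self.idle = deque(range(n_workers))  # worker ranks, in round-robin order

    def choose_worker(self):
        # Return the next idle worker, or None if all are busy
        # (the real manager would instead block until a worker reports back).
        return self.idle.popleft() if self.idle else None

    def mark_idle(self, rank):
        # Called when a message arrives from a busy worker.
        self.idle.append(rank)

pool = WorkerPool(3)
order = [pool.choose_worker(), pool.choose_worker()]   # workers 0, then 1
pool.mark_idle(0)                                      # worker 0 reports back
order += [pool.choose_worker(), pool.choose_worker()]  # workers 2, then 0 again
```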
C. Implementation Framework
Among the various implementations of MPI, we have chosen OpenMPI\(^5\). Since the sequential implementation of RRT we wanted to parallelize was written in C++, and MPI being primarily targeted at C and Fortran, we had to use a C++ binding of MPI. We were also confronted with the low-level way in which MPI deals with communications, requiring the programmer to explicitly specify the size of each message. In our application, messages were to contain instances of high-level classes, whose attributes were often pointers or STL containers. Thus, we have decided to exploit the higher-level abstraction provided by the C++ library Boost.MPI\(^6\). Coupled with the Boost.Serialization library\(^7\), Boost.MPI enables processes to exchange instances of high-level classes in a straightforward manner, making the tasks of gathering, packing and unpacking the underlying data transparent to the programmer. Finally, we have used Qt's implementation of UUIDs\(^8\).
IV. EXPERIMENTS
A. Performance Metrics
When evaluating a parallel algorithm on a given problem, we want to know how much performance gain it achieves over its sequential counterpart. To measure this, the speedup \( S \) of a parallel algorithm is defined as the ratio of the runtime of its sequential counterpart to its own runtime: \( S(p) = T_S / T_P(p) \) [25], [26]. In theory \( S(p) \) is bounded by \( p \), but in practice super-linear speedup \( (S(p) > p) \) can be observed. The parallel runtime \( T_P(p) \) is measured on a parallel computer, using \( p \) processors, and the sequential runtime \( T_S \) is measured on one processor of the same computer. We define \( T_P(p) \) (resp. \( T_S \)) as the mean time needed to reach a solution, by averaging the runtime obtained over 100 executions of a parallel (resp. sequential) algorithm. We can then evaluate the scalability of a parallel algorithm, i.e. study whether the speedup increases in proportion to the number of processors. We can also measure the efficiency of a parallel algorithm \( (E(p) = S(p) / p) \), which is a decreasing function of \( p \) theoretically taking values in \([0; 1]\) [25], [26].
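The two metrics above amount to two one-line formulas; the numbers in the example are hypothetical, chosen only to illustrate the definitions:

```python
def speedup(t_seq, t_par):
    """S(p) = T_S / T_P(p): ratio of mean sequential to mean parallel runtime."""
    return t_seq / t_par

def efficiency(t_seq, t_par, p):
    """E(p) = S(p) / p: ideally 1.0, in practice decreasing with p."""
    return speedup(t_seq, t_par) / p

# Hypothetical numbers: a 100 s sequential run solved in 8 s on 16 processors.
S = speedup(100.0, 8.0)         # 12.5
E = efficiency(100.0, 8.0, 16)  # 0.78125
```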
B. Parallel Computer Architecture
The numerical results presented in this section have been obtained by running the algorithms on an HP cluster platform composed of 24 HP ProLiant DL160 G5 servers connected by a high-speed InfiniBand\(^®\) switch providing 10 Gbit/s of bandwidth. Each server includes two 64-bit quad-core Intel\(^®\) Xeon\(^®\) E5430 processors at 2.66 GHz, with 12 MB of L2 cache, sharing 7.79 GB of memory.
C. Motion Planning Problems Studied
We have evaluated the algorithms on three motion planning problems involving molecular models\(^9\). However, it is important to note that our algorithms are not application-specific and can be applied to any kind of motion planning problem. The studied problems involve free-flying objects (i.e. six degrees of freedom\(^10\)) and are characterized by different configuration-space topologies (cf. Fig. 1). BCL is a protein-ligand exit problem, where a ligand exits the active site of a protein through a pathway that is relatively short and large but locally constrained by several side-chains. CALB is a similar problem, but with a longer and very narrow exit pathway, i.e. more geometrically constrained than BCL. In GAB, a protein goes around another one in an empty space, thus involving the weakest geometrical constraints, but the longest distance to cover of all problems. Fig. 1 also presents the numerical results obtained when solving these problems with the sequential RRT.
\(^{5}\)http://www.open-mpi.org
\(^{6}\)http://www.boost.org/doc/libs/1_43_0/doc/html/mpi.html
\(^{7}\)http://www.boost.org/doc/libs/1_43_0/doc/html/serialization
\(^{8}\)http://doc.trolltech.com/4.3/quuid.html
\(^{9}\)The application we have used is the molecular motion planning toolkit we are currently developing [27].
\(^{10}\)To facilitate the algorithms’ evaluation, we have chosen not to increase dimensionality. Increasing it would mainly raise the cost of the nearest neighbor search, and it has been shown that, in high-dimensional search spaces, this operation can be performed most efficiently by using projections on a lower-dimensional space, without a significant loss in accuracy [28].
D. Speedup Achieved by the Parallel Algorithms
The first row of Fig. 2 presents the scalability achieved by the algorithms on each problem. Unsurprisingly, the scalability of the OR parallel RRT is strongly correlated with the variability of the tree size, measured by the ratio of the standard deviation to the mean of the number of nodes $N$ (cf. Fig. 1). This algorithm can achieve good results only on problems where this variability is large, such as CALB. When this variability is low, e.g. in GAB, it provides almost no improvement over the sequential RRT. The Manager-worker RRT shows a very poor speedup on all problems. This is partly explained by the fact that it involves much more communication than the other schemes. Each expansion attempt is preceded and followed by a communication between the manager and a worker, contrary to the Distributed RRT, in which communications between processes happen only after a new node is built. In the Distributed scheme, the total number of messages exchanged over the network increases linearly with $p$, but at each processor’s level, the number of messages is bounded by $N$. Thus, as long as the network bandwidth can withstand the communication load, the Distributed RRT can show a good scalability.
Although the speedup curves of the Distributed RRT flatten when $p$ increases, we would have to use many more processors to see a decrease, contrary to the other schemes. The best speedups it achieves on BCL, CALB and GAB are 10, 25.3 and 5.3 respectively, which correspond to rather low efficiencies of 0.2, 0.2 and 0.1. The greatest numbers of processors for which its efficiency remains above 0.5 are 14, 10 and 3 respectively. Several factors contribute to this low efficiency. (1) Runtime is quite short on these problems, especially BCL. When more and more processors are added, the communication load increases significantly, thus outweighing the reduction in computation time and leading to a smaller increase in speedup. (2) When an RRT is built collaboratively, a side-effect of adding more processors is to change the balance between exploration and refinement (these terms being defined as in [12]) in favor of more refinement. This translates into generating larger trees (i.e. the number of nodes $N$ increases with $p$), thus reducing the increase in speedup, especially on weakly constrained problems, such as GAB.
Classically, efficiency can be improved by increasing the problem difficulty. In doing so, we will also show that the Manager-worker RRT can perform better in some settings. Intuitively, it is worth using this scheme when the manager can delegate costly sub-tasks to its workers. However, in our settings the cost of the expansion step is quite low, as $q_{new}$ is generated by a simple linear interpolation between $q_{near}$ and $q_{rand}$, and motion validation is limited to collision detection. Expansion could be much more expensive, e.g. if a dynamic simulator was producing robot motions, or if some potential energy was computed in the case of molecular models. To test whether this could have an impact on the algorithms’ performance, we have run a controlled experiment in which we have artificially increased the cost of the expansion step to emulate different settings. To do so, during an expansion attempt we repeat $I$ times the collision detection routine in the `extend()` function. Tests were performed on the BCL problem, as it is characterized by a medium-level difficulty in its configuration-space topology. The second row of Fig. 2 shows the evolution of the algorithms’ speedup in relation to $I$, on 8, 16 and 32 processors. As $I$ goes up, we observe first a dramatic increase in the speedup of the
Manager-worker RRT, followed by a slower decrease due to the fact that the manager becomes a bottleneck waiting for busy workers. This higher speedup is enabled by the growth in computational load, which makes the communication load insignificant by comparison. The maximum speedup corresponds to an optimal use of this scheme, which depends on $p$: when $p$ is increased, this maximum rises and is reached for higher values of $I$. The best efficiency values obtained for $p = 8$, 16 and 32 are 1.1, 1.1 and 0.9 respectively. Similarly, though not so dramatically, an increase in the expansion cost also translates into a better use of the Distributed RRT, which is more visible as $p$ goes up. As expected, no benefit is observed for the OR parallel RRT, whose optimal use relates to variability in tree size and not to computational load.
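The artificially inflated expansion step of this controlled experiment can be sketched as follows: a linear-interpolation `extend()` whose collision check is repeated $I$ times. The helper names and the call counter are hypothetical, added only to make the cost inflation observable:

```python
import math

def extend(q_near, q_rand, step, is_collision_free, cost_factor=1):
    """Linear-interpolation extend; the collision check is repeated
    cost_factor (I) times to emulate a more expensive expansion step."""
    d = math.dist(q_near, q_rand)
    if d == 0.0:
        return None
    t = min(1.0, step / d)
    q_new = tuple(a + t * (b - a) for a, b in zip(q_near, q_rand))
    for _ in range(cost_factor):          # repeat the check I times
        if not is_collision_free(q_near, q_new):
            return None
    return q_new

calls = 0
def free(a, b):
    """Trivially permissive collision check that counts its invocations."""
    global calls
    calls += 1
    return True

# From (0,0) toward (3,4): distance 5, step 1 -> q_new = (0.6, 0.8), 5 checks.
q = extend((0.0, 0.0), (3.0, 4.0), step=1.0,
           is_collision_free=free, cost_factor=5)
```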
V. CONCLUSION
We have proposed three parallel versions of the RRT algorithm, designed for distributed-memory architectures using message passing: OR parallel RRT, Distributed RRT and Manager-worker RRT. Our OR parallel RRT is similar to the one in [14] and to those developed for shared memory [15], [16]. Our Distributed RRT and Manager-worker RRT are the counterparts for distributed memory of the AND (or embarrassingly parallel) RRT [14], [15]. None of these algorithms can be held as the best parallelization of RRT: it really depends on the studied problem. The Distributed RRT shows the most consistent results across experiments, but its efficiency does not scale well when the problem becomes more difficult. It could also suffer from memory scalability issues, since each process maintains its own tree. It is outperformed by the OR parallel RRT on problems yielding a great variability in tree size. It is also outperformed by the Manager-worker RRT in settings involving high expansion costs. The Manager-worker RRT shows the best efficiency scalability when the problem difficulty increases.
This paper was focused on a high-level parallelization of RRT. It could be extended by parallelizing its sub-routines, such as the nearest neighbor search. For that, we could use data decomposition and evaluate the speedup achieved depending on the search paradigm (brute force, kd-trees, etc.). Algorithms involving the construction of several independent RRTs can directly benefit from this work. For example, in the simple variant of the bidirectional-RRT where both trees are extended toward the same random configuration, processes can be separated in two building groups getting random configurations from an extra process. When the RRTs are not independently built, specific algorithms have to be developed.
As part of our future work, we plan to investigate approaches combining the three paradigms. We are currently extending our molecular motion planning application to allow for potential energy computation, in order to pursue our study started by artificially increasing the tree expansion costs. We also plan to better exploit the architecture of our cluster platform, by combining multi-threading and message passing approaches. Allowing the eight processes sharing the same memory to work on a common tree would mitigate the memory scalability issue of the Distributed RRT.
REFERENCES
Lightweight Detection of Android-specific Code Smells
The aDoctor Project
Palomba, Fabio; Di Nucci, Dario; Panichella, Annibale; Zaidman, Andy; De Lucia, Andrea
DOI
10.1109/SANER.2017.7884659
Publication date
2017
Document Version
Accepted author manuscript
Published in
Proceedings - 24th International Conference on Software Analysis, Evolution and Reengineering, SANER 2017
Lightweight Detection of Android-specific Code Smells: the aDoctor Project
Fabio Palomba\textsuperscript{1,2}, Dario Di Nucci\textsuperscript{2}, Annibale Panichella\textsuperscript{3}, Andy Zaidman\textsuperscript{1}, Andrea De Lucia\textsuperscript{2}
\textsuperscript{1}Delft University of Technology — \textsuperscript{2}University of Salerno — \textsuperscript{3}University of Luxembourg
f.palomba@tudelft.nl, ddinucci@unisa.it, annibale.panichella@uni.lu, a.e.zaidman@tudelft.nl, adelucia@unisa.it
Abstract—Code smells are symptoms of poor design solutions applied by programmers during the development of software systems. While the research community devoted a lot of effort to studying and devising approaches for detecting the traditional code smells defined by Fowler, little knowledge and support is available for an emerging category of Mobile app code smells. Recently, Reimann \textit{et al.} proposed a new catalogue of Android-specific code smells that may be a threat to the maintainability and the efficiency of Android applications. However, current tools working in the context of Mobile apps provide limited support and, more importantly, are not available for developers interested in monitoring the quality of their apps. To overcome these limitations, we propose a fully automated tool, coined \textit{aDoctor}, able to identify 15 Android-specific code smells from the catalogue by Reimann \textit{et al.} An empirical study conducted on the source code of 18 Android applications reveals that the proposed tool achieves, on average, 98\% precision and 98\% recall. We made \textit{aDoctor} publicly available.
Index Terms—Android-specific Code Smells; Detection Tool; Empirical Study;
I. INTRODUCTION
During software maintenance and evolution, a software system undergoes several changes to be adapted to new contexts or to be fixed with regard to urgent bugs [1]. In such a scenario, developers need to manage the complexity of changes as soon as possible in order to meet the unavoidable time constraints, possibly adopting sub-optimal design choices leading to the introduction of so-called \textit{technical debt} [2], i.e., \textit{“not-quite-right”} code that programmers write to meet a deadline or to deliver the software to the market in the shortest time possible.
One noticeable factor contributing to technical debt is bad code smells (shortly "code smells" or simply "smells"), originally defined by Fowler as symptoms of poor design or implementation choices applied by programmers during the development of a software system [3].
Researchers and practitioners widely recognized code smells as a harmful source of maintenance issues [4], [5], [6], [7], which result in a lower productivity [8], [9] and higher re-work [10], [11] for developers. For these reasons, researchers have been particularly active in the definition of techniques for detecting code smells [12], [13], [14], [15], as well as in the understanding of the effects of such code smells on non-functional attributes of source code [4], [5], [10], [16].
While the main focus of previous research was on the analysis of standard applications, little effort has been devoted to mobile apps [17]. In this context, a set of new peculiar bad programming practices of Android developers has been defined by Reimann \textit{et al.} [18]. These Android-specific smells may threaten several non-functional attributes of mobile apps, such as security, data integrity, and source code quality [18]. As highlighted by Hecht \textit{et al.} [19], these types of smells can also lead to performance issues.
The aforementioned reasons highlight the need for specialized detectors that identify code smells in mobile apps. Hecht \textit{et al.} [20] first faced the problem by devising PAPRIKA, a code smell detector for Android apps. However, the tool is able to detect a limited number of the Android-specific code smells defined by Reimann \textit{et al.} (just 4 out of the total 30), and is not publicly available.
In this paper we introduce \textit{aDoctor} (AnDrOid Code smell detectOR), a novel code smell detector that identifies 15 Android-specific code smells. The tool exploits the Abstract Syntax Tree of the source code and navigates it by applying detection rules based on the exact definitions of the smells provided by Reimann \textit{et al.} [18]. We also conducted an empirical study to evaluate the overall ability of our tool in recommending portions of code affected by a design flaw. In particular, we ran \textit{aDoctor} against the source code of 18 Android apps and compared the set of candidate code smells given by the tool with a manually-built oracle. According to the results, \textit{aDoctor} is able to suggest code smells with an average precision of 98\% and an average recall of 98\%. The tool has been also employed in the evaluation of the impact of a subset of Android-specific code smells (\textit{i.e.}, the ones supposed to be related to energy efficiency) on the energy consumption of Android apps [21].
Tool and Data Replication. Our detector, as well as the executable file and all the data used in the experiment are available on the \textit{aDoctor} website [22].
Structure of the paper. Section II describes the detection rules and the underlying architecture of the proposed tool, while Section III reports the design and results of the empirical study conducted to measure the performances of \textit{aDoctor}. Finally, Section IV concludes the paper.
II. THE aDOCTOR PROJECT
The \textit{aDoctor} project is built on top of the Eclipse Java Development Toolkit (JDT)\textsuperscript{1}. While the catalogue by Reimann \textit{et al.} [18] proposes a set of 30 design flaws related to both implementation and UI design, in this demo we focus our attention solely on the smells characterizing a problem in the source code. Therefore, our tool supports the identification of 15 Android-specific code smells. In the following, we present the detection rules applied by ADOCTOR, as well as the underlying architecture supporting the identification.

\textsuperscript{1}\url{http://www.eclipse.org/jdt/}
A. Detecting Android-specific Code Smells
This section reports the definition of each smell supported by ADOCTOR, as well as the rule followed for its detection.
Data Transmission Without Compression (DTWC). The smell arises when a method transmits a file over a network infrastructure without compressing it, causing an overhead of communication [18]. ADOCTOR detects the smell if a method performs an HTTP request involving an instance of the class File without using a compression library such as ZLIB\textsuperscript{2} or the Apache HTTP Client\textsuperscript{3}.
Debuggable Release (DR). In Android, the attribute android:debuggable of the AndroidManifest file is set during the development for debugging an app. Leaving the attribute true when the app is released is a major security threat since every external app can have full access to the source code. In this case, the detector simply parses the AndroidManifest file looking for the android:debuggable properties. If it is explicitly set to true, the smell is detected.
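A minimal sketch of such a manifest check (the class name and the regex-over-text approach are our own simplification for illustration; ADOCTOR works on the parsed AndroidManifest file):

```java
import java.util.regex.Pattern;

// Hypothetical, simplified Debuggable Release rule: flag a manifest
// that explicitly sets android:debuggable="true".
public class DebuggableReleaseCheck {
    private static final Pattern DEBUGGABLE =
            Pattern.compile("android:debuggable\\s*=\\s*\"true\"");

    public static boolean isSmelly(String manifestXml) {
        return DEBUGGABLE.matcher(manifestXml).find();
    }
}
```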
Durable Wakelock (DW). A Wakelock is the mechanism allowing an app to keep the device on in order to complete a task. However, when such a task is completed, the lock should be released to reduce battery drain [18]. In Android, the class PowerManager.WakeLock is in charge of defining the methods to acquire and release the lock. If a method using an instance of the class WakeLock acquires the lock without calling release, a smell is identified.
Inefficient Data Format and Parser (IDFP). When analyzing XML or JSON files, the use of TreeParser slows down the app, and thus it should be avoided and replaced with other more efficient parsers (e.g., StreamParser) [18]. In this case, ADOCTOR identifies the smell by evaluating whether a method uses the TreeParser class.
Inefficient Data Structure (IDS). The mapping from an integer to an object through the use of a HashMap<Integer, Object> is slow, and should be replaced by other efficient data structures, such as the SparseArray [18]. Therefore, methods using an instance of HashMap<Integer, Object> are identified by ADOCTOR as smelly.
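The smelly pattern looks like this in plain Java (class and field names are invented for illustration; SparseArray itself is part of the Android SDK and is not shown here):

```java
import java.util.HashMap;
import java.util.Map;

// Smelly on Android: every put/get autoboxes the int key, and the map
// allocates an Entry object per mapping. SparseArray<String> avoids both.
public class IntKeyedCache {
    private final Map<Integer, String> cache = new HashMap<>();

    public void put(int id, String value) { cache.put(id, value); }
    public String get(int id) { return cache.get(id); }
}
```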
Inefficient SQL Query (ISQLQ). In Android, the use of a SQL query is discouraged as it introduces overhead, while other solutions should be preferred (e.g., using webservices) [18]. If a method defines a JDBC connection and sends an SQL query to a remote server, the smell is identified.
Internal Getter and Setter (IGS). In Android development, the use of accessor methods (i.e., getters and setters) is expensive and, thus, internal fields should be accessed directly [18]. All the methods accessing other objects using getters and/or setters are identified by ADOCTOR as affected by this smell.
Leaking Inner Class (LIC). Reimann et al. defined this smell as a “non-static nested class holding a reference to the outer class” [18]. This could lead to a memory leak. Analyzing the files having nested classes, ADOCTOR identifies this smell by counting the relationships that the outer class has with the nested classes. If the counter is higher than 1, a Leaking Inner Class is detected.
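The mechanics behind the smell can be seen in plain Java (names invented): the compiler gives a non-static nested class that uses its enclosing instance a hidden field pointing at the outer object, which is exactly the reference that can leak.

```java
// A non-static inner class carries a synthetic reference to its
// enclosing instance; a static nested class does not.
public class Outer {
    private final String name = "outer";

    public class LeakyInner {
        // Reads outer state, so the compiler must capture Outer.this.
        public String outerName() { return name; }
    }

    public static class SafeNested { } // no hidden outer reference
}
```

Reflection makes the hidden reference visible: `Outer.LeakyInner.class.getDeclaredFields()` reports the compiler-generated outer-instance field, while `SafeNested` declares none.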
Leaking Thread (LT). In Android programming a thread is a garbage collector (GC) root. The GC does not collect the root objects and, therefore, if a thread is not adequately stopped it can remain in memory for the whole execution of the application, wasting the memory of the app. If an Activity starts a thread and does not stop it, this is considered a design flaw [18]. ADOCTOR detects this smell if a method of an Activity class starts a thread without stopping it through the stop method.
Member Ignoring Method (MIM). Non-static methods that do not access any internal properties of the class should be made static in order to increase their efficiency [18]. In this case, our detector exploits the references of a method, and if it does not reference any internal fields, a smell is identified.
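For illustration (invented names), the smell and its usual resolution look like:

```java
public class Distance {
    private String unit = "km";

    // Smelly: reads no instance state, so it could (and should) be static.
    public double scaleSmelly(double value, double factor) {
        return value * factor;
    }

    // Resolved: the same logic as a static method.
    public static double scale(double value, double factor) {
        return value * factor;
    }

    // Fine as an instance method: it uses the field 'unit'.
    public String label(double value) { return value + " " + unit; }
}
```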
No Low Memory Resolver (NLMR). An Android developer can define the behavior of the app when it runs in the background by overriding the method Activity.onLowMemory [18]. This method should be used to clean caches or unnecessary resources. If it is not defined, the app may exhibit abnormal memory use. Consequently, if a mobile app does not contain the method onLowMemory, ADOCTOR detects a smell.
Public Data (PD). This smell arises when private data is kept in a store that is publicly accessible by other applications, possibly threatening the security of the app [18]. In Android, this is done by setting the context of the class as private, using the Context.MODE_PRIVATE command. Classes that do not define the context or define the context as non-private are detected by ADOCTOR as smelly.
Rigid Alarm Manager (RAM). The AlarmManager class makes it possible to execute operations at specific moments. Obviously, an AlarmManager-triggered operation wakes up the phone, possibly threatening the energy and memory efficiency of the app. It is recommended to use the AlarmManager.setInexactRepeating method, which ensures that the system is able to bundle several updates together [18]. Therefore, a code smell is identified by our detector if a class using an instance of AlarmManager does not call the method setInexactRepeating.
\textsuperscript{2}http://www.zlib.net

\textsuperscript{3}https://hc.apache.org

Slow Loop (SL). The standard version of the for loop is slower than the for-each loop [18]. Therefore, Android developers should always use the enhanced version of the loop to improve the efficiency of the app. Our detector identifies as smelly all the methods using the classic for loop.
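The two loop styles side by side (method names invented); both compute the same result, only the second uses the enhanced form the rule asks for:

```java
public class LoopStyles {
    // Smelly variant: classic indexed for loop.
    public static int sumIndexed(int[] values) {
        int sum = 0;
        for (int i = 0; i < values.length; i++) sum += values[i];
        return sum;
    }

    // Preferred variant: enhanced for-each loop.
    public static int sumEnhanced(int[] values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }
}
```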
**Unclosed Closable (UC).** A class that implements the java.io.Closeable interface is supposed to invoke the close method to release resources that an object is holding [18]. If the class does not call such a method, aDOCTOR identifies a smell.
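A minimal sketch (names invented) of a Closeable used correctly; try-with-resources guarantees the close call that the smell is about:

```java
import java.io.Closeable;

// A toy Closeable that records whether close() was invoked.
public class TrackedResource implements Closeable {
    public boolean closed = false;

    @Override
    public void close() { closed = true; }

    // Correct usage: try-with-resources always calls close().
    public static boolean useCorrectly() {
        TrackedResource handle;
        try (TrackedResource r = new TrackedResource()) {
            handle = r; // work with the resource here
        }
        return handle.closed;
    }
}
```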
### B. aDoctor Architecture and Its Inner-Working
Figure 1 depicts the architecture of aDOCTOR. The Presentation Layer is composed of the classes implementing two types of user interfaces, i.e., command line and graphical user interfaces. The tool is executable via command line through the following command:
```
java -cp aDoctor.jar RunAndroidSmellDetection \
    <project-path> <output-path> <smells>
```
where `<project-path>` is a string representing the path to the directory containing the source code of the Android app to analyze, `<output-path>` is the path to the file where the code smell candidates will be printed, and `<smells>` is a string defining the code smells to analyze. This type of interface allows our tool to be run programmatically and be employed in mining software repository studies. In addition, we provide a graphical user interface.
The configuration view in Figure 2 allows the software engineer to set the parameters needed for running the analysis, i.e., the folder where the project is located and the CSV file where to save the candidate smells. Moreover, the software engineer can select the smells that she is interested in. Once the start button is pressed, the computation starts. When completed, the results are shown in a second view, depicted in Figure 3. The candidate smells can be filtered by class name, and in every case the results are saved in the `<output-path>` file.
The Application Logic Layer is the core of the ADOCTOR project and it contains all the subsystems implementing the detection rules of the Android-specific smells described in the previous section, as well as the classes that output the candidate smells. The layer relies on the Eclipse JDT APIs in order to (i) extract the Abstract Syntax Tree of the source classes contained in the app under analysis, and (ii) navigate the Abstract Syntax Tree and compute the detection rules. The single smell detection rules are implemented as separate classes of the Android-specific Smells Detector subsystem. As for the Output generator subsystem, it is responsible for executing the detection process by calling the classes of the Android-specific Smells Detector subsystem (see the dependence between the two subsystems in Figure 1), and for outputting the candidate code smells found. Specifically, the output is represented by a CSV file where:
• each line of the CSV file represents a code element of the analyzed app;
• the first column of each line specifies the granularity of the code element (i.e., class or method);
• columns from #2 to #n in each line report a boolean value indicating the presence/absence of the 15 Android-specific code smells (e.g., column #2 will be true if a Data Transmission Without Compression has been detected, false otherwise).
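As a purely hypothetical illustration of this layout (the boolean values are invented, and the real column order is fixed by the tool), one line for a class-level element could look like:

```
class,true,false,false,false,false,false,false,true,false,false,false,false,false,true,false
```

Here the first field is the granularity and the remaining 15 booleans mark the presence of each smell, with column #2 corresponding to Data Transmission Without Compression.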
III. EVALUATION
The goal of the empirical study is to quantify the ability of ADOCTOR in recommending portions of source code affected by a design flaw, with the purpose of investigating its effectiveness during the detection of Android-specific code smells in Android applications. Specifically, our research question is the following:
RQ1: What are the precision and recall scores of ADOCTOR in detecting Android-specific code smells?
The context of the study consists of a set of 18 Android apps belonging to different categories, and having different scope and size. Due to space limitations, the complete list of apps considered in the study is available on the ADOCTOR website [22].
A. Empirical Study Design
To answer RQ1 we ran ADOCTOR on the apps in our context. To evaluate its precision and recall, we needed an oracle reporting the actual code smell instances contained in the considered Android apps. Since there is no annotated set of Android-specific code smells available in the literature, we built our own oracle. To this aim, we asked a Master's student from the University of Salerno to manually analyze the apps taken into account in order to extract the methods affected by each of the considered smells. Starting from the definition of the 15 smells, the student manually analyzed the source code of the latest version of the apps, looking for instances of those smells. This process took approximately 180 man-hours of work. Then, a second Master's student (still from the University of Salerno) validated the produced oracle, to verify that all affected code components identified by the first student were correct. Just 14 of the instances classified as smelly by the first student were classified as false positives by the second student. After a discussion between the two students, 8 of these 14 instances were definitively classified as false positives (and, therefore, removed from the oracle). Note that we cannot guarantee the completeness of the oracle. Moreover, to avoid bias the students were not aware of the experimental goals and of the specific algorithms used by ADOCTOR to identify smells. The oracle is available on the ADOCTOR website.
Once the set of actual smells was ready and the set of candidate smells identified by ADOCTOR was available, we compared the two sets using two widely adopted Information Retrieval (IR) metrics, i.e., precision and recall [23].
To have an aggregate indicator of precision and recall, we also report the F-measure, defined as the harmonic mean of precision and recall.
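The three metrics can be written out directly (a small helper class of our own, with tp, fp, fn denoting true positives, false positives and false negatives; the 98/2 figures in the test are invented to mirror the reported averages):

```java
// Standard IR metrics used in the evaluation.
public class IrMetrics {
    public static double precision(int tp, int fp) {
        return (double) tp / (tp + fp);
    }

    public static double recall(int tp, int fn) {
        return (double) tp / (tp + fn);
    }

    // F-measure: harmonic mean of precision and recall.
    public static double fMeasure(double p, double r) {
        return 2.0 * p * r / (p + r);
    }
}
```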
Due to space limitations, we report the overall precision and recall obtained analyzing each smell type on the 18 apps. The results achieved on the single apps are available on the ADOCTOR website [22].
B. Analysis of the Results
Over all the 18 apps considered, ADOCTOR detects 1,444 code smell instances (on average, 80 per app). The most frequent ones are the Member Ignoring Method (467 instances), Slow Loop (378 instances), and Data Transmission Without Compression (266 instances) smells. Since the analyzed apps contain on average 121 classes, our results reveal that the Android-specific smells are quite diffused and, thus, the phenomenon is worth investigating. Note that the complete results on the distribution of code smells are available on the ADOCTOR website [22].
Table I reports, for each Android-specific smell, the results achieved over the set of 18 apps taken into account. The results clearly show that ADOCTOR is able to correctly identify almost all the code smell instances present in the Android apps. Only in two cases the results do not reach 100% precision and recall, i.e., Data Transmission Without Compression and Inefficient SQL Query. We manually analyzed these cases in order to understand the reasons behind the results, finding that the detector missed some instances because the classes affected by such smells used compression libraries different from the ones considered in the detection rules. Indeed, both smells are related to the communication with remote servers. To this end, Android apps usually rely on widely spread libraries such as ZLIB or the APACHE HTTP CLIENT. However, there are some cases where other libraries are employed and, therefore, the detector is not able to correctly identify the design flaws. For instance, ADOCTOR identifies a false positive Data Transmission Without Compression instance in the class AndroidmaticKeyerActivity, belonging to the package com.templaro.opsiz.aka of the ANDROIDOMATIC KEYER app. This class relies on the SILICOMPRESSOR library\textsuperscript{4} to compress files before sending them, but ADOCTOR does not recognize the compression because the method calls made by the class do not refer to the libraries it considers.

<table>
<thead>
<tr><th>Code Smell</th><th>Precision</th><th>Recall</th><th>F-Measure</th></tr>
</thead>
<tbody>
<tr><td>DTWC</td><td>87%</td><td>89%</td><td>88%</td></tr>
<tr><td>DR</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>DW</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>IDFP</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>IDS</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>ISQLQ</td><td>85%</td><td>88%</td><td>86%</td></tr>
<tr><td>IGS</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>LIC</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>LT</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>MIM</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>NLMR</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>PD</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>RAM</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>SL</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>UC</td><td>100%</td><td>100%</td><td>100%</td></tr>
<tr><td>Average</td><td>98%</td><td>98%</td><td>98%</td></tr>
</tbody>
</table>
While in this case ADOCTOR fails in the identification of the smell, it is worth noting that we configured our detector to work with the most common libraries used by Android developers. Moreover, the issue reveals a potential way to improve the detection accuracy of the tool. Indeed, as support for other libraries is implemented, the performance of the tool will improve.
The discussion is different for the other smells, since ADOCTOR always reaches 100% of F-Measure. This is due to the fact that the detection rules described in Section II are effective in capturing all the small programming issues introduced by mobile developers. In conclusion, we can affirm that the proposed tool is effective in terms of accuracy of the recommendations.
IV. DEMO REMARKS
In this demo we presented ADOCTOR, a tool supporting the detection of 15 Android-specific code smells from the catalogue by Reimann et al. [18]. To identify design flaws, the tool navigates the Abstract Syntax Tree of a class and applies detection rules implementing the exact definitions provided by Reimann et al.
We conducted an empirical study involving 18 Android apps to validate the proposed tool. The results showed an average precision and recall of 98%, clearly highlighting the ability of our tool to correctly identify design flaws in the source code. For two of the considered smells, i.e., Data Transmission Without Compression and Inefficient SQL Query, the average F-Measure is slightly lower than the others, but this is due to the fact that sometimes the apps use compression libraries different from the most popular ones.
We plan to integrate the code smell detector in the most common Integrated Development Environment (IDE) used by Android developers, i.e., Android Studio. Moreover, we plan to extend the functionalities of ADOCTOR in order to allow the extraction and the automation of meaningful refactoring operations aimed at removing code smells from the source code.
\textsuperscript{4}\url{https://github.com/Tourenathan-G5organisation/SiliCompressor}

REFERENCES
Captcha for Visually Impaired People: A Review
Sakshi Baheti¹, Hardik Shah¹, Rijul Sherathia¹, Prakash Waghamode²
¹Student, ²Professor, School of Computer Engineering and Technology, MIT World Peace University
Kothrud, Pune 411038
Abstract - Nowadays Internet users come from different ages and groups. Disabled people also use the Internet, and some websites are especially created for them. Many Internet sites offer services for human users, but unfortunately some computer programs are designed to abuse these services. As a result, systems named CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) [3] have been introduced to tell human users and computer software apart. In this paper, a new CAPTCHA method is introduced which blind people can use. In this method, a simple shape-based problem is created according to predefined patterns and converted to speech using a Text-To-Speech (TTS) system. Then the sound is played for the user and he/she must draw the answer to the question. Because answering this problem requires recognizing the speech, understanding the problem, and drawing the answer, only a human user can answer this question and current computer programs are unable to solve it. In addition, answering the question is easy for blind people, because the question consists of a number of natural language sentences and the answer is a number, which can be entered easily.
Key Words: Web application, captcha, machine learning, visually impaired, motion capture, natural language processing, text-to-speech
1. INTRODUCTION
Today lots of daily activities such as education, shopping and mailing are done through the Internet. With the rapid growth of the Internet and easy access to it, a great deal of private and personal information is available on the web.
Various methods have been presented to overcome the above problem. The goal of these methods is distinguishing human users from computer programs. These methods are carried out automatically by computer programs, since examination of a large number of registration forms by human workers would need a great deal of time and expense. CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) are systems which are used to tell humans and machines apart automatically. These systems are based on Artificial Intelligence (AI) topics [3]. They are similar to the Turing test, but they differ in that the judge is a computer. The goal of these systems is to ask questions which human users can easily answer, while current computer programs cannot. CAPTCHA systems also have other applications, such as preventing spam. Considering today's scenario, the Internet is not only for special groups of people; people of any age and from different groups are using it. There are many websites for children on the Internet, and children use the Internet for various activities such as entertainment and education. Elderly people communicate and chat with their children and relatives using the Internet. In addition to regular people, disabled people also use the Internet. Deploying only visual captchas creates a considerable obstacle for certain groups of users, such as visually impaired, colour-blind, and near-sighted users. This necessitates the need for more accessible captchas.
Current CAPTCHAs rely on superior human perception, leading to CAPTCHAs that are predominantly visual and, therefore, unsolvable by people with vision impairments. Audio CAPTCHAs that rely instead on human audio perception were introduced as a non-visual alternative but are much more difficult for web users to solve. But the conventional Audio CAPTCHA alone cannot solve this problem, thus we need a better version of the audio captcha.
We need to keep in mind a few things while building a captcha for the blind: [3]
- Must have audio output
- Must be multilingual
- Questions must be easy
- Response ways should be effective
Audio CAPTCHAs have been shown previously to be difficult for blind web users. Sauer et al. found that six blind participants had a success rate of only 46% in solving the audio version of the popular reCAPTCHA, and Bigham et al. observed that none of the fifteen blind high school students in an introductory programming class were able to solve the audio CAPTCHA guarding a web service required for the course.
Thus, with these observations in mind, we have thought of a modified and effective way of presenting the audio captcha. It will be multilingual for ease of understanding, will use simple questions for clarity, will rely on ML- and motion-based challenges for security, and will be supported by the latest backend technologies for faster response.
2. LITERATURE REVIEW
Paper [2] states that CAPTCHA methods can be generally divided into three groups: OCR-Based, Visual Non-OCR-Based and Non-Visual. Optical Character Recognition (OCR) programs are used for automatically reading the texts, but
they have difficulty reading texts printed with a low quality and can only recognize high-quality typed texts that use common standard formats. However, the defects of the OCR systems can be taken as an advantage by changing the picture of a word so that it can only be recognized by a human and not by any OCR system.
2.1 OCR - Based Captcha
In OCR-Based methods [2], the image of a word with distortion and various pictorial effects is shown to the user and he/she is asked to type that word. Due to the presence of various pictorial effects, the computer will encounter problems in recognizing the word and only a human user can recognize it. Examples of these methods are: Persian/Arabic Baffle Text and Gimpy. However, these methods usually result in user dissatisfaction. On the other hand, efforts have been made to break these methods.
Most CAPTCHAs on the web today exhibit the following pattern: the solver is presented text that has been obfuscated in some way and is asked to type the original text into an answer box. The technique for obfuscation is chosen such that it is difficult for automated agents to recover the original text but humans should be able to do so easily. Visually this most often means that graphic text is displayed with distorted characters.
Text-based Captcha [3] is the most widely used captcha in web applications. It is an image of distorted text or numbers with some background noise or clutter added. The content, either text or alphanumeric, is generated randomly. The user is asked to identify the distorted letters or numbers displayed in the captcha challenge and to enter them. It requires a large question bank. It is easy for sighted users to solve, but it becomes very difficult for blind users to read.
Visual CAPTCHAs, in paper [1], are perceived as a whole and can be viewed even when focus is on the answer box. While focusing on the answer box, solvers can continue to look at visual CAPTCHAs, edit the answer that they provided, and verify their answer. They can repeat this process until satisfied without pressing any keys other than those that form their answer. Errors primarily arise from CAPTCHAs that are obfuscated too much or from careless solvers.
Image-based CAPTCHAs [7] are designed by using various image objects. The user has to recognize a specific image to pass the test. Sometimes the images are provided with tags and the user is asked to identify the correct image and enter appropriate words in the box given or asked to click on a specific image to prove as a human user. The advantage of image-based CAPTCHA is that pattern recognition is a very hard AI problem and thus it becomes difficult to break this test using pattern recognition technique.
2.2 Non-OCR Based Captcha
In [4], there are Non-OCR-based methods, which are more comfortable for users than OCR-based methods. These methods are mainly based on multimedia features such as pictures and sounds, and usually use approaches like small puzzle games. Examples of these methods are Pix and Collage CAPTCHA. One of the main categories of non-visual methods is sound-based CAPTCHA. In these methods, a word is spoken and the user must type it. They rely on the weaknesses of speech-recognition systems and are usually used alongside other CAPTCHA methods, especially OCR-based ones, for disabled people.
However, some websites are designed especially for disabled persons. These websites also need protection against computer programs that try to abuse their resources, yet common CAPTCHA systems are usually more difficult for disabled people than Non-OCR-based methods designed with them in mind.
According to [7], a puzzle-based CAPTCHA can generally be either a graphics-based puzzle or a mathematical puzzle. In picture-based puzzles, a picture is divided into pieces, which are presented in random order; each piece has a piece number, and the user must arrange the pieces by number to reconstruct the original picture. Mathematical puzzles, by contrast, are highly effective and can be incorporated into the login process and online registration forms to ensure legitimate access: the user must solve the given math puzzle to be permitted to access web content.
Audio-based CAPTCHA methods [8] are usually used as a complement for text-based CAPTCHAs. Many popular websites such as eBay, yahoo and Microsoft use both visual and audio CAPTCHAs. An audio CAPTCHA generally picks a random sequence of letters or numbers; renders them into a sound clip; includes some level of distortion; and then presents the recording to the users. The user is asked to type the contents of the recording. In one type of audio CAPTCHAs, known as spoken CAPTCHA, the users are required to repeat the test instead of typing it. This feature makes this CAPTCHA also suitable for blind users.
Paper [8] also describes motion-based CAPTCHA in which a movie or animation is presented to the users and they are asked to recognize an action, animated word or image in the movie. This CAPTCHA is convenient for users. In addition, since the required processing time in this CAPTCHA is relatively high, it is more secure. However, the high loading time can be a disadvantage from a usability viewpoint. Another disadvantage is requiring a large database of animations. Finally, the term “hybrid CAPTCHA” has been selected for a CAPTCHA that is a combination of different types or designed for special purposes.
2.3 Visual Non-OCR Based & Non-Visual Captcha
Visual Non-OCR-based and non-visual methods are easier for users to pass than OCR-based ones [2]. Visual Non-OCR-based methods exploit the drawbacks of computer-vision systems, such as their difficulty in identifying the type of an object in an image. For example, in Collage CAPTCHA, an image of some objects with distortion is shown to the user, who is asked to click on a certain object. As noted above, sound-based CAPTCHAs are the main category of non-visual methods; they are based on the weaknesses of speech-recognition systems and are usually used alongside other CAPTCHA methods, especially OCR-based ones. Disabled people usually have problems with CAPTCHA methods, because most are designed for and tested by non-disabled persons.
Paper [4] studied and examined Drawing CAPTCHA and Collage CAPTCHA, two Non-OCR-based methods that can be used by disabled people.
Drawing Captcha method is for devices like PDA (Personal Digital Assistant) which uses a stylus. In this method, numerous dots are drawn on a screen with noisy background and the user is asked to connect certain dots to each other.
In the Collage CAPTCHA method, images of several different objects are chosen. Effects such as rotation are applied to the images, and they are merged into a single image. This image is shown to the user, who is asked to click on a certain object (for example, on the image of the apple). Collage CAPTCHA requires a database of labeled images; creating this database is expensive and time-consuming. It is an easy CAPTCHA method for users, because the user only has to find the object whose name is shown. However, this method may have a high rate of random passing: if the images of different objects can be easily separated and the number of different objects is N, the probability of passing the test with a random answer is \(1/N\). Disabled people such as hearing- or sight-impaired persons can use this method because it uses images without any distortion. In addition, mobility-impaired persons can use it easily because it requires only one click.
Paper [4] suggests a Non-OCR-based CAPTCHA method designed for blind people. In this method, a simple mathematical problem is created according to predefined patterns and converted to speech using a Text-To-Speech (TTS) system. The sound is then played for the user, who must answer the question. A computer would require the following abilities to answer it:
- Recognition of the question using Speech Recognition systems.
- Understanding the meaning of the question.
- Solving the problem and answering the question.
Since it is difficult for computer programs to succeed in doing any of these operations, only a human user can answer the question.
Paper [2] proposes a new structure for CAPTCHA systems. In this method, a word is selected and converted into an audio file using a Text-To-Speech (TTS) system. The audio is played for the user, who is asked to repeat the word he/she hears. The response is then analyzed by two modules: the first identifies synthesized voices, and hence computer users; the second is a speech recognizer that checks that the response is the desired word, preventing attacks with recorded human voices. The main contribution of this system is that it exploits the drawbacks of computers in both speech recognition and speech synthesis, whereas previous sound-based methods use only speech-recognition drawbacks. Its tests are therefore easier for human users and more difficult for computers than previous sound-based CAPTCHAs.
3. RESEARCH GAPS
On reviewing papers [1], [3], and [4], we identified a few important features that are not yet widely present; we plan to highlight and develop these features when we implement the project.
In paper [3], we observed various methods used for generating CAPTCHAs on websites and forms. The type which grabbed our attention was the audio CAPTCHA, as it is an effective method to present to visually impaired people. After reading through it, we understood that the conventional audio CAPTCHA does not account for the capabilities of ML and AI, which are at their peak today and still growing. It is extremely simple for an ML algorithm to capture the audio of the CAPTCHA shown on a site, convert it into text, and feed it into the input field. Thus, even though this method is suitable for visually impaired people, we cannot rely on old audio CAPTCHA methods to differentiate between human and robot.
In paper [4], we noticed a really effective and secure way of using CAPTCHA: the Drawing CAPTCHA. This method is for devices like PDAs (Personal Digital Assistants) which use a stylus. Numerous dots are drawn on a screen with a noisy background, and the user is asked to connect certain dots to each other. Given the problems computers face in separating the dots from the noise, only a human user can solve it; the added noise and the connect-the-dots interaction make it easier for the system to identify when a bot is attempting the CAPTCHA. After our analysis, we decided to modify this CAPTCHA so that it is friendly for visually impaired people as well, while maintaining the level of security it provides. With a combination of ML image recognition and motion capture in JavaScript, we can likely build a modified version of this drawing-CAPTCHA approach.
Paper [1] revolved more around research that identified the groups of people targeted by the CAPTCHA, including visually impaired people and people with normal eyesight. Various surveys were conducted to understand how the target audience reacts to different CAPTCHA methods. Participants were first presented with a questionnaire asking about their experience with web browsing, their experience with CAPTCHAs and the level of difficulty they present, as well as demographic information. They were then asked to solve 10 visual CAPTCHAs and 10 audio CAPTCHAs (for sighted participants) or 10 audio CAPTCHAs (for blind participants). Each participant solved one problem randomly drawn from each CAPTCHA type, and the CAPTCHA types were presented in random order to help avoid ordering effects.
From the results of the above survey, we understood that blind people show much more frustration towards visual CAPTCHAs than audio CAPTCHAs, which is expected. This tells us that the audio method is suitable for the blind, so we need to maintain the same ease of use in audio CAPTCHAs while modifying them to be safer and more effective. We also saw an opportunity to explore features showcasing the multilinguality of the CAPTCHA audio.
Along with this, we also noticed that no proper instructions were provided to visually impaired users on solving the CAPTCHA. We propose to build our solution so that the target audience gets a clear, simple explanation of the quick steps they need to follow to solve the CAPTCHA, available in their mother tongue.
All the previous work on this topic marked turning points in the advancement of CAPTCHA, but with the growth of ML and AI we need to dig deeper and find ways for these methods to remain on today's websites while being more secure and more effective for visually impaired users as well as the general public.
4. PROPOSED ARCHITECTURE
4.1. OBJECTIVES
- To assist visually impaired users in using and solving CAPTCHAs on web applications.
- To combine the strengths of NLP (Natural Language Processing) to build multilingual CAPTCHAs for better understanding.
- To classify the user as bot or human.
- To track the user's keystrokes.
- To track the user's cursor movements, capturing the motion.
- To accept user input as speech if required.
- To build multiple-choice CAPTCHAs for ease of use, so that users can pick the preferred CAPTCHA option for recognition.
4.2. MODULES
This application aims to simplify CAPTCHA authentication for visually impaired people. The system will provide a language-selection feature so the user can choose a language they are comfortable with. A web application will be built using Flask on the backend, with HTML and CSS plus the Bootstrap library for a user-friendly UI. We will also use Natural Language Processing features such as text-to-speech and speech-to-text conversion, and multiple corpora to provide a multilingual experience for our target audience.
The proposed application uses machine-learning algorithms for image classification to help distinguish between a bot and a human. One of the questioning approaches will be a drawing CAPTCHA, which explicitly uses motion capture and then feeds the captured image to the backend ML model [5], which classifies and verifies it.
A simple use case diagram explains all the scenarios when a user interacts with the application in Fig: 1.
Module 1: Captcha Generation
In the Captcha Generation module, the user receives different types of CAPTCHA questions to answer; if answered correctly, the ML model classifies the response as human and the app can redirect to the required form or page. The app picks questions from our database and presents them on screen. For a sighted user it is easy to read the question and fill in the answer; for the visually impaired we keep the same level of simplicity but deliver the question as audio, as discussed in the next module.
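A minimal sketch of this flow, assuming a hypothetical in-memory question bank (a real deployment would query the project's database instead):

```python
import random

# Hypothetical question bank; a real system would load this from a database.
QUESTION_BANK = [
    {"question": "What is three plus four?", "answer": "7"},
    {"question": "How many days are in a week?", "answer": "7"},
    {"question": "What color is the sky on a clear day?", "answer": "blue"},
]

def pick_question():
    """Select a random CAPTCHA question to present to the user."""
    return random.choice(QUESTION_BANK)

def verify_response(entry, user_input):
    """Return True when the user's answer matches, ignoring case and spaces."""
    return user_input.strip().lower() == entry["answer"]

q = pick_question()
accepted = verify_response(q, " " + q["answer"].upper())
```

On a correct answer the Flask app would redirect to the protected page; on a wrong one it would serve a fresh question.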
Module 2: Audio output with multilingual options
Since visually impaired users cannot read the CAPTCHA question directly from the screen, we need audio to convey the exact question to them.
The audio output can be generated using text-to-speech conversion in NLP [6]. NLP also provides various methods, and using different corpora we can generate audio output in different languages. The main purpose of this functionality is to widen the project's reach: for various geographies, people can hear the audio output in their mother tongue.
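A sketch of the multilingual step, assuming a small hand-written translation table (a real system would draw on NLP corpora or a translation service); the selected text could then be passed to a TTS library such as gTTS, which is not called here so the example stays self-contained:

```python
# Hypothetical translations of one CAPTCHA prompt; a real system would
# cover many prompts and languages via corpora or a translation service.
PROMPT_TRANSLATIONS = {
    "en": "What is three plus four?",
    "hi": "तीन जमा चार कितना होता है?",
    "mr": "तीन अधिक चार किती होतात?",
}

def prompt_for(language_code):
    """Return the CAPTCHA prompt in the requested language,
    falling back to English when the language is unavailable."""
    return PROMPT_TRANSLATIONS.get(language_code, PROMPT_TRANSLATIONS["en"])

# The returned text would be passed to a text-to-speech engine, e.g.:
#   gTTS(prompt_for("hi"), lang="hi").save("prompt.mp3")  # requires network
```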
Module 3: Drawing captcha and keystroke detection
As we have audio output for blind users, we must also make the inputs simple for the target audience to provide. Audio output will tell users which keys to press to perform a specific function or to navigate around. We also have one CAPTCHA question type where users are asked to draw a simple shape. To capture the shape, we use motion capture in JavaScript, recording the cursor movement. Once captured and converted to an image, it is sent to the backend ML model, which compares it with existing test data and classifies whether it was a human or bot response. This helps maintain the security of the system while keeping it simple.
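The server-side half of this step can be sketched as rasterizing the captured cursor trail onto a small grid before classification (the ML model itself is out of scope here; the coordinate format and grid size are illustrative assumptions):

```python
def rasterize_trail(points, width=28, height=28):
    """Convert a list of (x, y) cursor positions, normalized to [0, 1],
    into a binary width x height grid suitable for an image classifier."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        grid[row][col] = 1
    return grid

# A horizontal stroke across the middle of the canvas:
trail = [(i / 10, 0.5) for i in range(11)]
grid = rasterize_trail(trail)
# grid would then be flattened and sent to the backend classifier.
```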
A sequence diagram in Fig: 2 depicts a desired flow of user inputs and responses from the system.

**Fig: 2**
5. REQUIREMENTS
5.1. HARDWARE REQUIREMENTS
- Monitor: CRT or LCD monitor
- Keyboard: Normal or Multimedia
- Mouse: Compatible mouse or Touchpad
- Audio Output Device
5.2. SOFTWARE REQUIREMENTS
- **NLP**: Natural Language Processing, or NLP for short, is broadly defined as the automatic manipulation of natural language, like speech and text, by software. The study of natural language processing has been around for more than 50 years and grew out of the field of linguistics with the rise of computers.
- **Machine Learning**: Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
- **Python**: An open-source general-purpose interpreted, high-level programming language can be used to create web apps, desktop applications, games, data science applications, and a variety of other items.
- **Flask**: Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. It began as a simple wrapper around Werkzeug and Jinja and has become one of the most popular Python web application frameworks.
- **JavaScript**: JavaScript, often abbreviated as JS, is a programming language that conforms to the ECMAScript specification. JavaScript is high-level, often just-in-time compiled, and multi-paradigm. It has curly-bracket syntax, dynamic typing, prototype-based object-orientation, and first-class functions.
6. CONCLUSIONS
In this paper, we have studied various types of CAPTCHA. A brief review has been carried out of audio CAPTCHAs, listing the limitations of sound CAPTCHAs across diverse methodologies. A huge scope for research exists in designing new and novel CAPTCHA procedures that are easy to use, require less server-side processing, offer enhanced security against bots, and are feasible for disabled as well as non-disabled people.
In this paper, we propose a new structure for CAPTCHA systems. In this method, a task-performing question is selected and converted into an audio file using Text-To-Speech (TTS). The audio is played for the user, who is asked to perform the task he/she hears using movements of the cursor. The ML model designed to detect the cursor movement validates the performed task against the question asked and allows visually impaired people to log in successfully without any trouble.
7. REFERENCES
Package ‘async’
May 26, 2023
Title Coroutines: Generators / Yield, Async / Await, and Streams
Version 0.3.2
Date 2023-05-24
BugReports https://github.com/crowding/async/issues
Description Write sequential-looking code that pauses and resumes.
- gen() creates a generator, an iterator that returns a value and pauses each time it reaches a yield() call.
- async() creates a promise, which runs until it reaches a call to await(), then resumes when information is available.
These work similarly to generator and async constructs from 'Python' or 'JavaScript'. Objects produced are compatible with the 'iterators' and 'promises' packages.
Version 0.3 supports on.exit, single-step debugging, stream() for making asynchronous iterators, and delimited goto() in switch() calls.
License GPL-2
Encoding UTF-8
Depends R (>= 4.1)
Imports iterators, nseval (>= 0.4.3), later, promises, testthat (>= 3.0.0), stringr, methods
Suggests rmarkdown, knitr, dplyr, curl, audio, profvis, ggplot2, XML
RoxygenNote 7.2.3
VignetteBuilder knitr
Config/testthat/edition 3
async
Create an asynchronous task from sequential code.
**Description**
`async(...)`, with an expression written in its argument, allows that expression to be evaluated in an asynchronous, or non-blocking manner. `async` returns an object with class `c("async", "promise")` which implements the `promise` interface.
**Usage**
```r
async(
  expr,
  ...,
  split_pipes = TRUE,
  compileLevel = getOption("async.compileLevel"),
  debugR = FALSE,
  debugInternal = FALSE,
  trace = getOption("async.verbose")
)

await(prom, error)
```
Arguments

- `expr` An expression, to be executed asynchronously.
- `...` Undocumented.
- `split_pipes` Rewrite chained calls that use await (see below).
- `compileLevel` Compilation level; same options as for gen.
- `debugR` Set TRUE to enter the browser immediately upon executing the first R expression.
- `debugInternal` Set TRUE to single-step at implementation level, immediately upon execution.
- `trace` Enable verbose logging by passing a function to trace, like trace=cat. This function should take a character argument.
- `prom` A promise, or something that can be converted to such by promises::as.promise().
- `error` This argument will be forced if the promise rejects. If it is a function, it will be called with the error condition.
Details
An example Shiny app using async/await is on Github: https://github.com/crowding/cranwhales-await
When an async object is activated, it will evaluate its expression until it reaches the keyword await. The async object will return to its caller and preserve the partial state of its evaluation. When the awaited promise is resolved, evaluation continues from where the async left off.
When an async block finishes (either by reaching the end, or using return()), the promise resolves with the resulting value. If the async block stops with an error, the promise is rejected with that error.
Async blocks and generators are conceptually related and share much of the same underlying mechanism. You can think of one as "output" and the other as "input". A generator pauses until a value is requested, runs until it has a value to output, then pauses again. An async runs until it requires an external value, pauses until it receives the value, then continues.
The syntax rules for an async are analogous to those for gen(): await must appear only within the arguments of functions for which there is a pausable implementation (See pausables()). For async the default split_pipes=TRUE is enabled; this will rearrange some expressions to satisfy this requirement.
When split_pipes=FALSE, await() can only appear in the arguments of pausables and not ordinary R functions. This is an inconvenience as it prevents using await() in a pipeline. With split_pipes=TRUE applies some syntactic sugar: if an await() appears in the leftmost, unnamed, argument of an R function, the pipe will be "split" at that call using a temporary variable. For instance, either
async(makeRequest() |> await() |> sort())
or, equivalently,
async(sort(await(makeRequest())))
will be effectively rewritten to something like
async({.tmp <- await(makeRequest()); sort(.tmp)})
This works only so long as `await` appears in calls that evaluate their leftmost arguments normally. `split_pipes` can backfire if the outer call has other side effects; for instance `suppressWarnings(await(x))` will be rewritten as `{.tmp <- await(x); suppressWarnings(.tmp)}`, which would defeat the purpose of suppressWarnings.
If `async` is given a function expression, like `async(function(...))`, it will return an "async function" i.e. a function that constructs an async.
Value
`async()` returns an object with class "promise," as defined by the `promises` package (i.e., rather than the kind of promise used in R’s lazy evaluation.)
In the context of an async or stream, `await(x)` returns the resolved value of a promise `x`, or stops with an error.
Examples
```r
myAsync <- async(for (i in 1:4) {
  await(delay(5))
  cat(i, "\n")
})
```
---
### `awaitNext`
**Wait for the next value from a channel or stream.**
**Description**
`awaitNext` can be used within an `async` or `stream` coroutine. When reached, `awaitNext` will register to receive the next element from an async or a coroutine object.
**Usage**
`awaitNext(strm, or, err)`
**Arguments**
- `strm` A channel or stream object.
- `or` This argument will be evaluated and returned in the case the channel closes. If not specified, awaiting on a closed stream will raise an error with message "StopIteration".
- `err` A function to be called if the channel throws an error condition.
**Value**
In the context of an async or stream, `awaitNext(x)` returns the resolved value of a promise `x`, or stops with an error.
---
**channel**
*An object representing a sequence of future values.*
---
**Description**
A channel is an object that represents a sequence of values yet to be determined. It is something like a combination of a promise and an iteror.
**Usage**
```r
channel(obj, ...)
```
```r
## S3 method for class 'function'
channel(
  obj,
  ...,
  max_queue = 500L,
  max_awaiting = 500L,
  wakeup = function(...) NULL
)
```
```r
is.channel(x)
```
**Arguments**
- **obj**
A user-provided function; it will receive three callback functions as arguments, in order, `emit(val)`, `reject(err)` and `close()`
- **...**
Specialized channel methods may take other arguments.
- **max_queue**
The maximum number of outgoing values to store if there are no listeners. Beyond this, calling `emit` will return an error.
- **max_awaiting**
The maximum number of pending requests. If there are this many outstanding requests for values, calling `nextThen(ch, ...)` or `nextElem(ch)` will raise an error.
- **wakeup**
You may optionally provide a callback function here. It will be called when the queue is empty and there is at least one listener/outstanding promise.
- **x**
an object.
Details
The channel interface is intended to represent and work with asynchronous, live data sources, for instance event logs, non-blocking connections, paginated query results, reactive values, and other processes that yield a sequence of values over time.
channel is an S3 method and will attempt to convert the argument obj into a channel object according to its class.
The friendly way to obtain values from a channel is to use awaitNext or for loops within an async or stream coroutine.
The low-level interface to obtain values from a channel is to call nextThen(ch, onNext=, onError=, onClose=, ...), providing callback functions for at least onNext(val). Those callbacks will be appended to an internal queue, and will be called as soon as data is available, in the order that requests were received.
You can also treat a channel as an iteror over promises, calling nextOr(ch) to return a promise representing the next available value. Each promise created this way will be resolved in the order that data comes in. Note that this way there is no special signal for end of iteration; a promise will reject with a condition message "StopIteration" to signal end of iteration.
Be careful with the iterator-over-promises interface, though: if you call as.list.iteror(pr) you may get stuck in an infinite loop, as as.list keeps calling nextElem and receives more promises representing values that exist only hypothetically. This is one reason for the max_awaiting limit.
The friendly way to create a channel with custom behavior is to use a stream coroutine. Inside of stream() call await to wait on promises, awaitNext to wait on other streams and yield to yield values. To signal end of iteration use return() (which will discard its value) and to signal an error use stop().
The low-level interface to create a channel with custom behavior is to call channel(function(emit, reject, cancel) {...}), providing your own function definition; your function will receive those three callback methods as arguments. Then use whatever means to arrange to call emit(val) some time in the future as data comes in. When you are done emitting values, call the close() callback. To report an error call reject(err); the next requestor will receive the error. If there is more than one listener, other queued listeners will get a close signal.
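As a sketch of the low-level interface described above, using only the callback signatures documented here (emission order and queueing follow the defaults):

```r
library(async)

# A channel that emits three values and then closes.
ch <- channel(function(emit, reject, close) {
  for (i in 1:3) emit(i)
  close()
})

# Request values with explicit callbacks; they are served in the
# order the requests were received.
nextThen(ch,
  onNext  = function(val) cat("got", val, "\n"),
  onClose = function() cat("channel closed\n"))
```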
Value
a channel object, supporting methods "nextThen" and "nextOr"
is.channel(x) returns TRUE if its argument is a channel object.
Author(s)
Peter Meilstrup
---
combine
Combine several channels into one.

Description

combine(...) takes any number of promise or channel objects. It awaits each one, and returns a channel object which re-emits every value from its targets, in whatever order they are received.
Usage
combine(...)
Arguments
... Each argument should be a promise or a channel.
Value
a channel object.
Author(s)
Peter Meilstrup
---
debugAsync
*Toggle single-step debugging for a coroutine.*
Description
Toggle single-step debugging for a coroutine.
Usage
debugAsync(x, R, internal, trace)
Arguments
x
A coroutine object as constructed by (async, gen or stream).
R
Set TRUE to step through expressions at user level
internal
Set TRUE to step through at coroutine implementation level.
trace
Set TRUE or provide a print function to print each R expression evaluated in turn.
Value
a list(R=, internal=, trace=) with the current debug state.
delay
Asynchronous pause.
Description
"delay" returns a promise which resolves only after the specified number of seconds. This uses the R event loop via later. In an [async] construct you can use `await(delay(secs))` to yield control, for example if you need to poll in a loop.
Usage
delay(secs, expr = NULL)
Arguments
secs
The promise will resolve after at least this many seconds.
expr
The value to resolve with; will be forced after the delay.
Value
An object with class "promise".
Examples
# print a message after a few seconds
async({await(delay(10)); cat("Time's up!\n")})
format.coroutine
Query / display coroutine properties and state.
Description
The coroutine format method displays its source code, its effective environment, whether it is running or finished, and a label indicating its last known state. The summary method returns the same information in a list.
`summary(obj)` returns a list with information on a coroutine's state, including:
- code: the expression used to create the coroutine;
- state: the current state (see below);
- node: is a character string that identifies a location in the coroutine source code; for example, a typical state string might be ".{.<-2.await__then", which can be read like "in the first argument of \{, in the second argument of <-, in a call to await(), at internal node then.";
- envir: the environment where the coroutine is evaluating R expressions;
- err: the error object, if the coroutine caught an error.
summary(g)$state for a generator g might be "yielded", "running" (if nextElem is currently being called), "stopped" (for generators that have stopped with an error), or "finished" (for generators that have finished normally).
summary(a)$state of an async might be "pending", "resolved" or "rejected".
summary(s)$state on a stream might be "resolved", "rejected", "running", "woken", "yielding", or "yielded".
Usage
## S3 method for class 'coroutine'
format(x, ...)
## S3 method for class 'coroutine'
summary(object, ...)
## S3 method for class 'generator'
summary(object, ...)
## S3 method for class 'async'
summary(object, ...)
## S3 method for class 'stream'
summary(object, ...)
Arguments
x A coroutine.
... Undocumented.
object a coroutine (async, generator, or stream) object.
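A minimal sketch of querying coroutine state (assuming the async package is attached, and that nextOr from the iterors package, which async builds on, is available):

```r
library(async)

g <- gen({ yield("a"); yield("b") })
nextOr(g, NULL)      # advance the generator to its first yield
summary(g)$state     # documented above to be "yielded" at this point
format(g)            # source code, environment, and last known state
```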
gather
Description
gather takes a channel as argument and returns a promise. All values emitted by the channel will be collected into a vector matching the prototype mode. After the source channel closes, the promise will resolve with the collected vector.
The method as.promise.channel is a synonym for gather.
collect and collector are used in the implementation of the above functions. collect calls the function fn in its argument, supplying a callback of the form function (val, name=NULL). I like to call it emit. While fn is running, it can call emit(x) any number of times. After fn returns, all the values passed to emit are returned in a vector, with optional names.
collector() works similarly to collect() but does not gather values when your inner function returns. Instead, it provides your inner function with two callbacks, one to add a value and the second to extract the value; so you can use that callback to extract values at a later time. For an example of collector usage see the definition of gather.
Usage
gather(ch, type = list())
## S3 method for class 'channel'
as.promise(x)
collect(fn, type = list())
collector(fn, type = list())
Arguments
ch a channel object.
type A prototype output vector (similar to the FUN.VALUE argument of vapply). Defaults to list().
x a channel.
fn A function, which should accept a single argument, here called emit.
Value
gather(ch, list()) returns a [promise] that eventually resolves with a list. If the channel emits an error, the promise will reject with that error. The partial results will be attached to the error as attr(err, "partialResults").
collect returns a vector of the same mode as type.
Author(s)
Peter Meilstrup
Examples
ch <- stream(for (i in 1:10) {await(delay(0.1)); if (i %% 3 == 0) yield(i)})
## Not run: ch |> gather(numeric(0)) |> then(\(x) cat(x, "\n"))
# cumulative sum with collect
cumsum <- function(vec) {
total <- 0
collect(type=0, function(emit) {
for (i in vec) total <- emit(total+i)
})
}
# `as.list.iteror` is implemented simply with `collect`:
as.list.iteror <- function(it) {
  collect(function(yield) repeat yield(nextOr(it, break)))
}
gen
Create an iterator using sequential code.
Description
gen({...}) with an expression written in its argument, creates a generator, an object which computes an indefinite sequence.
When written inside a generator expression, yield(expr) causes the generator to return the given value, then pause until the next value is requested.
When running in a generator expression, yieldFrom(it), given a list or iterator in its argument, will yield successive values from that iterator until it is exhausted, then continue.
Usage
gen(
expr,
...
, split_pipes = FALSE,
compileLevel = getOption("async.compileLevel")
)
yield(expr)
yieldFrom(it, err)
Arguments
expr An expression, to be turned into an iterator.
...
Undocumented.
split_pipes Silently rewrite expressions where "yield" appears in chained calls. See async.
compileLevel Current levels are 0 (no compilation) or -1 (name munging only).
it A list, iterator or compatible object.
err An error handler
Details
On the "inside", that is the point of view of code you write in {...}, is ordinary sequential code using conditionals, branches, loops and such, outputting one value after another with yield(). For example, this code creates a generator that computes a random walk:
rwalk <- gen({
  x <- 0
  repeat {
    x <- x + rnorm(1)
    yield(x)
  }
})
On the "outside," that is, the object returned by \texttt{gen()}, a generator behaves like an iterator over an indefinite collection. So we can collect the first 100 values from the above generator and compute their mean:
\begin{verbatim}
rwalk |> itertools2::take(100) |> as.numeric() |> mean()
\end{verbatim}
When \texttt{nextOr(rwalk, \ldots)} is called, the generator executes its "inside" expression, in a local environment, until it reaches a call to \texttt{yield(). \texttt{The generator} 'pauses', preserving its execution state, and \texttt{nextElem} returns what was passed to \texttt{yield}. The next time \texttt{nextElem(rwalk)} is called, the generator resumes executing its inside expression starting after the \texttt{yield()}. If you call \texttt{gen} with a function expression, as in:
\begin{verbatim}
gseq <- gen(function(x) for (i in 1:x) yield(i))
\end{verbatim}
then instead of returning a single generator it will return a \emph{generator function} (i.e. a function that constructs and returns a generator.) The above is morally equivalent to:
\begin{verbatim}
gseq <- function(x) {force(x); gen(for (i in 1:x) yield(i))}
\end{verbatim}
so the generator function syntax just saves you writing the \texttt{force} call.
A generator expression can use any R functions, but a call to yield may only appear in the arguments of a "pausable" function. The async package has several built-in pausable functions corresponding to base R's control flow functions, such as if, while, tryCatch, <-, {}, and so on (see pausables for more details.) A call to yield may only appear in an argument of one of these pausable functions. So this random walk generator:
rwalk <- gen({x <- 0; repeat {x <- yield(x + rnorm(1))}})
is legal, because yield appears within arguments to {}, repeat, and <-, for which this package has pausable definitions. However, this:
rwalk <- gen({x <- rnorm(1); repeat {x <- rnorm(1) + yield(x)}})
is not legal, because yield appears in an argument to +, which does not have a pausable definition.
Value
gen(...) returns an iteror.
yield(x) returns the same value x.
yieldFrom returns NULL, invisibly.
Examples
i_chain <- function(...) {
  iterators <- list(...)
  gen(for (it in iterators) yieldFrom(it))
}
goto
Coroutine switch with delimited goto.
Description
The switch function implemented for coroutines in the async package is stricter than the one in base R. In a coroutine, switch will always either take one of the given branches or throw an error, whereas base R switch will silently return NULL if no branch matches the switch argument. Otherwise, the same conventions apply as base::switch() (e.g. empty switch branches fall through; a character switch may have one unnamed argument as a default.)
Usage
goto(branch = NULL)
Arguments
branch A character string naming the new branch. If missing or NULL, jumps back to re-evaluate the switch argument.
Details
Coroutine switch also supports a delimited form of goto. Within a branch, goto("other_branch") will stop executing the present branch and jump to the named branch. Calling goto() without arguments will jump back to re-evaluate the switch expression.
If a goto appears in a try-finally call, as in:
```
switch("branch",
branch=tryCatch({...; goto("otherBranch")},
finally={cleanup()}),
otherBranch={...}
)
```
the finally clause will be executed before switching to the new branch.
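The delimited goto can also be exercised in otherwise synchronous code via run() (documented later in this manual). The following sketch assumes the async package is attached; the branch names and input are illustrative:

```r
library(async)

# A tiny two-step state machine: "validate" jumps to either
# "process" or "fail" depending on the input.
input <- "abc"
result <- run({
  switch("validate",
    validate = if (nchar(input) > 0) goto("process") else goto("fail"),
    process  = paste("processed", input),
    fail     = "empty input"
  )
})
```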
graphAsync
Draw a graph representation of a coroutine.
Description
graphAsync will traverse the objects representing a generator or async and render a graph of its structure using Graphviz (if it is installed.)
Usage
graphAsync(
obj,
basename = if (is.name(substitute(obj))) as.character(substitute(obj)) else
stop("Please specify basename"),
type = "pdf",
...,
envs = TRUE,
vars = FALSE,
handlers = FALSE,
orphans = FALSE,
dot = find_dot(),
filename = paste0(basename, ".", type),
dotfile = if (type == "dot") filename else paste0(basename, ".dot")
)
Arguments
obj A generator, async or stream object.
basename The base file name. If basename="X" and type="pdf" you will end up with two files, "X.dot" and "X.pdf".
type the output format. If "dot", we will just write a Graphviz dot file. If another extension like "pdf" or "svg", we will write a DOT file and then attempt to invoke Graphviz dot (if it is available according to Sys.which) to produce the image. If type="", graphAsync will return Graphviz DOT language as a character vector.
... Unused.
envs If TRUE, multiple nodes that share the same environment will be grouped together in clusters.
vars If TRUE, context variables used in each state node will be included on the graph, with edges indicating reads/stores.
handlers If TRUE, state nodes will have thin edges connecting to trampoline handlers they call, in addition to the dashed edges connecting to the next transition.
orphans If TRUE, nodes will be included even if there are no connections to them (this mostly being interface methods and unused handlers).
dot Optional path to the dot executable.
filename Optionally specify the output picture file name.
dotfile Optionally specify the output DOT file name.
Details
graphAsync will write a Graphviz DOT format file describing the given generator or async/await block. The graph shows the generator as a state machine with nodes that connect to each other.
If type is something other than dot graphAsync will then try to invoke Graphviz dot to turn the graph description into an image file.
The green octagonal node is where the program starts, while red "stop" and blue "return" are where it ends. Nodes in green type on dark background show code that runs in the host language unmodified; gray nodes implement control flow. Dark arrows carry a value; gray edges carry no value. A "semicolon" node receives a value and discards it.
Some nodes share a context with other nodes, shown by an enclosing box. Contexts can have state variables, shown as a rectangular record; orange edges from functions to variables represent writes; blue edges represent reads.
Dashed edges represent a state transition that goes through a trampoline handler. Dashed edges have a Unicode symbol representing the type of trampoline; (DOUBLE VERTICAL BAR) for await/yield; (TOP ARC ANTICLOCKWISE ARROW WITH PLUS) or (TOP ARC CLOCKWISE ARROW WITH MINUS) to wind on or off an exception handler; (ANTICLOCKWISE TRIANGLE-HEADED BOTTOM U-SHAPED ARROW) for a plain trampoline with no side effects (done once per loop, to avoid overflowing the stack.) Meanwhile, a thin edge connects to the trampoline handler. (So the user-facing "yield" function registers a continuation to the next step but actually calls the generator's yield handler.)
Value
If type="", a character vector of DOT source. Else The name of the file that was created.
Examples
```r
randomWalk <- gen({x <- 0; repeat {yield(x); x <- x + rnorm(1)}})
```
## Not run:
graphAsync(randomWalk, "pdf")
# writes "randomWalk.dot" and invokes dot to make "randomWalk.pdf"
# or, display it in an R window with the Rgraphviz package:
```
g <- Rgraphviz::agread("randomWalk.dot")
Rgraphviz::plot(g)
```
## End(Not run)
# Or render an HTML widget using DiagrammeR:
## Not run:
dot <- graphAsync(randomWalk, type="")
DiagrammeR::DiagrammeR(paste0(dot, collapse="\n"), type="grViz")
## End(Not run)
---
nextThen
Receive values from channels by callback.
Description
nextThen is the callback-oriented interface for working with channel objects. Provide callback functions to receive the next element, error, and close signals; your callbacks will be stored in a queue and called when values are available.
Usage
```r
nextThen(x, onNext, onError, onClose, ...)
```
```r
subscribe(x, ...)
```
Arguments
- **x**: A channel object
- **onNext**: For `nextThen`, a function to be called with the next emitted value. For `subscribe`, a function to be called with each emitted value until the stream finishes.
- **onError**: Function to be called if channel stops with an error. Note that if you call `nextThen` multiple times to register multiple callbacks, only the first will receive `onError`; the rest will be called with `onClose`.
- **onClose**: Function to be called if the channel finishes normally.
- **...**: Undocumented.
Details
`subscribe` is similar to `nextThen` except that your `onNext` will be called for each value the channel emits. It is just implemented in terms of `nextThen`, with a callback that re-registers itself.
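A sketch of the callback interface (assuming the async package is attached, and that `subscribe` forwards the `onNext`/`onClose` callbacks described above):

```r
library(async)

# An eager stream that emits three values as the event loop turns.
ch <- stream({
  for (i in 1:3) {
    await(delay(0.01))
    yield(i * 10)
  }
}, lazy = FALSE)

# Print each emitted value, then note when the channel closes.
subscribe(ch,
  onNext  = function(x) cat("got:", x, "\n"),
  onClose = function()  cat("channel closed\n"))
```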
---
**pausables**
*Pausable functions.*
---
**Description**
Coroutines rely on "pausable" workalikes for control flow functions like `if`, `while`, and so on. `pausables()` scans for and returns a list of all pausable functions visible from the present environment.
**Usage**
```r
pausables(envir = caller(), packages = NULL)
```
Arguments
- **envir**: The environment to search (defaulting to the calling environment).
- **packages**: By default, only pausable functions visible from the caller's environment are found; the `packages` argument specifies additional packages to search. `packages=.packages()` will search all currently loaded packages; `.packages(all.available=TRUE)` will search all installed packages.
Details
A pausable function is a public function that has a corresponding private function with a name ending with `_cps`. Most of these private functions are defined in `async` source file `cps.r`. For instance, `async:::for_cps` contains the pausable implementation of `for`.
**Value**
A list of expressions (either names or `:::` calls).
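For instance (assuming the async package is attached):

```r
library(async)

# Pausable functions visible from here; each entry is a name or a
# `:::` call naming a function (like `for`, `if`, `tryCatch`) that
# may enclose a call to yield()/await().
ps <- pausables()
length(ps)
print(ps[1:5])
```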
---
**run**
*Run a coroutine expression immediately, without pausing.*
**Description**
`run(expr)` with an expression directly written, will parse that expression as a coroutine, but then run it without pausing.
**Usage**
```r
run(
expr,
type = list(),
...,
split_pipes = FALSE,
debugR = FALSE,
debugInternal = FALSE,
trace = getOption("async.verbose")
)
```
**Arguments**
- `expr` A generator expression, same as you would write in `gen`.
- `type` A value whose mode will determine the output vector mode (as in `vapply`).
- `...` Undocumented.
- `split_pipes` See `async`; defaults to FALSE.
- `debugR` Will open a browser at the first and subsequent R evaluations allowing single-stepping through user code.
- `debugInternal` Will set a breakpoint at the implementation level, allowing async-stepping through async package code.
- `trace` a tracing function.
**Details**
If the expression contains any calls to `yield()`, `run()` will collect all the values passed to `yield()` and return a list. If the expression contains a `yield()` but it is never called, `run()` returns an empty list. If the expression does not contain a `yield` at all, `run` returns the expression’s final return value.
`run(expr)` is similar to `as.list(gen(expr))`, except `run(expr)` evaluates its expression directly in the calling environment, while `gen` creates a new enclosed environment to run in.
`run` is useful if you want to take advantage of coroutine language extensions, such as using for loops over iterators, or using `goto()` in switch statements, in otherwise synchronous code. If you want to collect a variable-length sequence of values but don’t need those features, using `collect` directly will have better performance.
Value
If `expr` contains any `yield` calls, a vector of the same mode as `type`; otherwise the return value of `expr`.
Examples
```r
run(type=0, {
for (i in iterors::iseq(2, Inf, by=5)) {
if (i %% 37 == 0) break
else yield(i)
}
})
```
**stream**
Create an asynchronous iterator by writing sequential code.
Description
(Experimental as of async 0.3) `stream(...)` constructs a channel object, i.e. an asynchronous iterator, which will compute and return values according to sequential code written in `expr`. A stream is a coroutine wearing a channel interface in the same way that async is a coroutine wearing a promise interface, and a gen is a coroutine sitting behind an iteror interface.
Usage
```r
stream(
expr,
...,
split_pipes = TRUE,
lazy = TRUE,
compileLevel = getOption("async.compileLevel"),
debugR = FALSE,
debugInternal = FALSE,
trace = getOption("async.verbose")
)
```
Arguments
- `expr` A coroutine expression, using some combination of `yield`, `await`, `awaitNext`, `yieldFrom`, standard control flow operators and other calls.
- `...` Undocumented.
- `split_pipes` See description under async; defaults to TRUE.
- `lazy` If TRUE, start paused, and pause after `yield()` (see above.)
- `compileLevel` Compilation level.
- `debugR` Set TRUE to single-step debug at R level. Use `debugAsync()` to enable or disable debugging on a stream after it has been created.
- `debugInternal` Set TRUE to single-step debug at coroutine implementation level.
- `trace` An optional tracing function.
**Details**
In a stream expression, you can call `yield()` to emit a value, and `await()` to wait for a value from a *promise*. To have your stream wait for values from another stream or channel, call `awaitNext()`; you can also use `awaitNext` when you are writing an `async`. You can also use a simple `for` loop to consume all future values from a stream or channel.
The lower-level interface to consume values from a stream is by using `nextThen` from the `channel` interface.
Streams come in both "lazy" and "eager" varieties. If `lazy=TRUE`, a stream starts idle, and does not process anything until it is woken up by a call to its channel’s `nextThen`. It will pause after reaching `yield` if there are no more outstanding requests. If `lazy=FALSE`, a stream will begin executing immediately, not pausing on `yield`, possibly queuing up emitted values until it needs to await something.
(For comparison, in this package, `gen` are lazy in that they do not start executing until a call to `nextOr` and pause immediately after `yield`, while `async` blocks are eager, starting at construction and running until they hit an `await`.)
Like its coroutine counterparts, if stream is given a function expression, like `stream(function(...) ...)`, it will return a "stream function", i.e. a function that constructs a stream object.
**Value**
An object with (at least) classes "stream", "channel", "coroutine", "iteror", "iter".
**Author(s)**
Peter Meilstrup
**Examples**
```r
# emit values _no more than_ once per second
count_to <- stream(function(n, interval=1) {
for (i in 1:n) {
await(delay(interval))
yield(i)
}
})
accumulate <- stream(function(st, sum=0) {
for (i in st) {sum <- sum + i; yield(sum)}
})
print_each <- async(function(st) for (i in st) print(i))
count_to(10) |> accumulate() |> print_each()
```
{"Source-Url": "https://cran.r-project.org/web/packages/async/async.pdf", "len_cl100k_base": 7635, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 45241, "total-output-tokens": 9108, "length": "2e12", "weborganizer": {"__label__adult": 0.00029587745666503906, "__label__art_design": 0.00028824806213378906, "__label__crime_law": 0.0001863241195678711, "__label__education_jobs": 0.0001844167709350586, "__label__entertainment": 7.838010787963867e-05, "__label__fashion_beauty": 7.581710815429688e-05, "__label__finance_business": 6.443262100219727e-05, "__label__food_dining": 0.000255584716796875, "__label__games": 0.0005459785461425781, "__label__hardware": 0.0003228187561035156, "__label__health": 0.00012350082397460938, "__label__history": 0.00010466575622558594, "__label__home_hobbies": 4.631280899047851e-05, "__label__industrial": 0.0001379251480102539, "__label__literature": 0.00013935565948486328, "__label__politics": 0.00011372566223144533, "__label__religion": 0.00023949146270751953, "__label__science_tech": 0.0013217926025390625, "__label__social_life": 6.383657455444336e-05, "__label__software": 0.01110076904296875, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.00015664100646972656, "__label__transportation": 0.00014388561248779297, "__label__travel": 0.00014138221740722656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32620, 0.00678]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32620, 0.53996]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32620, 0.78585]], "google_gemma-3-12b-it_contains_pii": [[0, 1369, false], [1369, 1875, null], [1875, 4346, null], [4346, 5927, null], [5927, 7261, null], [7261, 10020, null], [10020, 10709, null], [10709, 12203, null], [12203, 13999, null], [13999, 15144, null], [15144, 16455, null], [16455, 19025, null], 
[19025, 20439, null], [20439, 22319, null], [22319, 24483, null], [24483, 26349, null], [26349, 28093, null], [28093, 29370, null], [29370, 31419, null], [31419, 31464, null], [31464, 32620, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1369, true], [1369, 1875, null], [1875, 4346, null], [4346, 5927, null], [5927, 7261, null], [7261, 10020, null], [10020, 10709, null], [10709, 12203, null], [12203, 13999, null], [13999, 15144, null], [15144, 16455, null], [16455, 19025, null], [19025, 20439, null], [20439, 22319, null], [22319, 24483, null], [24483, 26349, null], [26349, 28093, null], [28093, 29370, null], [29370, 31419, null], [31419, 31464, null], [31464, 32620, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32620, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32620, null]], "pdf_page_numbers": [[0, 1369, 1], [1369, 1875, 2], [1875, 4346, 3], [4346, 5927, 4], [5927, 7261, 5], [7261, 10020, 6], [10020, 10709, 7], [10709, 12203, 8], [12203, 13999, 9], [13999, 15144, 10], [15144, 16455, 11], [16455, 19025, 12], [19025, 20439, 13], [20439, 22319, 14], [22319, 24483, 15], [24483, 26349, 16], [26349, 28093, 17], [28093, 29370, 18], [29370, 31419, 19], [31419, 31464, 20], [31464, 32620, 
21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32620, 0.00377]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
87a4cd4ebad1b27d8a8e3e404cf72e46cb3911e0
|
An Experimental Evaluation of a De-biasing Intervention for Professional Software Developers
Martin Shepperd
Brunel University London
London, UK
martin.shepperd@brunel.ac.uk
Carolyn Mair
Fashion.Psychology
London, UK
carolyn.mair@gmail.com
Magne Jørgensen
Simula Research Laboratory
Oslo, Norway
magnej@simula.no
ABSTRACT
Context: The role of expert judgement is essential in our quest to improve software project planning and execution. However, its accuracy is dependent on many factors, not least the avoidance of judgement biases, such as the anchoring bias, arising from being influenced by initial information, even when it’s misleading or irrelevant. This strong effect is widely documented.
Objective: We aimed to replicate this anchoring bias using professionals and, novel in a software engineering context, explore de-biasing interventions through increasing knowledge and awareness of judgement biases.
Method: We ran two series of experiments in company settings with a total of 410 software developers. Some developers took part in a workshop to heighten their awareness of a range of cognitive biases, including anchoring. Later, the anchoring bias was induced by presenting low or high productivity values, followed by the participants’ estimates of their own project productivity. Our hypothesis was that the workshop would lead to reduced bias, i.e., a de-biasing intervention.
Results: The anchors had a large effect (robust Cohen’s $d = 1.19$) in influencing estimates. This was substantially reduced in those participants who attended the workshop (robust Cohen’s $d = 0.72$). The reduced bias related mainly to the high anchor. The de-biasing intervention also led to a threefold reduction in estimate variance.
Conclusion: The impact of anchors upon judgement was substantial. Learning about judgement biases does appear capable of mitigating, although not removing, the anchoring bias. The positive effect of de-biasing through learning about biases suggests that it has value.
KEYWORDS
Software engineering experimentation, Software effort estimation, Expert judgement, Cognitive bias
1 INTRODUCTION
Effective management of software projects demands, amongst other things, accurate resource predictions. For this reason cost or effort modelling has been a major topic of research over many years [17, 26]. However, the preponderance of this research has focused on the development and evaluation of formal predictive models. In contrast, the role of human experts — who engage in this process, who make choices about model inputs and outputs — has been somewhat neglected [13, 14].
Human judgement and decision-making has been studied for decades by cognitive psychologists, e.g., the well-known work of Kahneman et al. [21, 33]. An important finding is that humans typically use heuristics, i.e., simple mental strategies which, although sufficient in most circumstances, may lead to poor judgements and decisions in others and software engineering is not exempt from this.
When the use of heuristics leads to a deviation from a rational norm, such as when the heuristic does not fit the context or is based on misleading or irrelevant input, it leads to errors we call judgement and decision biases. Heuristics, and consequently the judgement and decision biases, are frequently unconscious. This means that the users of heuristics typically will not be able to explain properly how a judgement or decision was made, why a poor judgement or decision was made, or how to improve the judgement and decision process.
Many judgement and decision biases have been identified, however our study focuses on the impact of the anchoring bias. This bias is thoroughly documented as widespread and leading to significant distortions of judgement [5, 22].
A judgement based on the anchoring heuristic, e.g., when estimating effort or productivity, may frequently be useful. Imagine a situation where a technically competent project leader indicates that she believes that a software development task should take about 10 work-hours. You are then asked to give your judgement of the effort you would need for that task. Given that the project leader is competent, it saves you time and mental effort to base your thinking process on that 10 work-hours as a good starting point, or to compare the current task with other tasks of about 10 work-hours to find out whether this one is larger or smaller. It may even improve the accuracy of the effort estimate. But what if the number used by your anchoring heuristic is totally irrelevant, such as the number of hours spent on your previous task, or misleading, such as a very low number of work-hours a technically incompetent client wants you to use? Several studies suggest that software professionals, like everyone else, are affected by presented numbers, even when they are irrelevant or misleading [10, 15]. This happens even when professionals are explicitly requested to ignore them [18, 29].
While there are hundreds of studies on the presence of human biases in judgement and decision making, including many on the anchoring bias, there has not been much research on the impact of increased awareness of cognitive biases on the reduction of such biases (i.e., de-biasing). To investigate this topic, we conducted an experiment where the intervention was a workshop to increase participant awareness of cognitive biases and then compared these results with those of participants from a previously published experiment completing the same task who had not attended the workshop [16].
Another limitation of previous research is that most evidence for the existence of the anchoring bias comes from student samples and the use of tasks where students have little previous experience. In contrast, the sample in our study comprised professional software developers who were asked to estimate their own productivity on a task they had previously completed. This, we believe, makes the task more familiar for the subject and increases the relevance of the results to real-world tasks.
The remainder of the paper is organized as follows. First we present related work and supporting evidence for cognitive biases and how this might impact judgement and decision making. Next we describe the two-factor experimental design (low and high anchor, de-biasing and no intervention) experiment. We present the results of our robust statistical analysis, initially from 118 professional participants and then pooled with participants from a set of previous experiments. We conclude by discussing the implications of these results for improving professional judgements and outline some areas for further investigation.
2 RELATED WORK
The anchoring bias is one of the strongest, easiest to create, most robust, longest-lasting and most studied of the human biases [9]. The most famous study of the anchoring bias involved a rigged wheel of fortune and the question: What percentage of the members of the UN are African countries? First, the research participants spun the wheel, which stopped at 10 or 65 depending on how it was rigged, and were asked whether they thought the percentage of African countries in the UN was more than or less than the number on the wheel. Following that question, the participants were asked to predict the proportion of African countries in the UN. The difference in answers between the two groups was large. Those in the first group (wheel stopping at 10) gave a median prediction of 25% African countries in the UN, while those in the second group (wheel stopping at 65) gave a median prediction of 45% [33]. It is hard to imagine that the participants would think that a number on a wheel of fortune, which they believed gave a random number between 0 and 100, revealed any information about the actual proportion of African countries in the UN. Nevertheless, they were strongly affected by the number presented to them. Numerous subsequent studies, following similar anchoring-inducing procedures, have shown similar effects. Even completely irrelevant anchors, such as digits from social security numbers or phone numbers, have been demonstrated to strongly bias people's judgements [2].
The anchoring bias is clearly relevant outside artificial experimental settings. Software professionals' time predictions were, for example, strongly affected by knowledge of what a customer had communicated as her expectation of time usage, in spite of being informed that the customer had no competence in predicting the time usage [18]. When asked whether they thought they had been affected by the customer's expectations, i.e., by the anchoring information, these professionals either denied it or responded that they were only affected a little. This feeling of not being much affected, when in reality being affected a lot, is part of what makes the anchoring bias potent and hard to avoid. Even extreme anchors or suggestions, for instance that the length of a whale is 900 metres (unreasonably high anchor) or 0.2 metres (unreasonably low anchor), are effective in influencing people's judgements [32]. Anchoring effects also seem remarkably robust to warnings. The following are instructions from a software development effort estimation study on anchoring: I admit I have no experience with software projects, but I guess this will take about two months to finish; I may be wrong, of course; we'll wait for your calculations for a better estimate [1]. In spite of the warnings, the software developers were strongly affected by the anchoring value of two months.
The cognitive basis of the anchoring bias is disputed and there are at least three different, partly overlapping, explanations: 1) Anchoring as communication (the attitude change theory), i.e., that it is natural for us to give weight to what other people communicate [34]. 2) Anchors as a starting point (the anchoring and adjustment theory), i.e., that the anchor is the starting point and that the adjustment away from the anchor typically is insufficient [21]. 3) Anchors as an activating experience (the selective accessibility theory), i.e., that the anchor activates experiences and that recently activated experience is more likely to be used in the subsequent judgement process [28]. All explanations have supporting evidence and it is possible that they all contribute to the observed anchoring bias.
De-biasing is applying mitigating interventions to reduce the impact of a bias. Fischhoff [8] suggests a fourfold classification scheme:
- (a) warning about the possibility of bias without specifying its nature.
- (b) describing the direction (and possibly extent) of the bias that might typically be observed.
- (c) providing feedback, preferably at a personal level.
- (d) offering an extended program of training with feedback, coaching, etc.
Given the large effect size and importance of the anchoring bias, it is not surprising that research has been devoted to studying de-biasing strategies, including how to reduce or remove the anchoring bias. Although several methods for de-biasing have been proposed and tested, researchers have struggled to remove this effect. Examples of de-biasing strategies with some positive effect, but far from eliminating the bias, are to "consider the opposite" [30] and the introduction of new, more relevant, anchors (known as re-biasing) [23]. The study by Lovallo and Sibony [24] reported that the 25% of companies best at avoiding and reducing decision biases, i.e., better at de-biasing, had a 5.3% advantage over the 25% worst (i.e., 6.9% vs 1.6% typical ROI). This suggests that de-biasing strategies are of substantial real-world importance.
In our paper, we examine the de-biasing effect of increasing the awareness of the anchoring effect among software developers. The evidence in support of this type of de-biasing is mixed. A positive, although not very large, effect of a training-based increase of bias awareness, including the anchoring bias, was reported in [27]. In contrast, no positive effect was found from teaching-based increase of bias awareness by [31]. The study reported by Welsh et al. [36] found a positive effect from increased bias awareness on the overconfidence bias, but none for the anchoring bias. The general finding seems to be that increased bias awareness typically has moderate to no effect on how much people are biased in their judgements and decisions [20]. No prior studies have, as far as we know, reported on the effect of increased anchoring bias awareness in the context of professional software developers.
3 EXPERIMENTAL METHOD
3.1 Participants
This study is based upon two series of experiments. The first was conducted by MJ and involved 292 participants from industry with no workshop (de-biasing) intervention. The second series was conducted by CM and MS, with MJ involved for the first experiment of the series. These experiments replicated the initial experimental design (documented in [16] as Estimation Task 1). In addition, the de-biasing intervention of a workshop was introduced prior to the actual experimental task. Table 1 shows the counts of participants by treatment. The participants were all professional software developers drawn from a total of 15 companies and seven different countries, as indicated by Table 2. They were recruited as volunteers from companies with whom MJ had previously collaborated, supplemented by attendees from effort estimation workshops delivered by MS and CM in the UK and New Zealand.
### Table 3: Variables
<table>
<thead>
<tr>
<th>Variable</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>P_id</td>
<td>Unique participant id</td>
</tr>
<tr>
<td>Workshop</td>
<td>Y or N depending on the use of a de-biasing intervention</td>
</tr>
<tr>
<td>Block</td>
<td>Specific id of the experiment, e.g., there are multiple deliveries for some companies either at different times or locations.</td>
</tr>
<tr>
<td>Company</td>
<td>The employing company of the software developer - anonymised</td>
</tr>
<tr>
<td>Country</td>
<td>The country where the software development company is located</td>
</tr>
<tr>
<td>Anchor</td>
<td>High or low depending on the randomly allocated treatment</td>
</tr>
<tr>
<td>EstProd</td>
<td>Estimated coding productivity in LOC per hour for the last completed software project. This is the response variable.</td>
</tr>
</tbody>
</table>
### Table 1: Participants by Treatment
<table>
<thead>
<tr>
<th>Workshop</th>
<th>High anchor</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>N</td>
<td>142</td>
<td>292</td>
</tr>
<tr>
<td>Y</td>
<td>60</td>
<td>118</td>
</tr>
<tr>
<td>Total</td>
<td>203</td>
<td>410</td>
</tr>
</tbody>
</table>
### Table 2: Participants by Country
<table>
<thead>
<tr>
<th>Country</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>Nepal</td>
<td>59</td>
</tr>
<tr>
<td>New Zealand</td>
<td>18</td>
</tr>
<tr>
<td>Poland</td>
<td>92</td>
</tr>
<tr>
<td>Romania</td>
<td>48</td>
</tr>
<tr>
<td>United Kingdom</td>
<td>16</td>
</tr>
<tr>
<td>Ukraine</td>
<td>114</td>
</tr>
<tr>
<td>Vietnam</td>
<td>63</td>
</tr>
<tr>
<td>Total</td>
<td>410</td>
</tr>
</tbody>
</table>
In terms of data cleaning we discarded participants who estimated their productivity as:
- missing values (5 cases eliminated)
- zero values as this implied that the participant had not engaged in coding (3 cases eliminated)
- excessively high values of $\geq 500$ LOC per hour since this implies an implausible level of productivity of almost one LOC per 7 seconds! (4 cases eliminated)
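The cleaning rules above amount to a simple filter. A minimal sketch of that filter in Python (the input format is a hypothetical list of raw estimates; only the thresholds come from the paper):

```python
def clean(estimates):
    """Keep only plausible productivity estimates (LOC per hour)."""
    kept = []
    for est in estimates:
        if est is None:      # missing value
            continue
        if est == 0:         # participant had not engaged in coding
            continue
        if est >= 500:       # under ~7 seconds per LOC: implausible
            continue
        kept.append(est)
    return kept

raw = [10, None, 0, 50, 600, 30.5]
print(clean(raw))  # -> [10, 50, 30.5]
```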
A representative sample of five rows of the data is given in Table 4. The raw data and R scripts are available from https://doi.org/10.6084/m9.figshare.5414200.v3.
4 RESULTS
4.1 Summary statistics
In this section we present the results of our analysis of the experimental data. First we give the basic descriptive statistics for the response variable Estimated Productivity, then explore the basic anchoring effect and finally our main intervention: the de-biasing effect of the workshop.
Table 5 describes our response variable Estimated Productivity. We see values that range from 0.5 to 300 (after the data cleaning described in Section 3.3), with a strong positive skew (evidenced by the mean being greater than the median, and by the strong deviations, particularly in the upper tail, of the qqplot in Fig. 1). For this reason we also compute a 20% trimmed mean and standard deviation as more robust estimators [37]. Both are less than their untrimmed counterparts due to the positive skew (Table 5).
The qqplot also reveals the presence of many ties (horizontal segments of the curve) which correspond to popular round numbers. For example, there are no predictions of 9 LOC, but 41 of 10 LOC and two of 11 LOC. This is illustrated clearly by the stem and leaf plot, where we see that zero dominates as a trailing digit, followed by five (see Fig. 2). Perhaps even more remarkable is that not one participant made an estimate ending in a nine. We conjecture that there is a high degree of uncertainty in the estimates, which leads participants to use 5, 10, 20, ... rather than 9 (which would suggest a strong belief in estimation accuracy). For a discussion of the rounding phenomenon see [12].
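The trailing-digit pattern is easy to check mechanically: tally the last digit of each estimate. A small sketch (the sample estimates below are illustrative, not the study data):

```python
from collections import Counter

def trailing_digit_counts(estimates):
    """Tally the last digit of each integer estimate, as in the stem-and-leaf plot."""
    return Counter(int(e) % 10 for e in estimates)

# Hypothetical estimates showing the round-number clustering the paper reports.
sample = [10, 10, 20, 5, 15, 100, 30, 25, 50, 11]
counts = trailing_digit_counts(sample)
print(counts[0], counts[5], counts[9])  # -> 6 3 0  (zeros dominate; no nines)
```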
4.2 The Anchor Effect
Recall that we do not know the true productivity levels for each participant. But given the random allocation of participants to the anchor treatments we do not believe there is any good reason to expect one group to be more productive than the other. The first thing to observe is the impact of the anchor on all participants, shown graphically in Fig. 3 as boxplots. Note the presence of extreme outliers, denoted by individual observations, for both anchor treatments. Note also the substantial difference in medians, shown by the line across each box and the 95% confidence limits for the medians shown by the notches which do not overlap.
More formally, we can compare the two samples using the robust Yuen test with a bootstrap to estimate the 95% confidence interval. The impact of the anchor is statistically significant, $p \approx 0$. The trimmed mean difference is 60.5 and the 95% confidence interval is (51.6, 69.4). In terms of effect size, this can be expressed simply as a trimmed mean difference of $\sim 60$ LOC per hour between a low and high anchor estimate. Alternatively, if we want to standardise the effect size, we can compute a robust version of Cohen's $d$ using a pooled trimmed standard deviation, which yields $\sim 1.18$, an effect size between large and very large (0.8–1.3) [7]. Essentially, when software professionals are asked to estimate coding productivity, the percentage difference between the low and high anchor groups was approximately 350%.
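The robust estimators used here can be sketched in a few lines. The following is a simplified illustration of a 20% trimmed mean and a standardised effect size based on a pooled SD of the trimmed samples; the paper relies on the estimators of [7, 37], which include corrections not shown here:

```python
import statistics

def trim(xs, trim_frac=0.2):
    """Drop the lowest and highest trim_frac of a sorted sample."""
    xs = sorted(xs)
    g = int(trim_frac * len(xs))
    return xs[g:len(xs) - g]

def trimmed_mean(xs, trim_frac=0.2):
    return statistics.mean(trim(xs, trim_frac))

def robust_d(a, b, trim_frac=0.2):
    """Trimmed-mean difference over a pooled SD of the trimmed samples
    (a sketch, not the exact robust Cohen's d estimator)."""
    ta, tb = trim(a, trim_frac), trim(b, trim_frac)
    pooled_sd = ((statistics.pvariance(ta) + statistics.pvariance(tb)) / 2) ** 0.5
    return (statistics.mean(ta) - statistics.mean(tb)) / pooled_sd

print(trimmed_mean(list(range(10))))  # -> 4.5
```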
4.3 The Workshop Effect
So having shown that the anchor effect is very strong in the context of software estimation, we next consider the impact of the de-biasing intervention of the workshop. But first, we need to address a potential confounder in that the study design is unbalanced; we can see that there are experimental blocks that didn’t receive the intervention at all, or vice versa (see Table 6). This is potentially problematic as the productivity estimates also differ considerably by country (see Table 7). The UK shows much lower Estimated Productivity and Nepal and Vietnam much higher than the other
countries. Therefore we exclude the UK, Nepal and Vietnam to mitigate this problem. This leaves 272 participants with 102 receiving the de-biasing intervention.
We compare the estimates visually in Fig. 4, which groups participants both by anchor (low or high) and by de-biasing treatment (Y or N). It is clear from the boxplots that the median Estimated Productivity for the high anchor without de-biasing (high.N) is substantially greater than the median with de-biasing (high.Y). Recall that the notches indicate the 95% confidence limits and note that these do not overlap. As indicated by the size of the whiskers, the spread of estimates also seems greater when there is no de-biasing workshop. Likewise, we see extreme outliers particularly for the no workshop condition. However, the effect for the low anchor is less obvious.
The median estimate of hourly productivity for the high anchor is reduced from 100 to 30 LOC/hr but for the low anchor the median remains unchanged at 10 LOC/hr. There are three possible reasons for the similarity of the median estimates for those in the low anchor group. First, the companies, and their software professionals, in the workshop group may have been more productive and as a consequence produced even lower estimates in a no workshop context. Second, it is harder to influence people to be negative about one’s own performance, i.e., that there is less room for de-biasing interventions for the low anchor. Third, the de-biasing intervention may have increased their awareness of the optimism-inducing effect of anchor values, which in this case is the increase in productivity values through a high anchor, but not so much the optimism-reducing effect, corresponding to a low productivity anchor. More studies are needed to analyse and better understand this potentially interesting finding.
We also tabulate comparisons of means and, in parentheses, standard deviations in Table 8 and robust analogues based on 20% trimming in Table 9. Since trimming tends to remove extreme values we see the general effect is to slightly reduce our estimates of centre and dispersion.
Formally we can compare the central tendency and dispersion of the two conditions. For central tendency we apply the robust Yuen’s test and find the trimmed mean difference is 25.6, \( p \approx 0 \) and the 95% confidence interval is (15.5, 35.7). This strongly suggests that the de-biasing workshop reduces estimates of productivity. Inasmuch as the higher estimates are influenced upwards by the anchor this is a desirable outcome.
However, we might also expect the spread of estimates to be narrowed if the effect of the anchors is reduced. To compare spread or dispersion we use a simple robust test to compare variance. We expect the de-biasing to reduce the variance of the estimates, since the anchors will have less impact and not stretch out the distribution of estimates. Robust 20% trimmed estimates of standard deviation are given in Table 10, which indicates that the standard deviation is reduced about threefold by the de-biasing workshop intervention. As a formality, we test that this reduction is significant. Since we already know the distribution is heavy-tailed, skewed and generally non-Gaussian, we use the Brown-Forsythe median variant of Levene's test of homogeneity of variance [3]. This gives a test statistic of 36.3, \( p \approx 0 \), meaning it is highly likely the two groups have different variances.
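The Brown-Forsythe variant works by transforming each observation into its absolute deviation from the group median and then running a one-way ANOVA F test on the transformed values. A minimal sketch of the statistic (the two groups below are illustrative, not the study data):

```python
import statistics

def brown_forsythe_F(*groups):
    """Brown-Forsythe statistic: absolute deviations from each group's
    median, then a one-way ANOVA F statistic on those deviations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    grand = sum(v for zg in z for v in zg) / n
    means = [statistics.mean(zg) for zg in z]
    between = sum(len(zg) * (m - grand) ** 2 for zg, m in zip(z, means)) / (k - 1)
    within = sum((v - m) ** 2 for zg, m in zip(z, means) for v in zg) / (n - k)
    return between / within

# Two illustrative groups with very different spread give a large F:
print(round(brown_forsythe_F([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]), 2))  # -> 8.25
```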
Considering both factors, the Anchor and the de-biasing Workshop, together, we use ANOVA, specifically the robust two-way between-between method of Wilcox [25, 37]. The results are given in Table 11; however, we need to sound a note of caution. The variance is strongly heteroscedastic and the data imbalanced, so there may be ordering effects, and we therefore only consider gross outcomes. There is strong evidence that both Anchor and Workshop are associated with estimated productivity, Anchor more so. It is also clear there is an interaction between Anchor and De-biasing, confirmed by the Interaction Plot (Fig. 5). Essentially the de-biasing intervention only seems to impact the high anchor condition. This might be because (i) negative values for the estimate are meaningless and (ii) we suspect that many of the higher values, e.g., greater than 100 LOC/hr, are somewhat hard to accept. Therefore it is probable that the high anchor is causing more bias or distortion than the low anchor.
Table 8: Mean and Standard Deviations for Estimated Productivity by Anchor and De-biasing Workshop
<table>
<thead>
<tr>
<th>Anchor</th>
<th>No workshop</th>
<th>Workshop</th>
</tr>
</thead>
<tbody>
<tr>
<td>high</td>
<td>92.85</td>
<td>37.73</td>
</tr>
<tr>
<td></td>
<td>(53.72)</td>
<td>(27.34)</td>
</tr>
<tr>
<td>low</td>
<td>19.17</td>
<td>13.20</td>
</tr>
<tr>
<td></td>
<td>(25.89)</td>
<td>(12.67)</td>
</tr>
</tbody>
</table>
Table 9: 20% Trimmed Mean and Standard Deviations for Estimated Productivity by Anchor and De-biasing Workshop
<table>
<thead>
<tr>
<th>Anchor</th>
<th>No workshop</th>
<th>Workshop</th>
</tr>
</thead>
<tbody>
<tr>
<td>high</td>
<td>86.44</td>
<td>31.42</td>
</tr>
<tr>
<td></td>
<td>(58.51)</td>
<td>(22.20)</td>
</tr>
<tr>
<td>low</td>
<td>13.08</td>
<td>10.18</td>
</tr>
<tr>
<td></td>
<td>(14.05)</td>
<td>(10.26)</td>
</tr>
</tbody>
</table>
Table 11: Robust Two-way ANOVA for Estimated Productivity by Anchor and De-biasing Workshop
<table>
<thead>
<tr>
<th>Factor</th>
<th>F</th>
<th>p</th>
</tr>
</thead>
<tbody>
<tr>
<td>Anchor</td>
<td>192.5</td>
<td>< 0.001</td>
</tr>
<tr>
<td>Workshop</td>
<td>72.2</td>
<td>< 0.001</td>
</tr>
<tr>
<td>Anchor:Workshop</td>
<td>58.4</td>
<td>< 0.001</td>
</tr>
</tbody>
</table>
To summarise, we have strong evidence of both the anchor effect and a mitigating effect from the de-biasing workshop. In terms of effect size, this can be expressed simply as the trimmed mean difference of 26 LOC per hour between an estimate with and without de-biasing (substantial, but less than the Anchor effect). If we want to standardise, we can compute a robust version of Cohen's \( d \) using a pooled trimmed standard deviation, giving \( d \sim 0.72 \), which suggests a medium to large effect (0.5–0.8) [7]. Alternatively, the impact of de-biasing can be assessed by considering the reduction in the spread of estimates (since the anchors will have less of a distorting effect as a consequence of the de-biasing). We find that the standard deviation of the de-biased estimates is reduced about threefold, again supporting the impact of our de-biasing workshops.
5 DISCUSSION AND CONCLUSIONS
In this study we have addressed the real world problem of how biases, specifically the anchoring bias, influence software professionals making estimates and then how they might be mitigated. To do this we have conducted a series of experiments across seven countries with 410 participants. We believe this study is important because despite the emphasis on formal prediction systems, project cost decisions are ultimately made by humans, and these judgments are infrequent, but of high value. Therefore they cannot be conceived of as purely technical problems.
Our experiments yield four main findings.
(1) The effect of anchors on software professionals performing estimation tasks, in line with previous studies, such as [23], is very strong.
(2) The de-biasing workshop significantly reduces — but does not eliminate — this bias.
(3) The workshop also substantially reduces the variability in the estimates of professionals approximately threefold.
(4) The workshop has a greater impact for the high rather than low anchor (although given the meaninglessness of a negative estimate, low estimates could only change in one direction).
However, there are some limitations to this work. First, we have only considered one type of bias and a relatively simple de-biasing intervention based on a 2-3 hour workshop. There are many other cognitive biases and judgement fallacies, at least some of which could be relevant to software engineering. Another limitation is that we don’t know how long the de-biasing effect will last, but it is quite possible it is only transient. Therefore follow up work might be useful.
Nevertheless, this study has practical significance. It shows how professionals can be easily misled into making highly distorted judgements. This matters in that despite all our tools and automation, software engineering remains a profession that requires judgement and flair. Fortunately, we show, that it is possible to reduce, although not eliminate, these deleterious effects. There may well also be considerable scope for refining and improving de-biasing interventions.
ACKNOWLEDGEMENTS
This work was funded by EPSRC Grants EP/I038225/1 and EP/1037881/1. We are also grateful to the participants of the experiments.
REFERENCES
[31] G. Oliver, G. Oliver, and R. Body. 2017. BET 2: Poor evidence on whether teaching cognitive debiasing, or cognitive forcing strategies, lead to a reduction in errors attributable to cognition in emergency medicine students or doctors. *Emergency Medicine Journal* 34, 8 (2017), 553–554.
Apache e-Commerce Solutions
for ApacheCon 2000, Florida
Prepared by:
Mark J Cox
Geoff Thorpe
Revision 1
20th January 2000
www.awe.com/mark/apcon2000
www.geoffthorpe.net/geoff/apcon2000
ABSTRACT
This paper discusses the deployment of secure web servers, using them as proxies to back-end systems, load balancing SSL, and other issues of performance and reliability for large-scale systems. We investigate the impact of secure transactions and explore innovative approaches to load sharing in multi-server environments. This includes distributing session caches and CPU-intensive operations across machines (and to dedicated hardware), and the optimisation of systems that incorporate cryptographic hardware acceleration and key management.
AUTHOR BIOGRAPHIES
Mark Cox is Managing Director of C2Net Europe in England. He has developed free and open-source software products for more than 9 years; he is a founding member of both the OpenSSL group and the Mozilla Crypto Group, a core Apache developer since 1995, and the editor of Apache Week.
mark@awe.com
Geoff Thorpe is Senior Cryptographic Software Engineer for C2Net Europe and has extensive experience with cryptography and SSL, particularly OpenSSL and Cryptlib. He’s mostly fascinated by crypto, networking, and saturating systems with a combination of both.
geoff@geoffthorpe.net
BACKGROUND
The aim of this paper is to look at the effects of doing secure transactions with your web server; specifically how it affects the performance and layout of your systems. We take a look at some of the standard (http) solutions in terms of machine layouts and configurations being used by high-volume sites and see if they can scale up to handling secure (https) transactions. Although some solutions to this exist in the hardware space, we look at how you may be able to do the same thing using Apache.
Recent publicity has focussed on the issues of keeping your private keys in software, so we take a look at whom this affects and if there are other ways of making your systems secure. We try to find out if hardware-based cryptography devices are a cost-effective solution and give you help on how to make sense of manufacturer claims.
Since the majority of browsers and servers support the SSL and TLS security protocols we will focus on them in particular. Since this is an Apache conference, we are not going to consider other servers or go into too much vendor-specific detail, although a good deal of this material generalises to other servers (and to other security-related services also).
SSL IN APACHE
So what is the difference between a secure and a non-secure connection? When you access a site using the prefix "https" you are attempting to establish a connection using the SSL or TLS protocol. Once that connection is established, the requests and responses between browser and server are encrypted end-to-end. The two stages of this connection are: the handshake (authentication and key agreement), and the tunnelling (passing the encrypted requests and responses back and forth).
SSL Cryptography
Once the browser and the server have completed the handshake, they can encrypt the rest of their communication using a standard encryption algorithm such as DES or RC4. These algorithms are known as ciphers and this technique is called symmetric cryptography. Symmetric cryptography uses a common key for both encrypting and decrypting data, and this cryptography is generally very fast. But as the browser and server may not have interacted before, they need some way of establishing that common key. This is achieved in the handshake by key-exchange techniques that are based on asymmetric encryption (public/private key cryptography). This is commonly based on the RSA algorithm and typically uses a 1024-bit RSA key-pair. As the following table illustrates (showing speed tests of 1024-bit RSA signing on various platforms), public key cryptography can be very CPU-intensive.
Table 1 - RSA sign speeds
<table>
<thead>
<tr>
<th>Machine</th>
<th>Operating System</th>
<th>Signs/second</th>
</tr>
</thead>
<tbody>
<tr>
<td>Athlon 600MHz</td>
<td>Linux</td>
<td>100</td>
</tr>
<tr>
<td>Intel PIII 450MHz</td>
<td>Linux</td>
<td>73</td>
</tr>
<tr>
<td>Sparc Ultra 5</td>
<td>Solaris 7</td>
<td>27</td>
</tr>
<tr>
<td>IBM RS6000 43P/140 330MHz</td>
<td>AIX 4.3</td>
<td>27</td>
</tr>
<tr>
<td>SGI Indy</td>
<td>IRIX 6.4</td>
<td>13</td>
</tr>
</tbody>
</table>
The figures in Table 1 were obtained using all of the available CPU resources on an idle machine and, where available, using assembly-optimised versions of the RSA algorithms.
1 https requests are always server-authenticated and are optionally client-authenticated also.
2 The SSL/TLS protocols permit either end of the connection to force a renegotiation (new handshake) at any time. Most servers and browsers do not do this (and it would not alter significantly the points being discussed) so we shall not address this further.
3 Alternatives to RSA exist such as DSA-based certificates and keys, but these are not so widely supported and anyway impose similar computational demands.
So in the worst-case scenario, where each connection to the server requires a new signing operation, this would effectively limit our Sparc machine to handling less than 27 connections a second! Even that assumes that the machine is doing very little other processing to answer the browser’s https request besides performing the RSA sign operations.
In practice, the HTTP 1.1 protocol provides the ability to keep a connection open and make multiple requests with it (one after another, but not concurrently). Unfortunately this “keep alive” functionality is not always enabled at the server end. Additionally, we will often find that browsers will open a number of simultaneous connections to the web server for downloading things such as inline images to avoid the latencies of performing each request one after the other.
So in trying to assess the work required for the server to provide SSL support, we must examine which cryptographic operations generate significant overhead, and how often they are required. Aside from the RSA sign operations, is the encryption of the request and response data significant once the initial SSL transaction (handshake) has been completed? It turns out that this symmetric encryption has very low overhead for most machines, and the extra effort over sending data unencrypted is relatively negligible. Table 2 illustrates the volume of data various idle machines can encrypt per second using two common SSL ciphers.
<table>
<thead>
<tr>
<th>Machine</th>
<th>RC4 (Mbps)</th>
<th>3DES (Mbps)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Athlon 600MHz (Linux)</td>
<td>541</td>
<td>40</td>
</tr>
<tr>
<td>Intel PIII 450MHz (Linux)</td>
<td>408</td>
<td>30</td>
</tr>
<tr>
<td>IBM RS6000 43P/140 330MHz</td>
<td>192</td>
<td>17</td>
</tr>
<tr>
<td>Sparc Ultra 5 (Solaris 7)</td>
<td>176</td>
<td>14</td>
</tr>
<tr>
<td>SGI Indy (IRIX 6)</td>
<td>63</td>
<td>5</td>
</tr>
</tbody>
</table>
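To see why the handshake, rather than the bulk cipher, dominates, a back-of-envelope comparison using the Sparc Ultra 5 figures from Tables 1 and 2 (the 100 KB page size is an illustrative assumption):

```python
# Sparc Ultra 5 figures: ~27 RSA 1024-bit signs/sec (Table 1) and
# ~176 Mbit/sec of RC4 throughput (Table 2).
SIGNS_PER_SEC = 27
RC4_MBIT_PER_SEC = 176

handshake_ms = 1000.0 / SIGNS_PER_SEC          # CPU time per full handshake
page_bits = 100 * 1024 * 8                     # a hypothetical 100 KB page
encrypt_ms = page_bits / (RC4_MBIT_PER_SEC * 1e6) * 1000.0

print(round(handshake_ms, 1))  # -> 37.0 ms for the RSA sign alone
print(round(encrypt_ms, 1))    # -> 4.7 ms to encrypt the whole page
```

Even for a sizeable page, the symmetric encryption costs a small fraction of the public key operation, which is why session resumption (below) matters so much.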
Where are your keys?
In order to perform the SSL private key operations, the web server needs to have a copy of the server’s private key in its memory whilst it is processing SSL transactions. In early 1999, nCipher found a very efficient way to scan large amounts of data looking for private keys.
Some operating systems in certain configurations allow applications running on the system as the same user to read each other’s memory. As CGI programs normally run in this way, anyone who has access to place their own CGI programs on a web server could potentially scan the web server memory to find the private keys of any secure sites on that server. nCipher demonstrated a CGI program that would do exactly this, returning the private key of any secure virtual host running on that web server.
The solution that they gave to this problem was to secure your keys in an external hardware device, specifically in a hardware accelerator. Then, not only does the accelerator perform the key operations, but it also has the only copy of the key. The web server does not need to supply the keys to the accelerator and indeed the usual technique is to generate keys inside the device and never export them. The web server then has to hand off all key operations to the hardware accelerator, and so each web server will need to have its own hardware accelerator attached.
The attack nCipher demonstrated only works on a few operating systems that allow other processes to read each other’s memory space, and even on those systems it is simple to defeat the attack. Apache just needs to be started as the root user and allowed to drop to a non-root user in order to serve pages (which is actually the default configuration). Even if Apache is started as a non-root user, you can only be affected if you allow people to write their own CGI programs and run them on the same web server that is being used for secure transactions. Unfortunately this is not quite the picture painted by recent press releases covering the issue.
SSL Session Caching
In order to speed up the processing of requests, the SSL protocol specifically allows a client to ask the server to resume a previously negotiated session when opening a new connection. So once the client and server have performed the public key operations and negotiated a session key (i.e.
---
4 Or the keep-alive functionality is configured for low time-outs to keep the number of open connections and processes under control.
completed the handshake), then in theory a new connection can be made at a future time without having to perform the intensive public key operations again.
Each session must have a timeout period associated with it (for security reasons), so the server administrator can force a new session to be established at least every day or every hour for example.
This means that the server must have a way of remembering the sessions if it is to make use of this feature; a session cache. With Apache 1.3 we have the complication that we have a number of independently forked children running on the server which have no common store. Each request that comes into Apache could potentially end up talking to a different child process. In SSL, session resuming is initiated when the client indicates to the server the session it wishes to resume. Even if the client has negotiated sessions with each child process, it will only be able to resume an SSL session if it correctly guesses which child process it has connected to! With heavily loaded sites having tens or hundreds of children this isn’t useful, and a session cache shared between all Apache child processes is required.
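The guessing problem above can be quantified with a toy model. Assuming (purely for illustration) that the OS hands each new connection to a uniformly random child, a resume succeeds only about 1/N of the time with N children:

```python
import random

def resume_rate(num_children, trials=100_000, seed=42):
    """Fraction of reconnects that happen to land on the child holding the
    session, under an idealised model where the OS assigns connections to
    children uniformly at random (not how any particular kernel schedules
    accepts, but close enough to show the scale of the problem)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        negotiated_with = rng.randrange(num_children)
        reconnected_to = rng.randrange(num_children)
        hits += (negotiated_with == reconnected_to)
    return hits / trials

# With 50 children the expected resume rate is only about 1/50 = 2%.
rate = resume_rate(50)
```

So with per-child caches almost every reconnect pays for a full handshake again, which is exactly why a cache shared between all children is required.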
Although this is a fairly simple concept, it has taken a while for secure Apache solutions to get it right. Apache-SSL started with a session cache running as an external process: each child would establish a connection to this process, which held an in-memory cache of sessions.
But this solution turned out to be fairly unwieldy for server administrators, as the web server would have to have its external process restarted in-sync with the web server. Also, problems would occur if the process was killed (or crashed) because the web server would continue to run and make attempts to serve requests that would always fail.
mod_ssl had its own set of solutions. It started out with the Apache-SSL approach then moved to a session cache that was implemented using a lightweight database (DBM) instead of an external process. In the last few months mod_ssl moved to supporting a similar method to Stronghold, using a shared memory cache.
### Table 3 - Caching techniques used on common SSL solutions
<table>
<thead>
<tr>
<th>Server</th>
<th>Cache</th>
</tr>
</thead>
<tbody>
<tr>
<td>Apache + mod_ssl (includes Raven, Redhat)</td>
<td>Shared memory (recommended) or DBM file</td>
</tr>
<tr>
<td>Stronghold</td>
<td>Shared memory (recommended) or file based</td>
</tr>
<tr>
<td>Apache-SSL</td>
<td>Cache server process (TCP/IP or Unix domain socket)</td>
</tr>
</tbody>
</table>
With the new session cache comes the responsibility to configure it correctly. The more sessions that need to be operating concurrently and the longer the session expiry times need to be, the larger the block of memory the session cache will need. Also important is to examine how speed-critical the session cache becomes as a result. All reading and writing in the session cache must be synchronised to avoid data corruption, so if the cache is big and only one Apache child process can communicate with it at a time, then it may itself become the limiting factor to performance rather than solving it. For example, a heavily loaded server running many intensive SSL key operations would correspondingly cause all session cache operations to slow down, which increases the likelihood that child processes will be queuing up for access to it.
As we will see with key operations as well, there is a disadvantage to running an SSL session cache on the same system as a web server. If the cache is too large, the synchronisation of access to it may become the bottleneck, or it may just consume too many shared resources. If the cache is not large enough, then it will soon fill up with entries before they are ready to expire. When the session cache becomes full, the choice is to either honour the session timeouts (in which case new SSL sessions will not be cached at all until entries in the cache begin to expire) or to prematurely expire older sessions. Unfortunately the majority of SSL Apache servers in operation do the former, but neither is particularly ideal.
---
5 Apache-SSL from Ben Laurie, http://www.apache-ssl.org/
6 mod_ssl from Ralf Engelschall, http://www.modssl.org/
If it was possible to run an SSL session cache as a dedicated service on another system, then it could be scaled and resourced independently of the web server.
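The full-cache dilemma can be sketched as a tiny fixed-size cache. The class below and both eviction policies are illustrative only, not Apache’s or mod_ssl’s actual implementation; the injectable clock simply makes expiry easy to demonstrate:

```python
import time

class SessionCache:
    """Fixed-size SSL session cache sketch.

    evict_oldest=False models the common behaviour: once full, new
    sessions are simply not cached until an entry expires.
    evict_oldest=True models prematurely expiring the oldest session.
    """
    def __init__(self, capacity, timeout, evict_oldest=False, clock=time.time):
        self.capacity = capacity
        self.timeout = timeout
        self.evict_oldest = evict_oldest
        self.clock = clock
        self._store = {}   # session_id -> insertion time

    def _purge(self):
        now = self.clock()
        for k in [k for k, t in self._store.items() if now - t > self.timeout]:
            del self._store[k]

    def put(self, session_id):
        self._purge()
        if len(self._store) >= self.capacity:
            if not self.evict_oldest:
                return False        # full: honour timeouts, drop new session
            oldest = min(self._store, key=self._store.get)
            del self._store[oldest]
        self._store[session_id] = self.clock()
        return True

    def resume(self, session_id):
        self._purge()
        return session_id in self._store
```

With `evict_oldest=False`, a burst of traffic that fills the cache means every subsequent new session is uncacheable until older entries time out; with `evict_oldest=True`, long-running clients lose their sessions early. Neither is ideal, as noted above.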
**SPEEDING UP THE CRYPTO**
Once these software problems are solved and we have a fast working session cache and browsers that support session resuming, it no longer matters as much whether we are using keep-alive connections. The limiting factor now becomes how many new secure requests we can serve per second, giving us an idea of the delay in establishing a new secure connection (and the maximum load level the server can sustain without growing steadily slower\(^7\)).
One advantage that we have over regular http is that secure addresses are rarely advertised, and so are less likely to get a rush of new secure connections over a very short timescale (a load spike)\(^8\). But in the worst case, if you have 200 new connections to your site and you can only handle 50 signs a second, you’ve got potential customers waiting at least 4 seconds for that new connection. A report from Zona Research suggests that customers are unwilling to wait more than eight seconds to place their order before giving up (and this has to include the time to do all the processing of the request, not just the SSL part). What solutions to this sort of problem exist?
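The 4-second figure is simple queueing arithmetic; a sketch, including the sustainability condition from footnote 7:

```python
def backlog_wait_seconds(burst_connections, signs_per_second):
    """Worst-case wait for the last customer in a burst, assuming each new
    connection costs one RSA sign and the server clears the queue in order."""
    return burst_connections / signs_per_second

def is_sustainable(arrival_rate, signs_per_second):
    """Footnote 7 in arithmetic form: the backlog only stays bounded while
    connections arrive no faster than the server can sign."""
    return arrival_rate <= signs_per_second
```

So a burst of 200 new connections against a 50 signs/second server leaves the last customer waiting 4 seconds before the SSL handshake even completes, and a sustained 40 connections/second against a 20 signs/second server never catches up.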
**Higher performance machines**
You could buy a faster machine - one that can handle more connections per second. We’ve already shown that an inexpensive Athlon processor is three times faster at RSA sign operations than the more expensive Sparc Ultra 5 machine. Indeed a whole new range of processors are being developed with engineering in them specifically for accelerating cryptographic operations such as key signing. The Ultrasparc III processor will contain instructions that make public-key cryptography operations more efficient. Intel have also announced that their new Itanium 64-bit processor can perform SSL operations 10 times faster than their fastest current chip, the 32-bit Xeon.
**The Crypto Accelerator approach**
The alternative to dealing with the crypto operations on the processor is to use some sort of co-processor that can take all that annoying maths out of the hands of your web server. This is the approach that many hardware companies who make cryptographic accelerators would like you to take.
A number of cryptographic accelerators are currently available with major players including Rainbow, DEC and nCipher. You can buy an accelerator board that gives you a specified level of performance for handling the time-consuming RSA operations. When the web server gets a new connection it passes the maths off to the board, which performs the calculation, and sends it back – hopefully in a fraction of the time it would have taken the web server to do the same task.
Once the initial SSL session is established, the accelerator board plays no part in further connections with the same SSL session. Although some of the boards can do symmetric encryption, the overheads of actually sending the information to the hardware to be encrypted or decrypted is often higher (both in terms of latency and system resources) than just performing the operations directly in the CPU.
The leading cryptographic accelerators are usually in the form of add-on hardware, typically connected by PCI or SCSI, and normally allowing multiple units to be chained to one machine to scale performance (and therefore capacity).
---
\(^7\) If a server can sustain 20 new connections a second, but is receiving 40 each second, then it will be receiving connections faster than it is able to deal with them. The system will then eventually grind to a halt (or to the point that the delays become unacceptable to the users and the site begins to receive less than 20 requests a second!).
\(^8\) E.g. when an online shopping advert appears during the middle of a prime-time show, their site’s homepage may receive a burst of requests, but the number of browsers actually following through to the secure purchasing section will be less dramatic.
What price/performance should you expect from a board and how can you measure it? These companies usually quote the number of 1024-bit RSA signs per second that the board can sustain so they can be easily compared to other accelerators or our software-only figures. Obviously there will be some latencies introduced because the web server child process has to talk to the board via some software and hardware layers and wait for a reply. However, this “blocking” places no demands on the CPU which will be free to attend to other things, so the increase in latency does not directly affect throughput (but may require more running child processes as a result of the time they spend blocking).
We have investigated the prices and offerings of the major hardware crypto manufacturers, as identified in a report done by ICSA Information Security Magazine. We have removed vendor and model names and just listed statistics from a representative list of products available from some major suppliers that can perform RSA key operations and key-management. These units can offer different security features, which may explain the price variations.
### Table 4 - Price/performance of selected hardware crypto units
<table>
<thead>
<tr>
<th>Machine</th>
<th>Approximate cost (US$)</th>
<th>Signs per second</th>
<th>US$ per sign/sec</th>
</tr>
</thead>
<tbody>
<tr>
<td>Crypto Unit A</td>
<td>12000</td>
<td>300</td>
<td>40</td>
</tr>
<tr>
<td>Crypto Unit B</td>
<td>5000</td>
<td>75</td>
<td>67</td>
</tr>
<tr>
<td>Crypto Unit C</td>
<td>5000</td>
<td>50</td>
<td>100</td>
</tr>
<tr>
<td>Crypto Unit D</td>
<td>2000</td>
<td>15</td>
<td>133</td>
</tr>
<tr>
<td>Crypto Unit E</td>
<td>12500</td>
<td>50</td>
<td>250</td>
</tr>
</tbody>
</table>
We can compare this to the costs and performance of doing the same crypto work on some commonly available systems.
### Table 5 – Price/performance of selected systems
<table>
<thead>
<tr>
<th>Machine</th>
<th>Approximate cost (US$)</th>
<th>Signs per second</th>
<th>US$ per sign/sec</th>
</tr>
</thead>
<tbody>
<tr>
<td>Athlon 600MHz (Linux)</td>
<td>1200</td>
<td>100</td>
<td>12</td>
</tr>
<tr>
<td>Intel PIII 450MHz (Linux)</td>
<td>1000</td>
<td>73</td>
<td>14</td>
</tr>
<tr>
<td>Sparc Ultra 5 (Solaris 7)</td>
<td>3000</td>
<td>27</td>
<td>111</td>
</tr>
<tr>
<td>IBM RS6000 43P/140 330MHz</td>
<td>7000</td>
<td>27</td>
<td>259</td>
</tr>
</tbody>
</table>
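The price/performance column in Tables 4 and 5 is just cost divided by signing throughput; a small sketch reproducing three of the rows above:

```python
def dollars_per_sign_sec(cost_usd, signs_per_sec):
    """Price/performance column of Tables 4 and 5, rounded to whole US$."""
    return round(cost_usd / signs_per_sec)

# Rows taken directly from the tables above:
unit_a = dollars_per_sign_sec(12000, 300)   # Crypto Unit A
athlon = dollars_per_sign_sec(1200, 100)    # Athlon 600MHz (Linux)
sparc = dollars_per_sign_sec(3000, 27)      # Sparc Ultra 5 (Solaris 7)
```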
We can see that if we had three Athlon machines and some way in software of spreading the load between them, then we could produce the same signing throughput as the high-end hardware crypto unit examined, for a third of the cost. We would also have an architecture that is more easily upgraded and configured, with greater redundancy, and would obviate the need for a controlling system to manage the hardware crypto unit. The software to do this does not currently exist, so at present each machine would have to handle SSL connections itself, with a load balancer installed in front of them.
Another problem with deploying hardware crypto accelerators is that running more than one web server requires an accelerator board for each one. If, for redundancy considerations, you have a large number of web server machines, then the additional cost of accelerator boards can soon mount up. This is because the current crypto accelerators are designed for a one-to-one or one-to-many situation (one web server connected to one or more accelerator boards).
Additionally, many sites use hardware crypto units primarily for key-management rather than acceleration. This is the term used when private keys are generated inside tamper-proof units that never export their keys. This makes the keys invisible even to administrators who have access to start and stop the web-server (and who would normally need to know any pass-phrases required for the web server to decrypt private keys). If you have 10 web-servers to handle the loading of database or web-application logic but your SSL requirements are small, you would still need to deploy 10 different hardware crypto units if you wish to use hardware key management. Most of those 10 hardware crypto units would be severely under-utilised.
---
9 Prices were based on price lists and specifications available to us at January 2000.
10 Prices from UK suppliers, January 2000, and includes low-spec monitors, software and accessories.
**REMOVING THE BOTTLENECK**
There are a number of solutions available for load balancing normal HTTP requests, both software and hardware based. The normal operation of a load balancer is to handle an incoming request, pick one from a number of back-end servers, and pass the request to it. Because HTTP is a stateless protocol this works well in practice, as each connection (requesting each page, image, etc.) can be sent to a different back-end server without any problem. However with SSL we don’t want each request being sent to a different back-end server, because each new server reached will require a new SSL session to be negotiated – something we would like to avoid. (This problem can also become a noticeable performance hit in the user’s browser too.)
Some load balancers try to ensure that the same client always gets routed to the same back-end server, perhaps by looking at the incoming IP address. However this isn’t always possible and defeats the principle of load balancing (think of the number of people that come from behind a corporate proxy server all presenting the same IP address).
If you require hardware accelerators, and have multiple back-end web servers handling the SSL connections, then you probably need a hardware accelerator attached to each box. This will mean the hardware accelerator companies will like you, but it will be pretty expensive (and you could be buying a lot more “signs-per-second” than you are actually using).
Since load balancing of normal HTTP connections is well understood, we can look at common ways this is handled and see how they scale to balancing SSL for HTTPS connections.
Round-Robin DNS
The simplest form of load balancing is round-robin DNS. This is where a host name lookup will be converted to one of several possible IP addresses via a DNS server that reorders its list after each lookup. As more server capacity is required, administrators can add more IP addresses to the rotation. This method isn’t ideal even for standard HTTP requests because it takes no account of the load on the back-end servers or their operating status. It can also be a problem when a large number of users come from a network that has a proxy and caching DNS server (think of all users within AOL attempting to use the same web server address). In essence, the problem is that the “load-balancing” is being managed outside the server environment rather than inside it. Once a browser has obtained an IP address, it will stick with that one and if many others have obtained the same address (e.g. if a large ISP has cached the DNS lookup) then that IP address will be buried in requests when the others are comparatively idle.
Using SSL with round-robin DNS will at least work, and does have the positive of it being likely that the next request from the same browser will hit the same server allowing more SSL session resuming.
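The rotation behaviour can be modelled in a few lines (the addresses are placeholders). Note how the scheme breaks down once a caching resolver pins the first answer: every client behind it then hits the same server.

```python
from collections import deque

class RoundRobinDNS:
    """Toy model of a DNS server that rotates its answer list per lookup."""
    def __init__(self, addresses):
        self._addrs = deque(addresses)

    def lookup(self):
        answer = list(self._addrs)    # full record set, preferred address first
        self._addrs.rotate(-1)        # next lookup leads with the next address
        return answer

dns = RoundRobinDNS(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first = dns.lookup()[0]
second = dns.lookup()[0]
third = dns.lookup()[0]
fourth = dns.lookup()[0]   # rotation wraps back to the first address
```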
Hardware Traffic Switching
A number of manufacturers produce load-balancing hardware such as the Cisco LocalDirector product that intelligently load-balances traffic across multiple servers. The web servers can then be changed or taken out of service at will. Also, the hardware can monitor the back-end servers and use this information to determine where each request should be sent.
Once again however, with SSL we would then have the problem where a different back-end secure server could be used for each connection (each image on a page for example), requiring a new session to be established with each one. So the overheads go up, the performance goes down, and more servers are required to sustain the load.
The Cisco product seems fairly unique as they have come up with the ability for SSL requests to become “sticky”. The hardware looks at the SSL requests and keeps a track of which back-end servers have handled a particular SSL session. It will then try to route new connections to the same back-end server that originally established the session. However, the hardware still needs to
---
11 This is certainly true if the security policy is to use hardware key-management, but for acceleration-only purposes, having accelerators connected to some machines and not others would require either very complicated front-end load balancing, or some machines being more heavily utilised than others.
take account of spreading load and dealing with any broken or out-of-service web servers and so new sessions may have to be established with other back-end servers over time.
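The “sticky” routing described here can be sketched as a table from SSL session ID to back-end server, with a least-loaded fallback when the remembered server is unavailable. The class and server names are hypothetical illustrations, not the LocalDirector’s actual mechanism:

```python
class StickySSLBalancer:
    """Route connections presenting a known SSL session ID back to the
    server that negotiated it; otherwise pick the least-loaded server."""
    def __init__(self, servers):
        self.servers = set(servers)
        self.load = {s: 0 for s in servers}
        self.session_map = {}         # session_id -> back-end server

    def route(self, session_id=None):
        server = self.session_map.get(session_id)
        if server is None or server not in self.servers:
            # New session, or the remembered server is out of service:
            # fall back to plain load balancing and remember the choice.
            server = min(self.servers, key=lambda s: self.load[s])
            if session_id is not None:
                self.session_map[session_id] = server
        self.load[server] += 1
        return server

    def mark_down(self, server):
        self.servers.discard(server)  # future connections re-route elsewhere
```

The failover path shows the limitation noted above: when a back-end dies, its sessions land on a different server and must be renegotiated from scratch.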
Software Gateway (Apache Mirror Proxy)
A popular way of load balancing normal HTTP connections is to use the Apache ProxyPass facility to emulate a hardware load balancer. Here a single front-end machine handles all web connections, but it can seamlessly pass on any time-consuming requests (such as working with databases) to back-end servers. You may for example decide that all html and jpeg files are static and so can be served quickly by the local machine, whilst requests for CGI processing get handled by a back-end Unix machine running another copy of Apache. This flexibility can even map different server architectures into the same machine URL space, for example having ASP (Active Server Pages) files mirror-proxied to a back-end Microsoft NT box running IIS.
Of course since a single box is handling all the connections, the idea is just to make the back-end machines do the time consuming tasks, or tasks that the server hasn’t the resources to perform (such as communicating with databases or running ASP scripts).
Over the years we have seen many administrators, who were stuck using Microsoft and Netscape servers with export-crippled security, put a copy of a full-strength SSL Apache-based server onto a spare machine to act as an HTTPS-to-HTTP gateway.
Now because simple static pages and images are very quick to serve, the SSL gateway can be configured to serve these pages itself and just pass on other requests. With all cryptography now handled by a single machine, one or more cryptographic accelerator boards could be attached to it to cope with the necessary load. Using this mirror proxy approach works well for balancing SSL requests; you end up with a single machine that can take in SSL requests, establish the sessions as required, and let back-end machines actually perform the other web operations. It is this theme of “componentisation” that we are converging towards, where the resources required to service the CGI or ASP requests can be assessed independently of the resources required to deal with the expected SSL overheads.
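A minimal sketch of such a gateway configuration (ProxyPass and ProxyPassReverse are real mod_proxy directives, but the host names and paths here are purely illustrative):

```apache
# Front-end SSL gateway: serve static pages and images locally...
DocumentRoot /www/static

# ...and hand CGI and application requests to a plain-HTTP back-end box.
ProxyPass        /cgi-bin/  http://app-server.internal/cgi-bin/
ProxyPassReverse /cgi-bin/  http://app-server.internal/cgi-bin/
```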
However this approach presents a risk-management issue and also doesn’t scale well enough for very large sites:
1. There will be a limit to the number of cryptographic accelerators you can add to a single machine.
2. The internal processing and system-level considerations of that one machine will eventually become a bottleneck.
3. The architecture becomes critically dependent on this one gateway - there is no redundancy.
Hardware Gateway
At least one manufacturer has taken the idea of a software gateway one step further by producing a stand-alone hardware solution. The Intel ‘iPivot’ is a self-contained unit that can be configured in a similar way to the software gateway and includes some crypto acceleration. It normally doesn’t have the ability to serve pages locally, and is less easy to upgrade than a standard Apache proxying system. Using a software gateway solution also has a scalability advantage in that you can just switch to a faster processor or add more accelerators as and when required whereas hardware gateways are a little less upgradable when resourcing requirements can change rapidly. Software gateways are also more customisable; allowing client certificate rules to be set up and maintained in-sync with the web-server configurations, and are able to pass on headers to the back-end servers with details such as the encryption algorithms used or details of client certificates.
**SOLVING THE PROBLEM**
To summarise, having examined the current solutions to SSL-enabling Apache, we have identified the following as major limitations:
a) Session caching needs to be more flexible and not localised to each web server.
b) Resourcing of key operations (such as RSA and DSA) needs to be more scalable and independent of the web server. It should be possible to provide operating resources for each separately. The systems best suited to performing key operations are often not the preferred platform for running web servers and web-applications.
c) Private keys and key operations need to be protected from web server applications. Also, it is important to distinguish between access rights to administrate the web server and access to the private keys.
d) The current architectures don’t allow the resourcing of systems to follow the demographics of a site’s traffic; certain platforms may be better suited to different tasks, e.g. you may wish to perform RSA operations on Intel platforms but run your web servers on Sparc platforms. Similarly, we may wish to increase our capacity for performing key operations even if our web serving and session caching are sufficiently resourced.
Our solution
The solution we suggest is not unlike the solution used by many web applications that require back-end databases, namely to separate out the component services and have them communicate through an internal network. As with the database analogy, the latencies involved in processing requests (negotiating a new SSL session and processing an https request) will increase slightly due to network communication, but this facilitates a net gain in system capacity, throughput, and performance as a whole.
Previously there was no latency at all in performing key and SSL cache operations within the web server, but the web servers will have to be resourced to satisfy the most demanding requirements of all three services. The solution we propose does introduce minor, but for the most part constant, latencies in the communication channels between the various services, but system throughput can be maximised and each service can be resourced to meet their own specific demands. Crypto operations perform better on certain CPU types and require little in the way of memory, storage, disk and network IO, or sophisticated operating system functionality. Web servers have virtually the opposite requirements, so in this way the solution provides more choice and flexibility. It can also be seen that heavy loading on one service does not directly impact the performance of the other services. So for example, a sharp spike of new https requests could cause the crypto and SSL cache services to become heavily loaded and slow down, but this will not affect the web servers’ ability to serve non-SSL requests at all.
We can summarise how the four problem points detailed earlier are addressed by this approach:
a) Session caching is a dedicated service that can be resourced and customised without touching the “web pool”. Also, the same browser can hit any web server at all from one request to the next and continue to resume the same SSL session, without the load-balancer having to attempt any “intelligent” routing. This also helps in the event of a web server failure.
b) Key operations can be performed on any configuration of machines, crypto accelerators, or both and all web servers will share these resources. Not only can the resourcing be maintained independently, but bursts of SSL-only traffic will not cause the web servers themselves to slow down and affect non-SSL traffic.
c) Keys are stored well away from where the web server can get to them - the web server needs only the public keys (certificates) to operate and administrative access to the web servers does not automatically grant access to the critical private keys.
d) We can choose the right number of units of hardware and software for each of these components separately, allowing us to build redundancy into the architecture.
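The division of labour in points (b) and (c) can be sketched in-process. The classes and the keyed-hash “sign” stand-in below are illustrative only; a real deployment would speak a network protocol and perform genuine RSA operations on the key servers:

```python
import hashlib
from itertools import cycle

class KeyServer:
    """Holds the private key; the only place signing ever happens."""
    def __init__(self, name, private_key_bytes):
        self.name = name
        self._key = private_key_bytes     # never leaves this object

    def sign(self, digest):
        # Stand-in for an RSA sign: a keyed hash over the digest.
        return hashlib.sha256(self._key + digest).hexdigest()

class WebServer:
    """Knows only how to reach the key-server pool, never the key itself."""
    def __init__(self, key_servers):
        self._pool = cycle(key_servers)   # trivial round-robin dispatch

    def handshake(self, handshake_digest):
        return next(self._pool).sign(handshake_digest)

# Many web servers share a small pool of key servers holding the same key.
key_pool = [KeyServer("ks1", b"secret"), KeyServer("ks2", b"secret")]
web = [WebServer(key_pool) for _ in range(3)]
sig = web[0].handshake(b"client-hello-digest")
```

Compromising a web server here yields no private key material, and the key-server pool can be grown or shrunk without touching the web tier.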
Selected examples
We have previously illustrated a number of ways in which normal site configurations find performance limitations in one form or another. Here we illustrate real-world scenarios where the above model can free us up from most of these limitations and improve scalability, performance, and cost efficiency.
(1) High web server loads, low SSL requirements, and hardware key-management required.
This example is a site that operates many web servers servicing very high loads with only a small SSL requirement (the traffic is predominantly plain HTTP requests). This site also has a policy of using hardware key-management for all private-key generation and storage. As the SSL loads are comparatively low, it is expensive to have to attach a distinct hardware crypto unit to every web server simply because we need hardware key-management. One or two dedicated hardware crypto units, attached to one or two computers, may be all the processing power that is required to service the SSL crypto and caching requirements. Indeed the SSL session cache could well reside on one of the computers controlling the hardware crypto units.
Using the approach we have outlined, these two small systems can provide the desired hardware key-management and session caching, and will be shared by all the web-servers. Any change in SSL requirements can be handled and resourced directly without altering the web server resources. Previously hardware key-management would have necessitated having a separate hardware crypto unit attached to every running web server and there would have been no sharing of SSL sessions between servers.
(2) High SSL loads requesting predominantly static web content.
This second example is a site with a high hit rate but which is serving relatively easy content (lots of static pages and images). A lot of the traffic is through SSL and comes from many different users. Traditionally this has been solved using intelligent load balancing in front of high-specification web server systems. Each web server has to maintain a large session cache due to the number of different concurrent users. Each web server also has to perform the crypto operations quickly, which requires high-specification CPUs, as well as being a good web server, which requires a good multitasking system with solid IO performance.
By distributing the crypto and caching operations, an array of cost-effective Intel-based machines could provide all the necessary crypto resourcing. These crypto boxes would need very little in the way of memory, storage, or IO throughput; their role is very much CPU-bound and they can be purchased accordingly. The web servers then need not be so numerous now that the crypto overhead has been offloaded elsewhere; they will simply need sufficient disk and network IO performance to process the predominantly static content (and will not need to possess anywhere near the collective CPU power of the machines performing the crypto). Purchasing web servers to perform all services concurrently can be very costly, as these systems must simultaneously satisfy the resourcing requirements of each service, and in sufficient numbers. For example, if the SSL load does not increase but the web serving requirements become more demanding, then more servers (of the same high specifications) will need to be deployed. By distributing these services as distinct components in the architecture, additional resourcing becomes more cost effective and more finely managed.
How do we do this?
Although most of the approaches we have discussed in this paper are readily available today, a system to implement our final solution is not. However, all of the conceptual ideas discussed (and the solution) can be implemented by extensions to existing open source software.
The authors already have a functioning prototype that distributes key operations from a number of web servers to a number of key servers. This prototype framework is a proof of concept, and it is expected that this functionality will be released as open source once completed.
Comparative Analysis of Java and AspectJ on the Basis of Various Metrics
Inderjit Singh Dhanoa
IT Department
BIS College of Engineering & Technology
Moga(Punjab),India
inderp10@yahoo.co.in
Er.Dalwinder Singh Salaria
CSE/IT Department
Lovely Professional University
Phagwara(Punjab),India
dalwinder.singh@lpu.co.in
Dr.H.S Johal
CSE/IT Department
Lovely Professional University
Phagwara(Punjab),India
johalhs2008@gmail.com
Abstract -- This paper compares the aspect-oriented approach, using AspectJ, with the object-oriented approach, using Java, in a distributed environment and discusses the need to introduce aspects in Java RMI systems. The two approaches are compared empirically, in terms of various metrics, using an RMI auction system built in the Eclipse framework. We developed the RMI auction system independently in AspectJ and in Java within Eclipse. Certain properties of a Java RMI system, such as tracing, exception handling, distribution and profiling, cannot be encapsulated properly using object-oriented programming; they lead to code tangling and code scattering, and it is therefore difficult to modularize them into separate functional modules. These properties are known as crosscutting concerns, which can be encapsulated into aspects using aspect-oriented programming. Here, the AspectJ language is used to encapsulate the distribution crosscutting concern of the RMI auction system in Eclipse. We show the comparison of AspectJ and Java through the RMI auction system using the Eclipse platform's Aspect Visualiser and the Metrics 1.3.6 plug-in.
Keywords-- Crosscutting Concerns; Eclipse; Java RMI; UMRT
I. INTRODUCTION
Most distributed systems are designed using Java RMI (Remote Method Invocation) [2] middleware technology and object-oriented programming concepts, which are sufficient to encapsulate the functional concerns of an application, such as setCurrentBid, setMinBid, getCurrentBid and setProduct in our distributed auction system. Despite its many advantages, object-oriented programming has proven to be encumbered with hindrances and shortcomings such as repetitive code, disorderly code, low productivity, low reusability, and difficulty in designing and maintaining the system [3]. The object-oriented approach cannot efficiently solve the problems that arise from crosscutting concerns such as distribution, tracing, exception handling and logging; it forces the implementation of those design decisions to be scattered throughout the code, resulting in tangled code that is excessively difficult to develop and maintain. These flaws are addressed by aspect-oriented programming, which encapsulates non-functional or crosscutting concerns [4] such as distribution in Java RMI systems. Our experience with this system has shown that the AspectJ language helps to modularize the crosscutting concerns and improves the UMRT (Understandability, Maintainability, Reusability and Testability) attributes of the RMI auction system. Our results were obtained by analysing the Java and AspectJ implementations of the RMI auction system with the Metrics 1.3.6 plug-in and the Aspect Visualiser of the Eclipse platform.
II. RELATED WORK
To our knowledge, there is no previous metrics-based empirical work that compares AspectJ and Java implementations in a distributed environment using Eclipse plug-ins. A number of publications report possible applications of aspect-oriented programming, but none discusses experimental results on enhancing the UMRT attributes of a distributed system. Fazal-e-Amin et al. [5] reviewed research in the aspect-oriented field and highlighted its application domains along with results, opportunities and challenges; their results revealed that the major benefits of using aspect orientation as a product-line technology are enhanced modularity (21%) and better variability management (32%). P. Greenwood et al. [6] compared the design stability of an aspect-oriented implementation against an object-oriented (Java) implementation of Health Watcher, a system designed to monitor public-health-related complaints and notifications in Recife, Brazil. The study applied numerous common maintenance scenarios to both versions of this benchmark application; the analysis revealed that concerns modularized using aspect-oriented techniques showed superior design stability, and modifications tended to be confined to the target modules. Zhang et al. [7] verified the advantages of the aspect-oriented paradigm through a small management information system. M. Nishizawa et al. [8] presented an extension to AspectJ for distributed computing; their language construct, the remote pointcut, enables developers to write a simple aspect to modularize crosscutting concerns distributed across multiple hosts. A. Stevenson et al. [9] described an aspect-oriented approach to constructing smart proxies in Java RMI that does not change existing RMI code and allows functionality in the smart proxy to be added and removed at runtime. Soares et al. [10] reported that they could use AspectJ to improve the modularity of a program written with Java RMI.
Without AspectJ, the program must include code following the programming conventions required by Java RMI; AspectJ allows that code to be separated from the rest into a distribution aspect. Avadhesh Kumar et al. [11] found that the average change impact in an aspect-oriented system is less than in an object-oriented system, meaning aspect-oriented systems are more easily maintainable. Katharina Mehner [12] gave a short overview of the steps necessary to validate metrics used in evaluating aspect-oriented code.
In this paper, we present an experimental study comparing AspectJ and Java through an RMI auction system, and we show the effect of aspects on the system's attributes using Eclipse's Aspect Visualiser and the Metrics 1.3.6 plug-in.
III. JAVA RMI ARCHITECTURE
Java RMI is based on the distinction between object interface and implementation. It relies on the fact that a client cannot distinguish between objects implementing a remote interface if their behavior is identical. The architecture of Java RMI consists of the three layers in Fig. 1 [13]. The first layer provides a proxy object on the client and a skeleton object at the server. In current versions of Java, there is one skeleton object for the server. The proxy object is a local object on the client JVM that implements the same remote interface as the object implementation on the server. The proxy translates method invocations to remote method invocations to the server. Part of this translation uses the remote object reference for the remote object held in the remote reference layer. The Transport Layer handles client/server communication.
The proxy object may be statically generated by the rmic stub compiler or may be a dynamic proxy generated at runtime by the JVM. The rmic compiler starts with a class that implements a remote interface (one derived from java.rmi.Remote). From this, rmic generates a proxy class that implements the same remote interface; the name of this proxy class is the name of the implementation with “Stub” appended. For each method in the remote interface, rmic generates code that uses the remote object reference to invoke the same method on the object implementation at the server. At runtime, when the client imports the remote object using the RMI registry, it loads this proxy class by name. If the proxy class is successfully loaded, a proxy object is created; if not, the second method of proxy generation is used. The second method of generating a proxy object uses the dynamic proxy mechanism introduced in Java 1.3 [14]. Given a list of interfaces, the JVM can create a proxy implementing them at runtime, and method calls on the proxy are delegated to an invocation handler object provided by the developer. In Java RMI, if the JVM cannot load the rmic-generated proxy class, the client creates a dynamic proxy using the remote interface. A RemoteObjectInvocationHandler object is created as the invocation handler, which provides functionality identical to rmic-generated stubs. The stub and skeleton on the server communicate with the client's stub through the remote reference layer. The rmiregistry on the server side registers objects that are remotely available to clients.
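The dynamic-proxy mechanism described above can be sketched in plain Java without any networking. The `Auction` interface and the canned return value below are illustrative stand-ins, not taken from the paper's system:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DynamicProxyDemo {
    // Stand-in for a remote interface; a real one would extend java.rmi.Remote.
    interface Auction {
        String getCurrentBid();
    }

    public static void main(String[] args) {
        // The handler plays the role RemoteObjectInvocationHandler plays in RMI:
        // every call on the proxy is delegated here (a real handler would
        // marshal the call to the server instead of answering locally).
        InvocationHandler handler = (proxy, method, methodArgs) -> "100";

        // The JVM creates a proxy class implementing the given interfaces at runtime.
        Auction stub = (Auction) Proxy.newProxyInstance(
                Auction.class.getClassLoader(),
                new Class<?>[] { Auction.class },
                handler);

        System.out.println(stub.getCurrentBid());  // prints "100"
    }
}
```

The client code only sees the `Auction` interface, which is exactly why RMI can swap rmic-generated stubs for dynamic proxies transparently.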
IV. ASPECT ORIENTED PROGRAMMING
Aspect Oriented Programming (AOP) is a program development methodology proposed by Gregor Kiczales in "Aspect-Oriented Programming" [1], published in 1997. In AOP, the requirements of a program are termed ‘concerns’, divided into core concerns and crosscutting concerns. An example frequently used to explain the distinction is a distributed auction system: the core concerns are the main functions of the auction system, such as setting the product for auction, setting the minimum bid and setting the current bid, while other features a distributed system requires, such as logging, distribution, profiling and tracing, are crosscutting concerns. Although object-oriented programming is currently the most widely used methodology for dealing with core concerns, it comes up short in processing crosscutting concerns, all the more so for complex applications. AOP is a methodology that enables separation of crosscutting concerns and their implementation through a new module termed the ‘aspect’. Fig. 2 displays the weaving process of application code with aspects.
A. AspectJ
AspectJ, originally from Xerox PARC but now part of the Eclipse initiative supported by IBM, is currently the most widely adopted programming language supporting AOP, and it was used for the case study described in the following sections. AspectJ is built on top of the Java programming language [15,16] and provides mechanisms to modularize crosscutting concerns as explained above. In AspectJ programs, Java classes implement the core functionality, and aspects (understandable as pseudo-classes) implement crosscutting concerns in a modular fashion. In an AspectJ application, everything revolves around join points: points in the execution of a program where crosscutting concerns are woven in. According to AspectJ's terminology there are two types of crosscutting:
**Static crosscutting** describes crosscutting that influences the interfaces of the involved types and does not modify the execution behavior of the system. AspectJ provides the following two mechanisms to achieve this kind of influence:
**Introduction** introduces changes to the classes, aspects and interfaces of the system.
**Compile-time Declaration** adds compile time warnings and error messages for the case that certain occurrences of patterns are captured.
**Dynamic crosscutting** describes crosscutting that influences the execution behavior of an application.
AspectJ provides the following two language constructs to achieve this kind of influence:
**Pointcut** is a construct that selects join points and collects the context at those points based on different conditions. This construct is an aggregation of execution join points and object join points.
**Advice** declares code that is executed before, after or around the join points in the application's execution flow picked out by a pointcut, whenever the execution matches the signatures of the defined join points. In addition to these constructs, there are two execution-object pointcut designators, **this()** and **target()**, as defined in [17]. With these constructs, a Java developer can add new functionality to the system without changing any code in the core modules (classes). AspectJ retains all the benefits of Java and is therefore platform-independent. As far as compatibility is concerned, it is important to note that
- Every syntactically correct Java program is also a syntactically correct AspectJ program, and
- Every successfully compiled AspectJ program can be executed on a standard Java Virtual Machine.
After these preliminary explanations, we are now prepared to consider the RMI system in the AspectJ language in the following sections.
**B. Distribution Aspect**
The server-side distribution aspect is responsible for making the auc instance of the Auctioneer class remotely available. It also ensures that the methods of auc have serializable parameter and return types, since this is required by RMI. The **AuctionI** pointcut is defined to capture constructor-execution join points and the executing object using the **this()** designator; the context at the matched join point is collected as parameters using the **args()** pointcut. The distribution aspect code is given below:
```java
import java.rmi.Naming;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public aspect DistributionAspect
{
    public String name;

    // Introduction: make Auctioneer implement the remote interface.
    declare parents: Auctioneer implements AuctionInterface;

    // Capture Auctioneer construction, binding the new instance and the
    // (product, minimum bid) constructor arguments.
    pointcut AuctionI(Auctioneer auc, String pr, String mb):
        execution(Auctioneer.new(..)) && this(auc) && args(pr, mb);

    after(Auctioneer auc, String pr, String mb) throws RemoteException:
        AuctionI(auc, pr, mb)
    {
        try {
            // Export the new instance and register it with the RMI registry.
            UnicastRemoteObject.exportObject(auc);
            name = "//localhost:1099/" + pr;
            Naming.rebind(name, auc);
        } catch (java.net.MalformedURLException me) {
            System.out.println(me.toString());
        }
    }
}
```
The server-side aspect also has to define a remote interface containing the Auctioneer method signatures, each adding the RMI API exception (java.rmi.RemoteException).
```java
public interface AuctionInterface extends java.rmi.Remote
{
public void setCurrentBid(String bid) throws RemoteException;
public String getCurrentBid() throws RemoteException;
}
```
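For reference, a minimal core class matching that interface might look as follows. This is a hypothetical sketch: the paper does not list the Auctioneer source, so the field names and constructor body are assumptions. Note that the class does not implement AuctionInterface itself; the aspect's declare parents adds that relationship.

```java
import java.rmi.RemoteException;

// Hypothetical sketch of the core Auctioneer class the aspects advise.
public class Auctioneer {
    private final String product;
    private String currentBid;

    // The (product, minimum bid) constructor matches the (pr, mb)
    // arguments bound by the DistributionAspect pointcut.
    public Auctioneer(String product, String minBid) {
        this.product = product;
        this.currentBid = minBid;
    }

    public void setCurrentBid(String bid) throws RemoteException {
        this.currentBid = bid;
    }

    public String getCurrentBid() throws RemoteException {
        return currentBid;
    }

    public static void main(String[] args) throws RemoteException {
        Auctioneer auc = new Auctioneer("painting", "50");
        auc.setCurrentBid("75");
        System.out.println(auc.getCurrentBid());  // prints "75"
    }
}
```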
We used AspectJ's introduction mechanism, which can modify the static structure of a program, to make Auctioneer implement the **AuctionInterface** interface, as in the following piece of code:
```java
declare parents:Auctioneer implements AuctionInterface;
```
The **DistributionAspect** aspect on the server side of the RMI system is designed to encapsulate the distribution crosscutting concern; the error-handling aspect is considered later. The distribution aspect code is woven with the Java bytecode without changing the code of the base application on the server side: the advice of the distribution aspect is applied at join points in the execution flow whenever a match occurs with the signature of the aspect's pointcut.
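To see concretely what the advice does, here is a plain-Java, in-process sketch of the same export-and-bind steps (the tangled form the aspect removes). The Hello interface and the port number are illustrative assumptions, and the registry is created locally so the example is self-contained:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiExportDemo {
    // Minimal remote interface; AuctionInterface plays this role in the paper.
    public interface Hello extends Remote {
        String greet() throws RemoteException;
    }

    public static void main(String[] args) throws Exception {
        Hello impl = () -> "hi";

        // Export the object (what UnicastRemoteObject.exportObject(auc) does
        // in the advice) and bind its stub under a name (what Naming.rebind does).
        Hello stub = (Hello) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(51099);
        registry.rebind("hello", stub);

        // A client would look the stub up by name and invoke it remotely.
        Hello client = (Hello) registry.lookup("hello");
        System.out.println(client.greet());

        // Unexport everything so the JVM can terminate.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```

When this code is written by hand, it must appear next to every object that is made remote; moving it into an aspect is precisely the modularization the paper argues for.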
**C. Exception Handling Aspect**
An exception is a behavior of the system indicating that the operation in progress cannot be successfully completed, but from which other parts of the system can try to recover or choose to ignore. The code for exception handling often tangles the main code. In their study, Martin Lippert et al. [18] found that aspect-oriented programming supports implementations that drastically reduce the portion of the code related to exception handling. They found that AspectJ provides better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse and automatic enforcement of contracts in applications. Fernando C. Filho et al. [19] specify the benefits and liabilities of aspects in error handling.
```java
public aspect Handler
{
    // handler() join points only support before advice in AspectJ, so this
    // pointcut selects executions of Auctioneer's methods instead.
    pointcut exceptionHandler():
        execution(* Auctioneer.*(..));

    after() throwing (Exception e): exceptionHandler()
    {
        System.out.println("Exception " + e + " caught while executing " + thisJoinPoint);
    }
}
```
The code snippet above displays an aspect that handles exceptions. The aspect has an **after() throwing** advice that prints a message after an exception is thrown in the methods of the **Auctioneer** class.
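For contrast, without such an aspect each caller repeats the same catch-and-log boilerplate. The helper below is a hypothetical illustration of that tangled style, not code from the paper:

```java
import java.util.concurrent.Callable;

public class TangledHandling {
    // Without a Handler aspect, every call site repeats this pattern.
    static String callAndLog(Callable<String> op) {
        try {
            return op.call();
        } catch (Exception e) {
            System.out.println("Exception " + e + " caught");
            return null;
        }
    }

    public static void main(String[] args) {
        // Normal call succeeds; the failing call is logged at the call site.
        System.out.println(callAndLog(() -> "100"));
        System.out.println(callAndLog(() -> { throw new IllegalStateException("late bid"); }));
    }
}
```

The after() throwing advice moves that logging into one module, so the business methods stay free of try/catch scaffolding.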
D. Eclipse’s Aspect Visualiser
The Visualiser is an extensible plug-in that can be used to visualize anything representable by bars and stripes. It began as the Aspect Visualiser, part of the popular AspectJ Development Tools (AJDT) plug-in, and was originally created to visualize how aspects affect classes in a project. Fig. 3 shows the member view of the distribution and exception-handling aspects with the Auctioneer class in the AspectJ RMI system: bars represent classes and aspects in the server program, and yellow stripes represent advised join points in the execution flow of the program that matched the pointcuts defined in the various aspects.
V. METRICS
The Chidamber and Kemerer (C&K) metrics suite is designed for evaluating object-oriented designs and is detailed in [20]. Rosenberg et al. [21] have proposed guidelines for interpreting the metrics in the C&K suite. Table I below summarizes the objectives for the values of the metrics.
<table>
<thead>
<tr>
<th>METRIC</th>
<th>OBJECTIVE</th>
</tr>
</thead>
<tbody>
<tr>
<td>Weighted Methods per Class (WMC)</td>
<td>Low</td>
</tr>
<tr>
<td>Coupling Between Objects (CBO)</td>
<td>Low</td>
</tr>
<tr>
<td>Response For a Class (RFC)</td>
<td>Low</td>
</tr>
<tr>
<td>Lack of Cohesion of Methods (LCOM)</td>
<td>Low</td>
</tr>
<tr>
<td>Depth of Inheritance Tree (DIT)</td>
<td>trade-off</td>
</tr>
<tr>
<td>Number of Children (NOC)</td>
<td>trade-off</td>
</tr>
</tbody>
</table>
However, as indicated by the last two metrics, there are trade-offs. A high DIT increases maintenance complexity but also indicates increased reuse; similarly, a high NOC increases the testing effort but also indicates greater reuse. Developers must therefore be aware of the relationships between the metrics, since altering one can impact areas such as testing, understandability, maintainability, development effort and reuse, as shown by Zakaria and Hosny [22]. Eclipse's Metrics 1.3.6 plug-in is used to measure the metric values of the RMI system developed in Java and AspectJ, as given in the following section.
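As a concrete illustration of the simplest of these metrics: with unit method weights, WMC reduces to counting a class's declared methods. The reflective sketch below is our own illustration, not the Metrics plug-in's implementation (which may weight methods by complexity):

```java
public class WmcSketch {
    // Unit-weight WMC: each declared method contributes weight 1.
    static int unitWeightWmc(Class<?> c) {
        return c.getDeclaredMethods().length;
    }

    // Small sample class with exactly two declared methods.
    static class Sample {
        void bid() {}
        void close() {}
    }

    public static void main(String[] args) {
        System.out.println(unitWeightWmc(Sample.class));  // prints 2
    }
}
```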
A. Metrics 1.3.6 Plug-in
To start collecting metrics for a project with the Metrics 1.3.6 plug-in, right-click the project and select "Metrics->Enable" from the popup menu (or use the properties page). Eclipse will then calculate metrics every time a compile happens. Once a project is enabled, the easiest way to calculate all its metrics is to do a full rebuild of the project; the metrics view indicates the progress of the calculations as they are performed in the background. When finished, the metrics view looks as shown in Fig. 4 and Fig. 5.
VI. RESULTS
This section details the values gathered for each of the C&K metrics using Metrics 1.3.6 plug-in, when applied to the RMI Auction System using Distribution aspect in Java and AspectJ language.
<table>
<thead>
<tr>
<th>Metrics</th>
<th>Java</th>
<th>AspectJ</th>
</tr>
</thead>
<tbody>
<tr>
<td>WMC</td>
<td>17</td>
<td>12</td>
</tr>
<tr>
<td>DIT</td>
<td>3.5</td>
<td>1.667</td>
</tr>
<tr>
<td>NOC</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>CBO</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>RFC</td>
<td>23</td>
<td>21</td>
</tr>
<tr>
<td>LCOM</td>
<td>0.375</td>
<td>0.25</td>
</tr>
</tbody>
</table>
Separating distribution into an aspect allows changing the communication code without impacting other system code; the same application server can even use different middleware at the same time. The AspectJ RMI implementation also facilitates functional testing and the understandability of the system: functional tests are easier because the system can be tested in its local version, so distribution-code errors do not affect them. There may be some impact on the performance of the distributed system because of the AspectJ language constructs and the context passing from the Auctioneer class to the DistributionAspect aspect; at this cost, the code quality and modularity of the system improve, which benefits the AspectJ development team. Eclipse plug-ins played a major role in the development of the AspectJ auction system, as they facilitated aspect development and the binding process with the rmiregistry. The evaluation of this system was based on metrics used for evaluating OO designs: previous studies of AOP have used OO design metrics to evaluate code design, counting coupling connections between AO modules as equivalent to coupling connections between OO modules, and classes and aspects are often measured together as equivalent modules, as in DIT. Aspect-oriented programming affected various UMRT attributes of the Java RMI system, and this effect is shown in the screenshots of the metrics view and the aspect-visualization facility of Eclipse. Finally, it is evident from the analysis that the results support modularizing the Java RMI application and favor AspectJ over Java.
VII. DISCUSSION
It is evident from this analysis that the results favor AspectJ: AOP is beneficial in terms of the understandability, maintainability, reusability and testability (UMRT) attributes of the application. Table II and Fig. 6 show that the AspectJ RMI application has lower metric values than the Java application, except for the coupling between aspect and core class, which leads to an increase in the UMRT attributes. The coupling value in the AspectJ application shows that coupling between class and aspect is higher, which leads to lower coupling between core classes [22]; low coupling means high UMRT attributes for the AspectJ RMI application. All metric values met our objectives as given in Table I of this paper and improved the UMRT attributes of the RMI application in AspectJ. In addition, the AspectJ system is completely independent of the communication code and middleware, which facilitates system maintenance, as communication code is not tangled with user-interface code.
VIII. CONCLUSIONS
This paper presented a case study illustrating how aspect-oriented software development (AOSD) helps resolve tangled concerns, specifically in an RMI system, compared with object-oriented software development. We demonstrated the use of AspectJ in a distributed environment, compared it with Java, and encapsulated the distribution crosscutting concern in the RMI auction system. The metrics-based comparative analysis shows that the UMRT attributes improved in the AspectJ RMI system compared with its Java implementation. We used the Eclipse platform's Aspect Visualiser and the C&K metrics plug-in to analyze the effect of aspect advice on the program's execution flow. Aspects helped us achieve the high cohesion and low coupling that software engineering requires, enhanced the readability of the system, and made it easier to maintain. AspectJ's constructs let us use the server's context in aspects at runtime; in the future, this will allow us to share the server's knowledge dynamically with clients.
IX. FUTURE WORK
Aspect-oriented programming has tremendous potential for building distributed applications. Other crosscutting concerns such as tracing, profiling, logging, security and exception handling can be modularized using AspectJ in the Eclipse framework. We plan to investigate the finer details of aspect-mining techniques with Eclipse's FINT plug-in, and to address security issues in distributed environments as aspects.
REFERENCES
[22] Zakaria, A.A. and D.H. Hosny “Metrics for aspect-oriented software design”. In 10th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, July 2006.
Project-Team Dahu
Verification in Database
Saclay - Île-de-France
Theme : Knowledge and Data Representation and Management
Table of contents
1. Team
2. Overall Objectives
3. Scientific Foundations
4. Application Domains
5. New Results
5.1. XML specification and verification
5.2. Automata and logics for data words and data trees
5.3. Automata theory
6. Other Grants and Activities
6.1. National collaborations
6.2. International collaborations
6.2.1. Cooperation within Europe
6.2.2. Cooperation with Tunisia
6.2.3. Cooperation with North America
7. Dissemination
7.1. Thesis
7.2. Participation in conferences organisation
7.3. Participation to symposia, seminars, invitations
7.4. Scientific Animations
7.5. Presentations to a larger public
7.6. Teaching
7.7. Thesis jury
8. Bibliography
Dahu is a joint project with LSV and ENS de Cachan. The team was created on January 1st, 2008.
1. Team
Research Scientist
Serge Abiteboul [Research Director (DR), HdR]
Stéphane Demri [Research Director (DR) CNRS, HdR]
Florent Jacquemard [Research assistant (CR), Inria]
Luc Segoufin [Team leader, Research Director (DR), Inria, HdR]
Faculty Member
Cristina Sirangelo [ENS Cachan]
PhD Student
Diego Figueira [Cordi]
Thomas Place [Allocation couplée]
Camille Vacher [Allocation couplée]
Post-Doctoral Fellow
Balder ten Cate [Webdam, from April to September]
Yannis Katsis [Webdam, since October]
Visiting Scientist
Anuj Dawar [In Dahu from January to August. Professor, Cambridge, UK]
Nicole Schweikardt [In Dahu in April and May. Professor, Frankfurt, Germany]
Victor Vianu [In Dahu from mid January to mid September. Professor, UC San Diego, USA]
Amélie Gheerbrant [In Dahu from May to September. PhD student at University of Amsterdam]
Gaëlle Fontaine [In Dahu in July and September. PhD student at University of Amsterdam]
Bruno Marnette [In Dahu in July and August. PhD student at Oxford University]
Szymon Toruńczyk [In Dahu since October. PhD student at Warsaw University]
Administrative Assistant
Marie Dominguès [Secretary (SAR) Inria, until September]
Isabelle Biercewicz [Secretary (SAR) Inria, since September]
2. Overall Objectives
2.1. Overall Objectives
For more information see http://www.lsv.ens-cachan.fr/axes/DAHU/dahu.php.
The need to access and exchange data on the Web has led to database management systems (DBMS) that are increasingly distributed and autonomous. Data extraction and querying on the Web is harder than in classical DBMS, because such data is heterogeneous, redundant, inconsistent and subject to frequent modifications. DBMS thus need to be able to detect errors, to analyze them and to correct them. Moreover, increasingly complex Web applications and services rely on DBMS, and their reliability is crucial. This creates a need for tools for specifying DBMS in a high-level manner that is easier to understand, while also facilitating verification of critical properties.
The study of such specification and verification techniques is the main goal of Dahu.
3. Scientific Foundations
3.1. Scientific Foundations
Dahu has strong connections with the Gemo project and the Cassis project.
Dahu aims at developing mechanisms for high-level specifications of systems built around DBMS, that are easy to understand while also facilitating verification of critical properties. This requires developing tools that are suitable for reasoning about systems that manipulate data. Some tools for specifying and reasoning about data have already been studied independently by the database community and by the verification community, with various motivations. However, this work is still in its infancy and needs to be further developed and unified.
Most current proposals for reasoning about DBMS over XML documents are based on tree automata, taking advantage of the tree structure of XML documents. For this reason, the Dahu team is studying a variety of tree automata. This ranges from restrictions of “classical” tree automata in order to understand their expressive power, to extensions of tree automata in order to understand how to incorporate the manipulation of data.
Moreover, Dahu is also interested in logical frameworks that explicitly refer to data. Such logical frameworks can be used as high level declarative languages for specifying integrity constraints, format change during data exchange, web service functionalities and so on. Moreover, the same logical frameworks can be used to express the critical properties we wish to verify.
In order to achieve its goals, Dahu brings together world-class expertise in both databases and verification.
4. Application Domains
4.1. Application Domains
Databases are pervasive across many application fields. Indeed, most human activities today require some form of data management. In particular, all applications involving the processing of large amounts of data require the use of a database. Increasingly complex Web applications and services also rely on DBMS, and their correctness and robustness is crucial.
We believe that the automated solutions that Dahu aims to develop for verifying such systems will be useful in this context.
5. New Results
5.1. XML specification and verification
Participants: Serge Abiteboul, Diego Figueira, Luc Segoufin, Cristina Sirangelo.
In general, Dahu aims at making systems with data safer and more reliable. This means providing suitable models together with a toolbox for helping in the design and implementation of such systems.
This year we have tackled several specific scenarios: models for describing and verifying incomplete information, models for exchanging data, models for distributed XML repositories, decision procedures around the query language XPath, and static analysis of dynamic XML systems.
XPath is arguably the most widely used XML query language: it is implemented in XSLT and XQuery and serves as a constituent part of several specification and update languages. Hence, in order to perform static analysis on a system manipulating XML data, it is important to master the static analysis of XPath. Most of the important static analysis problems reduce to satisfiability checking: does a given query return a non-empty answer on some document? In general, the satisfiability of XPath is undecidable, but important fragments can be shown to be decidable. In [26] we have shown that when the navigational axes of XPath are restricted to the child and descendant relations, satisfiability can be decided in ExpTime. In [27] we have shown that for many other natural fragments of XPath, satisfiability is, even when decidable, not primitive recursive.
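The child/descendant fragment discussed above can be experimented with directly. The sketch below is an illustration only, not the decision procedure of [26]: it uses Python's standard `xml.etree.ElementTree`, whose path language supports exactly the child (`/`) and descendant (`//`) steps, and shows how a positive pattern is satisfied by the obvious witness tree built from it.

```python
import xml.etree.ElementTree as ET

# Build a small witness document for the positive pattern a/b//c:
# an 'a' root with a 'b' child that has a 'c' somewhere below it.
a = ET.Element("a")
b = ET.SubElement(a, "b")
d = ET.SubElement(b, "d")
ET.SubElement(d, "c")

# ElementTree's findall() supports the child step ('/') and the
# descendant step ('//') of the XPath fragment discussed above.
assert a.findall("b") != []        # child step: a/b matches
assert a.findall("b//c") != []     # descendant step below b: a/b//c matches
assert a.findall("c") == []        # 'c' is not a direct child of 'a'
```

Negation and schema constraints are what make satisfiability hard; for purely positive patterns like this one, a canonical witness always exists.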
Active XML is a high-level specification language tailored to data-intensive, distributed, dynamic Web services. Active XML is based on XML documents with embedded function calls. The state of a document evolves depending on the result of internal function calls (local computations) or external ones (interactions with users or other services). Function calls return documents that may be active, so may activate new sub-tasks. In [13], [12], we studied the verification of temporal properties of runs of Active XML systems, specified in a tree-pattern based temporal logic, Tree-LTL, that allows expressing a rich class of semantic properties of the application. The main results establish the boundary of decidability and the complexity of automatic verification of Tree-LTL properties.
Towards a data-centric workflow approach, we introduced in [19] an artifact model to capture data and workflow management activities in distributed settings. As above, the model is built on Active XML. We argue that the model captures the essential features of service calls and the essential features of business artifacts as described informally by Nigam and Caswell in 2003. We also briefly consider the monitoring of distributed systems and the verification of temporal properties for them.
A distributed XML document is an XML document that spans several machines or Web repositories. We assume that a distribution design of the document tree is given, providing an XML tree some of whose leaves are "docking points," to which XML subtrees can be attached. These subtrees may be provided and controlled by peers at remote locations, or may correspond to the result of function calls, e.g., Web services. If a global type τ, e.g. a DTD, is specified for a distributed document T, it would be most desirable to be able to break this type into a collection of local types, called a local typing, such that the document satisfies τ if and only if each peer (or function) satisfies its local type. In [21], we lay out the fundamentals of a theory of local typing and provide formal definitions for several variants of locality.
Data exchange between different independent applications has been a central database problem since the early development of database systems, and now sees renewed interest with XML – originally designed as an exchange language. The general problem is how to transfer data from a source database to a target database, structured according to different schemas, given a mapping relation between the two schemas. In the literature, two main semantics of data exchange existed: one based on the Open World Assumption (OWA), and one based on the Closed World Assumption (CWA) on target instances. We have studied the effect of introducing an explicit CWA/OWA annotation on target attributes of schema mappings, and we have formalized a corresponding mixed CWA/OWA semantics of data exchange. We have studied the complexity of answering queries over the set of all possible target solutions, establishing a complexity characterization based on the number of open attributes in schema mappings. We have also studied one of the main schema mapping operations, schema composition, for annotated schema mappings, and shown that large classes of CWA schema mappings are closed under composition. These results are surveyed in [31].
Key XML applications on the Web, such as data integration and exchange, make the presence of incomplete information unavoidable in XML data, due to the incompatibility of schemas and constraints among different sites. In [23] we have developed a general model of incomplete information in XML which, in analogy with its relational counterpart, is centered on the notion of null to represent missing information. However, the structure of XML documents is much more involved than that of relational databases, and missing information may occur not only in attribute values but also in the tree structure of the documents. We have considered several models of incomplete information in XML and investigated how different features of these models affect the complexity of some relevant computational problems, among them checking the consistency of an incomplete representation and answering queries over it. As a result we have traced a boundary between tractability and intractability for these problems. In particular, this study allowed us to identify a robust class of incomplete documents and queries for which query answering is tractable.
Incomplete information can also be represented using probabilities. In addition to the ordinary nodes of XML documents, p-documents have distributional nodes that specify the possible worlds and their probability distribution. Particular families of p-documents are determined by the types of distributional nodes that can be used as well as by the structural constraints on the placement of those nodes in a p-document. Some of the resulting families provide natural extensions and combinations of previously studied probabilistic XML models. The expressive power of families of p-documents has been investigated in [15]. The evaluation of aggregate functions such as count, sum and avg for probabilistic XML is the topic of [20].
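To make the notion of possible worlds concrete, here is a minimal sketch using our own toy encoding, not the formalism of [15]: a hypothetical `mux` distributional node chooses exactly one of its children with the stated probability, and independent choices multiply.

```python
from itertools import product

# Toy p-document: ordinary nodes are ("node", label, children); a
# "mux" distributional node picks exactly one alternative with the
# given probability.  This encoding is illustrative only.
doc = ("node", "r", [
    ("mux", [(0.7, ("node", "a", [])),
             (0.3, ("node", "b", []))]),
    ("mux", [(0.5, ("node", "c", [])),
             (0.5, ("node", "d", []))]),
])

def worlds(t):
    """Enumerate (probability, ordinary-tree) pairs for a p-document."""
    if t[0] == "node":
        _, label, children = t
        out = []
        # Children's worlds combine independently.
        for combo in product(*(worlds(c) for c in children)):
            p, kids = 1.0, []
            for (pc, tc) in combo:
                p *= pc
                kids.append(tc)
            out.append((p, ("node", label, kids)))
        return out
    else:  # "mux": exactly one alternative is chosen
        _, alts = t
        return [(p * pw, tw) for (p, alt) in alts for (pw, tw) in worlds(alt)]

ws = worlds(doc)
assert len(ws) == 4                             # 2 x 2 independent choices
assert abs(sum(p for p, _ in ws) - 1.0) < 1e-9  # probabilities sum to 1
```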
5.2. Automata and logics for data words and data trees
Participants: Stéphane Demri, Diego Figueira, Luc Segoufin.
Dahu aims at providing tools for specifying and verifying systems with data. This means finding a suitable logical framework for specifying such systems. A logical framework is suitable if it is expressive enough for modeling the operations of interest. Of course, for the logical framework to be useful, it must come with techniques and tools for reasoning about it, in particular it should be decidable. This can be achieved by compiling the model into some form of decidable automata manipulating data. In the presence of data, the design of appropriate classes of logic and automata with interesting complexities is an ongoing research task.
Most of our new results in this direction concern data words and data trees. These are words and trees where each position carries a data value together with a classical label. Data words and data trees can model many systems with data, with a focus on the flow of a single variable; data trees can also model XML documents with data.
We have studied several extensions of the classical model of logic and automata with features that could be used for manipulating data. This is done either by using registers or memory explicitly in the model or by restricting the transitions of the automata with constraints that can involve data comparisons. Several models have been considered.
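As a small illustration of the register-based models mentioned above (a sketch of the general idea, not any specific automaton from the cited papers): a one-register automaton may nondeterministically store the current data value and accepts if it later reads the same value again, i.e. it recognizes the data words in which some value repeats.

```python
# A data word is a sequence of (label, data-value) pairs.
def some_value_repeats(word):
    # Simulate all nondeterministic branches at once: the set of
    # reachable register contents is exactly the set of values
    # stored so far.
    stored = set()
    for _, value in word:
        if value in stored:
            return True      # register content re-read: accept
        stored.add(value)    # branch that stores this value now
    return False

assert some_value_repeats([("a", 1), ("b", 2), ("a", 1)])
assert not some_value_repeats([("a", 1), ("b", 2), ("c", 3)])
```

The subset-of-register-contents trick works here because the automaton has a single register and only tests equality; richer models quickly lose such easy simulations, which is the source of the complexity results discussed below.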
As query languages such as XPath and XML schema are closely related to the two variable fragment of first-order logic, we have studied this fragment over data trees. In [16] it is shown that satisfiability for two-variable first-order logic is decidable if the tree structure can be accessed only through the child and the next sibling predicates and the access to data values is restricted to equality tests. From this main result, decidability of satisfiability and containment for a data-aware fragment of XPath and of the implication problem for unary key and inclusion constraints is concluded.
As another line of investigation, we studied a bottom-up model of computation that can test data equality of distant nodes on different branches of the tree [26]. This model captures XPath with the downward and child axes, and its expressive power is incomparable with that of the previously mentioned approach. The model is decidable in ExpTime.
We have analyzed the computational complexity of the covering and boundedness problems for branching vector addition systems [25]. Branching vector addition systems (BVAS) form a new computational model that is used for instance in computational linguistics and for the verification of cryptographical protocols. This model has tight relationships with data logics interpreted over data trees. Recently, Verma and Goubault-Larrecq (EPI SECSI, LSV) have shown that the covering and boundedness problems for BVAS are decidable. In this work, we have extended and refined the standard proofs for vector addition systems (equivalent to Petri nets) by Rackoff (TCS, 1978) and Lipton (TR, 1976) in order to establish that the covering and boundedness problems for BVAS are 2EXPTIME-complete.
In the article [17], we have studied decidability and complexity issues for fragments of LTL with Presburger constraints obtained by restricting the syntactic resources of the formulae while preserving the strength of the logical operators. It is shown that model-checking and satisfiability problems for the fragments of LTL with difference constraints restricted to two variables and distance one and to one variable and distance two are highly undecidable, enlarging significantly the class of known undecidable fragments. On the positive side, we prove that the fragment restricted to one variable and to distance one augmented with propositional variables is PSPACE-complete.
In [34] we illustrate two aspects of automata theory related to the linear-time temporal logic LTL used for the verification of computer systems. First, a translation from LTL formulae to Büchi automata is presented, with the aim of designing an elementary translation that is reasonably efficient and produces small automata. Second, we recall how temporal operators can be defined from regular languages, and we show why adding even a single operator definable by a context-free language can lead to undecidability.
5.3. Automata theory
Participants: Florent Jacquemard, Thomas Place, Luc Segoufin, Camille Vacher.
The links between models for XML and regular tree languages have been advocated in many places. Tree automata seem to play for semi-structured data and XML the role that relational algebra plays for relational databases. As XML is central to our research, we also study tree automata and regular tree languages.
A first line of research concerns the expressive power of various subclasses of regular tree languages. It is usually admitted that a fragment is completely understood, in terms of expressive power, when one has a decidable characterization of it: an algorithm that, given a regular tree language, presented say as a tree automaton, tests whether it belongs to the class being investigated. This question is an active research topic that turns out to be quite challenging. A regular tree language \( L \) is said to be locally testable if membership of a tree in \( L \) depends only on the presence or absence of some neighbourhoods in the tree. In [32] we have shown that it is decidable whether a regular tree language is locally testable.
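As a rough illustration of what "depends only on neighbourhoods" means (a simplification, not the exact notion used in [32]): collect, for every node of a tree, its label together with the sorted labels of its children, and compare trees on these radius-1 neighbourhoods only.

```python
# Trees are (label, [children]).  A locally testable property cannot
# distinguish two trees with the same set of small neighbourhoods.
def neighbourhoods(tree):
    label, children = tree
    here = {(label, tuple(sorted(c[0] for c in children)))}
    for c in children:
        here |= neighbourhoods(c)
    return here

t1 = ("f", [("a", []), ("b", [])])
t2 = ("f", [("b", []), ("a", [])])   # same neighbourhoods as t1
t3 = ("f", [("a", []), ("a", [])])   # a different neighbourhood at the root

assert neighbourhoods(t1) == neighbourhoods(t2)
assert neighbourhoods(t1) != neighbourhoods(t3)
```

The decidability result cited above asks the converse, much harder question: given an automaton, is its whole language determined by such local data?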
We have also considered superclasses of regular tree languages, described by tree automata with features which strictly extend standard tree automata. This is the case of Rigid Tree Automata (RTA), an extension of standard bottom-up tree automata with distinguished states called rigid. Rigid states restrict the computations of an RTA on trees: an RTA tests for equality all subtrees reaching the same rigid state. In [30], we have studied the expressiveness of these automata and properties like determinism, pumping lemmas, Boolean closure, and several decision problems. Our main result is the decidability of whether a given tree belongs to the rewrite closure of an RTA language under a restricted family of term rewriting systems, whereas this closure is not an RTA language in general.
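The rigidity constraint is what lets an RTA recognise a language such as { f(t, t) }, which no ordinary tree automaton can: assigning the same rigid state to both children of f forces the two subtrees to coincide. The sketch below hard-codes the semantics of that one illustrative automaton rather than implementing a general RTA engine.

```python
# Trees are (label, [children]).  Conceptual RTA: rules send any tree
# to a rigid state qr, and f(qr, qr) is accepting; rigidity means a
# run exists iff both subtrees reaching qr are equal.
def accepts_f_t_t(tree):
    label, children = tree
    return label == "f" and len(children) == 2 and children[0] == children[1]

t = ("g", [("a", [])])
assert accepts_f_t_t(("f", [t, t]))            # f(t, t) is accepted
assert not accepts_f_t_t(("f", [t, ("a", [])]))  # subtrees differ: rejected
```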
We have obtained some other results concerning the transformation of tree automata languages under various kinds of rewriting systems. In [28], we show that the transformation of a tree automata language obtained by applying shallow rewrite rules following an innermost strategy (a strategy corresponding to the call-by-value evaluation of programming languages) can be recognized by a tree automaton with equality and disequality constraints between brothers. This latter class of automata is another strict extension of tree automata, with the ability to perform some isomorphism tests between subtrees during computations. We have also considered the property of unique normalization (UN), which states that, starting from any tree and applying arbitrarily the transformations defined by a given set of rewrite rules, one can reach at most one normal form (a tree which cannot be transformed further). Using tree automata techniques, we have studied in [29] the decidability of this property for classes of rewrite rules defined by syntactic restrictions such as linearity (variables occur at most once in each side of a rule), flatness (the sides of a rule have depth at most one) and shallowness (variables occur at depth at most one in a rule).
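On small ground examples, unique normalization can simply be checked by exhaustive search. The sketch below is an illustration for terminating ground rules only, not the decision procedure of [29]: it enumerates every normal form reachable from a start term.

```python
# Terms are (label, children) with children as a tuple; rules are
# ground (lhs, rhs) pairs.  Assumes the rewrite relation terminates.
def rewrites(t, rules):
    label, children = t
    for lhs, rhs in rules:
        if t == lhs:
            yield rhs               # rewrite at the root
    for i, c in enumerate(children):
        for c2 in rewrites(c, rules):
            yield (label, children[:i] + (c2,) + children[i + 1:])

def normal_forms(t, rules):
    succs = list(rewrites(t, rules))
    if not succs:
        return {t}                  # no rule applies: t is a normal form
    nfs = set()
    for s in succs:
        nfs |= normal_forms(s, rules)
    return nfs

a, b, c = ("a", ()), ("b", ()), ("c", ())
fa = ("f", (a,))
# f(a) -> b and f(a) -> c yield two distinct normal forms: not UN.
assert normal_forms(fa, [(fa, b), (fa, c)]) == {b, c}
# With only f(a) -> b the start term normalises uniquely.
assert normal_forms(fa, [(fa, b)]) == {b}
```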
6. Other Grants and Activities
6.1. National collaborations
Dahu is currently participating in two ANR projects:
**ENUM** is a research project supported by the ANR blanche on algorithmic and complexity problems raised by enumerating the solutions of a query. The goal is to provide formal methods to understand and compare the complexity of enumeration problems. The partners are University of Paris-7 (with Arnaud Durand), the project-team Mostrare at INRIA-Lille (with Joachim Niehren), the University of Caen (with Etienne Grandjean) and the University of Marseille (with Nadia Creignou). Dahu is involved in this ANR project as part of the Paris-7 node.
**Averiss** is a research project supported by the ANR SETIN (ANR-06-SETI-001-02, 2007-2009) on the development of new techniques for automatic software verification taking into account complex features of modern programming languages, including infinite data domains and procedure calls. The partners are LIAFA, University of Paris-7 (with Ahmed Bouajjani), LABRI, Bordeaux (with Igor Walukiewicz) and LSV, ENS Cachan (with Philippe Schnoebelen). Dahu is involved in this project as part of the LSV node.
6.2. International collaborations
6.2.1. Cooperation within Europe
Dahu is involved in two major grants funded by the EU:
**FoX** is a FET-Open project funded within the FP7 framework. The objective of FoX is to study the fundamental issues that must be addressed in order to make data management over the internet more efficient and more reliable. The partners of Dahu in FoX are Thomas Schwentick at the University of Dortmund, Mikołaj Bojańczyk at the University of Warsaw, Leonid Libkin at the University of Edinburgh, Georg Gottlob at the University of Oxford, Frank Neven at the University of Hasselt and Maarten Marx at the University of Amsterdam. The project started on May 1st and will last three years. Luc Segoufin is the coordinator of this project.
**Webdam** is an ERC “Advanced Investigators Grant” obtained by Serge Abiteboul. It started in December 2008. The goal is to develop a formal model for Web data management. This model will open new horizons for the development of the Web in a well-principled way, enhancing its functionality, performance, and reliability. Specifically, the goal is to develop a universally accepted formal framework for describing complex and flexible interacting Web applications featuring notably data exchange, sharing, integration, querying and updating. We also propose to develop formal foundations that will enable peers to concurrently reason about global data management activities, cooperate in solving specific tasks and support services with a desired quality of service.
The Webdam project is shared between the Dahu and Gemo project-teams, both from INRIA Saclay.
6.2.2. Cooperation with Tunisia
Dahu has been the coordinator (on the French side) of an INRIA-DGRSRT project (with Tunisian universities) on “automated verification of the conformance of firewall configurations to access-control policies” since January 2008. The other partners of the project are the CASSIS team at INRIA Nancy-Grand-Est and the security team at Sup’Com Tunis. This year, Dahu hosted in July and August the internship of Nihel Ben Youssef (PhD student at Sup’Com Tunis). This internship resulted in an implementation and in the publication of [24].
6.2.3. Cooperation with North America
Close links also exist with UC San Diego and the database group of Victor Vianu.
7. Dissemination
7.1. Thesis
7.2. Participation in conference organisation
Luc Segoufin is PC chair of the Intl. Conf. on Database Theory (ICDT’10), to be held in Lausanne in 2010. Serge Abiteboul was general PC chair of the Very Large Data Bases (VLDB) 2009 conference.
Several members of the project have participated in program committees:
- C. Sirangelo: 12th International Conference on Extending Database Technology (EDBT), March 2009.
In May 2009, Luc Segoufin organized an international workshop in Cachan on the foundations of semistructured documents.
In June 2009, Luc Segoufin organized the annual workshop of the working group “Complexité et Modèle Finis” of the GDR-IM in Cachan.
In July 2009, Balder ten Cate organized in Cachan a Workshop on Modal Logic.
7.3. Participation in symposia, seminars, invitations
Besides the presentations of our papers accepted at international conferences, the members of Dahu gave the following invited talks at international conferences and workshops.
Florent Jacquemard gave an invited presentation on “Rewrite based Verification of XML Updates” during the second Mini-Workshop on Rewriting Techniques, Nagoya, 2009.
Serge Abiteboul gave an invited presentation at Time’09 [19].
Balder ten Cate was an invited speaker at the Workshop on Automata and Algorithmic Logic, Stuttgart, 2009.
7.4. Scientific Animations
- Stéphane Demri is a member of the steering committee of the Tableaux conference (Intl. Conf. on Automated Reasoning with Analytic Tableaux and Related Methods).
- Stéphane Demri is a member (one of five) of the publication board of the journal “Technique et Science Informatiques”.
- Florent Jacquemard is a member of the board (general secretary) of the French Association for Information and Communication Systems (ASTI).
- Balder ten Cate is a member of the board (secretary) of the Dutch Organization for Logic and Philosophy of the Exact Sciences (VvL).
7.5. Presentations to a larger public
S. Abiteboul participated in the Téléphone Sonne programme on France Inter. He published interviews or articles in Science et Vie, Le Nouvel Economiste and L’informaticien. He was a panelist at the colloquium “Une histoire de DIM, Domaines d’Intérêt Majeur” organized by the Île-de-France region. He gave presentations at the INRIA colloquium “Web et Industrie” in Lille, at “Demain, la République du Web : une utopie ?” at La Cantine, and at “Perspectives IT pour le Codir” of the OLG Clubs, at La Villette.
7.6. Teaching
As a Maître de conférences, Cristina Sirangelo teaches in the department of computer science of ENS de Cachan. Thomas Place and Camille Vacher also teach in this department as part of their allocation couplée.
In this department Stéphane Demri has been in charge of the second-year students for 2008/2009 and is also “délégué aux thèses” in computer science for the Ecole Doctorale Sciences Pratiques (EDSP), ENS Cachan.
In the Master Parisien de Recherche en Informatique (MPRI), Florent Jacquemard teaches a first-year (M1) course on Tree Automata Techniques and Applications, and Luc Segoufin teaches a first-year (M1) course on advanced complexity.
Serge Abiteboul is teaching the database course, joint between ENS Cachan and ENS Ulm. He teaches distributed data management in the CS Master at Orsay University.
7.7. Thesis jury
Luc Segoufin was a reviewer and member of the jury for the theses of Guillaume Bagan, Université de Caen, and David Duris, Université Paris 7.
Stéphane Demri was a reviewer for the thesis of Claire David (Université Paris 7) and Sergio Mera (Universidad de Buenos Aires et Université Henri Poincaré, Nancy 1).
Serge Abiteboul was a reviewer for the habilitation of Véronique Cortier (Nancy).
8. Bibliography
Major publications by the team in recent years
Year Publications
Doctoral Dissertations and Habilitation Theses
Articles in International Peer-Reviewed Journals
Invited Conferences
International Peer-Reviewed Conferences/Proceedings
Scientific Books (or Scientific Book chapters)
Paper No.: 301
Paper Title: STATISTICAL METHODS
1. Introduction
2. Presentation of statistical data
2.1. Types of variables
2.2. Univariate, bivariate and multivariate data
2.3. Univariate and bivariate frequency distributions
3. Measure of central tendency-mean, median and mode
4. Measures of dispersion (absolute as well as relative)
4.1 Mean deviation
4.2 Standard deviation
4.3 Coefficient of mean deviation and coefficient of variation
5. Correlation
5.1 Introduction
5.2 Types of correlation and scatter diagrams
5.3 Rank correlation coefficient
6. Regression
6.1 Concept of dependent and independent variables
6.2 Introduction to linear regression
6.3 Line of regression (with one independent variable)
Methods should be explained conceptually and corresponding examples should be given. No proof should be given for any of the methods.
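The methods listed in sections 3–6 can be illustrated with one small worked example. The sketch below computes them from first principles in Python; the data set is invented for illustration.

```python
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Standard deviation (population form) and coefficient of variation.
sd_x = (sum((x - mean_x) ** 2 for x in xs) / n) ** 0.5
cv_x = sd_x / mean_x * 100

# Pearson correlation coefficient.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
syy = sum((y - mean_y) ** 2 for y in ys)
r = sxy / (sxx * syy) ** 0.5

# Line of regression of y on x: y = a + b*x.
b = sxy / sxx
a = mean_y - b * mean_x

assert mean_x == 3 and mean_y == 4
assert abs(b - 0.6) < 1e-9 and abs(a - 2.2) < 1e-9
assert 0 < r < 1   # positive linear correlation
```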
Reference Books :
1. Introduction to Mathematical Statistics – Hogg RV & Craig AL – Tata McGraw Hill
Paper No.: 302
Paper Title: SOFTWARE ENGINEERING - I
1. Introduction
1.1. Software, Software characteristics, Applications, Myths.
1.2. Software Engineering , Generic View
1.3. Software Process models: Waterfall, Prototyping
2. Requirement analysis
2.1. Introduction
2.2. Current Application Analysis
2.3. Requirement gathering techniques & Fact Finding, Recording Outcome
2.4. DFD Data Dictionary and Process Specification
2.5. Importance of Requirement Specifications
3. System Design
3.1. Design model
3.2. Principles and Concepts
3.3. Functional Independence
3.4. Module & Sequence
3.5. Effectiveness of Modular Design
3.6. Mapping of Requirements into Design
3.7. Design Documentation
Note: Case studies may be carried out at appropriate stages of the course.
Reference Books:
5. Software Engineering A Concise Study – Kelkar - PHI
7. Satzinger, Jackson, Burd: Systems Analysis & Design in a Changing World
Paper No.: 303
Paper Title: RELATIONAL DATABASE MANAGEMENT SYSTEM
1. Codd’s Laws for Full Functional Relational Database Management System
2. Introduction to Oracle Tools
2.1. Oracle DBA
2.2. SQL Plus
3. Interactive SQL
3.1. Oracle Data Types
3.2. Oracle DDL & DML
(Create Table, Alter Table, Update with multiple columns, updating to null values, Drop Table, constraints like primary key, foreign key, multicolumn foreign key, foreign key restrictions etc.)
3.3. Operators
3.4. Oracle Functions
3.5. Range Searching
3.6. Pattern Matching
3.7. Manipulating Dates
3.8. Joins (joining tables through referential integrity, equi-joins, joins of two tables, joining a table to itself)
3.9. Sub Queries (DISTINCT with sub queries, predicates with sub queries, Aggregate function in sub queries, HAVING clause, EXISTS operator)
3.10. Using Union, Intersect and Minus Clause
3.11. Indexes (Create index, Drop Index, Types of Index)
3.12. Views (Updating Views, Group Views, Views and Joins, Views and Sub Queries, Changing Values through a View)
3.13. Sequences
4. PL/SQL
4.1. PL/SQL Block Structure
4.1.1. Using Variables, Constants and Data Type
4.1.2. User Defined Record
4.1.3. Assigning Values to Variables
4.1.4. Control Statements (IF…THEN statement, Loop, FOR...Loop, While Loop)
4.2. Oracle Transactions
4.3. Concurrency Control in Oracle
4.4. Cursor (Explicit, Implicit)
4.5. Error handling in PL/SQL
4.5.1. Exception
4.5.2. User Defined Exception
4.5.3. Unhandled Exception
4.5.4. Pragma Exception
5. Stored Procedures & Stored Functions
6. Database Triggers
Reference Books:
2. Oracle 8 PL/SQL Programming – Oracle Press
S. Y. B. C. A. Semester 3
Effective From: June-2012
Paper No.: 304
Paper Title: DATA STRUCTURES
1. Pointers
1.1. Pointers and memory storage
1.2. Operation on pointers
1.3. Arrays of pointers
1.4. Passing pointers to functions
2. Primitive Data Structures
3. Non-Primitive data structures
3.1. Arrays - its storage structures and operations
3.2. Stacks
3.2.1. Stack Operations
3.2.2. Applications of stack in Recursion and Polish Notations
3.3. Queues
3.3.1. Types of queues: Simple, Circular, Double-ended, Priority
3.3.2. Operations on queue
3.3.3. Application of queue
3.4. Linked list
3.4.1. Types of Linked Lists: Singly, Doubly, Circular
3.4.2. Operations on linked list
3.4.3. Applications of Linked Lists (Polynomial Manipulation)
4. Trees
4.1. Concept & Definitions
4.2. Types of Binary Tree
4.3. Operations on Binary Trees: Tree Traversals, Insertion & Deletion
4.4. Linked and Threaded Storage Representation of Binary Trees
4.5. Application of trees (Manipulation of Arithmetic Expression)
5. Sorting & Searching Techniques
5.1. Sorting
5.1.1. Insertion Sort
5.1.2. Selection Sort
5.1.3. Quick Sort
5.1.4. 2-way Merge Sort
5.1.5. Bubble Sort
5.2. Searching: Sequential, Binary
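The sorting and searching techniques of topic 5 can be illustrated with a short self-contained sketch. The course is taught with C, but for uniformity with the other examples in this document the sketch below is in Java; the class name `SortSearchDemo` is illustrative only.

```java
import java.util.Arrays;

public class SortSearchDemo {
    // 5.1.5 Bubble Sort: repeatedly swap adjacent out-of-order elements.
    static void bubbleSort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                }
            }
        }
    }

    // 5.2 Binary Search on a sorted array: halve the search range each step.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
        System.out.println(binarySearch(data, 4)); // 2
    }
}
```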
Reference Books:
1. An Introduction to Data Structures with Applications – Tremblay – McGraw Hill
3. Data Structures – A Programming Approach with C, Dharmender Singh Kushwaha, Arun Kumar Misra - PHI
5. The Art of Computer Programming, Vols. 1-2, Knuth D – Addison Wesley
6. Schaum’s Outline of Data Structures with C++, John R. Hubbard – TMH
7. Expert Data Structures with C – R. B. Patel, Khanna Publication
Paper No.: 305
Paper Title: OBJECT ORIENTED PROGRAMMING
1. Principles of Object Oriented Programming
1.1. Procedure Oriented Programming vs Object Oriented Programming
1.2. Basic concepts of object oriented programming (Encapsulation, Polymorphism etc)
1.3. Benefits of object oriented programming
1.4. Structure & Classes
1.5. Encapsulation and Data Hiding
1.6. Constructors
1.7. Friend Function
1.8. Inline Function
1.9. Dynamic Object Creation & destruction
1.10. Destructor
2. Object Oriented Properties
2.1. Introduction to Object Oriented Properties
2.2. Abstraction
2.3. Polymorphism
2.3.1. Operator Overloading
2.3.2. Function Overloading and Type Conversion
2.4. Inheritance
2.4.1. Type of Inheritance
2.4.2. Constructors and Destructor Calls during Inheritance
2.5. Dynamic Polymorphism
2.5.1. Overriding
2.5.2. Virtual Function
2.5.3. Abstract Class
3. Data Files
3.1. Manipulators (In-Built, User Defined)
3.2. File Modes
3.3. File Functions
3.4. Error Handling During File Operation
4. Exception Handling
4.1. Introduction to Exception
4.2. Try ... Catch
Reference Books:
1. Let us C++ by Yaswant Kanitkar TMH Publication
2. Programming with C++ by E Balaguruswamy BPB Publication
3. Herbert Schildt: The Complete Reference C++ TMH
4. Stroustrup : The C++ Programming Language – Addison Wesley
5. Robert Lofore OOP in Turbo C++ - Galgotia Publication
6. Lippman : C++ Primer – Addison Wesley
Paper No.: 306
Paper Title: Practical
All Students have to carry out practical work in Subjects – 303, 304 & 305
<table>
<thead>
<tr>
<th>No</th>
<th>Course Type</th>
<th>Subject</th>
<th>Credit</th>
<th>Hrs.</th>
<th>Internal Marks</th>
<th>External Marks</th>
<th>External Exam Duration</th>
<th>Total Marks</th>
</tr>
</thead>
<tbody>
<tr>
<td>301</td>
<td>Foundation compulsory</td>
<td>Statistical Methods</td>
<td>2</td>
<td>2</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>302</td>
<td>CORE Elective</td>
<td>Software Engineering-I</td>
<td>3</td>
<td>3</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>303</td>
<td>CORE</td>
<td>RDBMS</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>304</td>
<td>CORE</td>
<td>Data Structures</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>305</td>
<td>CORE</td>
<td>Object Oriented Programming</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>306</td>
<td>CORE</td>
<td>Practical</td>
<td>6</td>
<td>12</td>
<td>60</td>
<td>140</td>
<td>5 Hrs</td>
<td>200</td>
</tr>
<tr>
<td></td>
<td>Foundation Elective</td>
<td>To be Selected from the list (eg NCC/NSS/Saptdhara)</td>
<td>2</td>
<td>2</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td><strong>TOTAL</strong></td>
<td></td>
<td>25</td>
<td>31</td>
<td>210</td>
<td>490</td>
<td></td>
<td>700</td>
</tr>
</tbody>
</table>
Paper No.: 401
Paper Title: INFORMATION SYSTEMS
1. **Introduction**
1.1. Data & Information
1.2. Information need and benefits
1.3. Input, Processing, Output and feedback
2. **Concepts of Systems**
2.1. Definition of system in an organization
2.2. Types of systems
2.2.1. Deterministic probabilistic systems
2.2.2. Open and close systems
3. **Introduction to various Information Systems**
3.1. Business Information Systems
3.1.1. ERP
3.2. Management Information Systems
3.2.1. Characteristics of MIS
3.2.2. Development process of MIS
3.3. Decision support systems
4. **Transaction Processing Systems**
4.1. Overview of Transaction Processing System
4.2. Transaction Processing methods & objectives
4.3. Transaction Processing Activities
4.3.1. Data Collection
4.3.2. Data Editing
4.3.3. Data correction
4.3.4. Data Manipulation
4.3.5. Data Storage
4.3.6. Document Production and Reports
4.4. Traditional transaction processing Applications
4.4.1. Order Processing Systems
4.4.2. Purchase Systems
4.4.3. Accounting Systems
4.5. Case Studies Based on TPS for Railway Reservation, Online Admission Process, Hospital Management and Hotel Management.
Reference Books:
1. Ralph M. Stair & George W. Reynolds – Principles of Information Systems – Thomson Learning
2. NCC – Introduction to system analysis and Design – Galgotia Publications
3. CVS Murthy – Management information Systems – Text & Applications-HPH
S. Y. B. C. A. Semester 4
Effective From: June-2012
Paper No.: 402
Paper Title: SOFTWARE ENGINEERING - II
1. **Business Blue Print**
1.1. Flow Diagram Of Application
1.2. Output Design
1.3. Input Design
1.4. Freezing Business Blue Print
2. **Information Systems Development**
2.1. Code Design
2.2. Test Data Preparations
2.3. Module Testing
3. **Software Testing**
3.1. Testing Fundamentals
3.2. Functional and Structural Testing
3.3. Testing Process
4. **Application Change Over**
4.1. Integrated Testing
4.2. Data Creation & Conversion
4.3. Types of Changeover
4.4. User Training
5. **System Documentation And Maintenance**
5.1. Documentation Essentials
5.2. Documentation Methods
5.3. Developer and User Manuals
5.4. Review & monitoring Of Execution
5.5. Application Change Management
**Note:** Case studies may be carried out at appropriate stages of the course.
**Reference Books :**
5. Software Engineering A Concise Study – Kelkar - PHI
7. Satzinger, Jackson, Burd: Systems Analysis & Design in a Changing World
Paper No.: 403
Paper Title: JAVA PROGRAMMING
1. Introduction to Java
1.1. Properties of Java
1.2. Comparison of java with C++
2. Java Developer’s Kit (JDK) and its uses
2.1. Java Compiler
2.2. Java Interpreter
2.3. Java Debugger
2.4. Applet Viewer
3. Basic Concepts
3.1. Identifier, Literals , Operators , Variables
3.2. Keywords
3.3. Data Types
4. Control Structures
4.1. Branching: If – Else, Switch
4.2. Looping : While, Do-while , For
5. Classes and Objects
5.1. Simple Class
5.2. Fields
5.3. Access Controls
5.4. Object Creation
5.5. Construction and Initialization
5.6. Methods
5.7. This
5.8. Overloading Methods
5.9. The main Method
6. Interfaces
6.1. Introduction to Interfaces
6.2. Interface Declaration
6.3. Inheriting and Hiding Constants
6.4. Inheriting, Overloading and Overriding Methods
6.5. Interfaces Implementations
7. Exceptions
7.1. Introduction to Exceptions
7.2. Creating Exception Types
7.3. Throw
7.4. Try, Catch and Finally
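Topics 7.2–7.4 can be sketched in a few lines of Java. The class names `InvalidAgeException` and `ParseDemo` are illustrative, not part of the syllabus:

```java
// 7.2 Creating an exception type by extending Exception.
class InvalidAgeException extends Exception {
    InvalidAgeException(String msg) { super(msg); }
}

public class ParseDemo {
    // 7.3 throw: signal the error condition to the caller.
    static int checkAge(int age) throws InvalidAgeException {
        if (age < 0) throw new InvalidAgeException("age cannot be negative");
        return age;
    }

    public static void main(String[] args) {
        // 7.4 try, catch and finally.
        try {
            checkAge(-5);
        } catch (InvalidAgeException e) {
            System.out.println("caught: " + e.getMessage());
        } finally {
            System.out.println("finally always runs");
        }
    }
}
```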
8. Threads
8.1. Introduction to Threads
8.2. Thread Model
8.3. Priority of Threads
8.4. Inter Thread Communication
8.5. Synchronization
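Topics 8.3–8.5 can be combined in one small sketch (the class name `CounterDemo` is illustrative): two threads increment a shared counter, `synchronized` prevents lost updates, and `join` provides the inter-thread coordination.

```java
public class CounterDemo {
    private int count = 0;

    // 8.5 synchronized ensures only one thread updates count at a time.
    synchronized void increment() { count++; }

    int value() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t2.setPriority(Thread.MAX_PRIORITY); // 8.3 priority is only a scheduling hint
        t1.start(); t2.start();
        t1.join(); t2.join();                // 8.4 wait for both threads to finish
        System.out.println(c.value());       // 20000
    }
}
```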
9. Strings
9.1. Basic String Operations
9.2. String Comparison
9.3. String Buffer Class
10. Packages
10.1. Package Naming
10.2. Type Imports
10.3. Package Access
10.4. Package Contents
10.5. Package Object and Specification
11. The Applet Classes
11.1. Applet Basics
11.2. Applet Architecture
11.3. Applet skeleton
11.4. Applet Display Methods
11.5. HTML APPLET Tag (<APPLET>)
11.6. Passing Parameters to Applets
Reference Books:
4. Steven Haines – Java 2 From Scratch PHI.
5. E-Balaguruswamy – Programming in Java
6. Java : How to Program – Deitel & Deitel - PHI
Paper No.: 404
Paper Title: .NET PROGRAMMING
1. Overview of Microsoft .NET Framework
1.1. The .NET Framework
1.2. The Common Language Runtime (CLR)
1.3. The .NET Framework class Library
2. Visual Basic .NET Programming
2.1. Working with Tool Box Controls
2.1.1. Common controls – Label, Text Box, Button, Check Box, Radio Button, Date TimePicker, List Box, Combo box, Picture Box, Rich Text Box, Tree View, Tool Tip, Progress bar, Masked Text box, Notify Icon, Link Label, Checked List box
2.1.2. Container
2.1.3. Data – Data Set, Data Grid
2.1.4. Component – Image list, error provider, Help provider, Timer
2.2. Working with Menus and Dialogue Boxes
2.3. Exception Handling
2.3.1. Structured Error Handling
2.3.2. Unstructured Error Handling
2.4. Using Modules and Procedures
2.5. Using Arrays and Collections
3. Object Oriented Programming
3.1. Creating Classes, Object Construction & Destruction
3.2. Abstraction, Encapsulation & Polymorphism
3.3. Interfaces & Inheritance
4. Database access using ADO.NET
4.1. Visual Database Tools
4.2. ADO .NET Object Model
4.3. ADO .NET Programming
Reference Books:
1. Visual Basic .NET Programming (Black Book) – by Steven Holzner, DreamTech Publication
2. Mastering Visual Basic.NET by Evangelos Petroutsos BPB Publication
5. Database Programming with Visual Basic.NET and ADO.NET - by F.Scott Barker – Sams Publication
7. .NET – Complete Development Cycle - by G. Lenz, T. Moeller, Pearson Education
Paper No.: 405
Paper Title: WEB DESIGNING
1. Creating Web Sites
1.1. Using Front Page
1.2. Table
1.3. Form
1.4. Frame
1.5. Link Bars
1.6. Theme
1.7. Font
1.8. Picture
1.9. DHTML Effects
1.10. Styles
1.11. Publish
1.12. Using HTML
1.13. Structure
1.14. Text and Paragraph Formatting Tags
1.15. Headings
1.16. Lists
1.17. Links
1.18. Table
1.19. Form
1.20. Frame
1.21. Image Maps
1.22. Audio & Video Tags
1.23. CSS (Embedded & Importing)
1.24. Properties: Font, Text, Margin, Border, List, Color & Background, Box
2. DHTML & Java Script
2.1. Static, Dynamic and Active Page
2.2. DHTML Events
2.2.1. Window, Form, Keyboard, Mouse
2.3. Java Script
2.3.1. Overview of Client & Server Side Scripting
2.3.2. Structure of JavaScript
2.3.3. Basic Commands of JavaScript
2.3.3.1. Functions
2.3.3.2. Operators
2.3.3.3. Looping Statements
3. **Hosting Web Pages**
3.1. Domain Name System
3.2. Protocols
3.2.1. Window based FTP (Upload & Download)
3.3. Role of Web Server in Web Publishing
3.3.1. Communication between Web Server & Web Browser
4. **2D Animation (Using Flash 5.0)**
4.1. Introduction
4.2. Toolbox & Toolbars
4.3. Types of Animation
4.3.1. Key Frame
4.3.2. Tweening
4.3.2.1. Shape
4.3.2.2. Motion
4.4. Use of Movie Clips, Buttons, Graphics
4.5. Scripting
4.5.1. Basic Actions
4.5.1.1. Go To, Play, Stop, Get URL, FSCommand, LoadMovie
4.6. Layers
4.6.1. Concepts
4.6.2. Uses
4.6.3. Inserting and Deleting
4.6.4. Motion guide Layer
4.7. Publishing Animation
**Reference Books:**
3. Advanced HTML Companion – Keith S. & Roberts – AP Professional
4. Mastering Photoshop 6.0 – BPB publications Steve Romaniello
5. Flash Bible – IDG Books India Reinhardt, Robert
6. Flash: Magic – Techmedia Emberton, David J.
7. The Complete Reference HTML – TMH Powel, Thomas A.
8. HTML Unleashed – Techmedia Darnell Rick
9. Microsoft FrontPage 2002 24 Hours – Techmedia (SAMS), Rogers Cadenhead
10. JavaScript Programming for the Absolute Beginner – Harris – PHI
11. JavaScript Step by Step – Suehring PHI
Paper No.: 406
Paper Title: Practical
All Students have to carry out practical work in Subjects – 403, 404 & 405
<table>
<thead>
<tr>
<th>No</th>
<th>Course Type</th>
<th>Subject</th>
<th>Credit</th>
<th>Hrs.</th>
<th>Internal Marks</th>
<th>External Marks</th>
<th>External Exam Duration</th>
<th>Total Marks</th>
</tr>
</thead>
<tbody>
<tr>
<td>401</td>
<td>Foundation compulsory</td>
<td>Information System</td>
<td>2</td>
<td>2</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>402</td>
<td>CORE Elective</td>
<td>Software Engineering – II</td>
<td>3</td>
<td>3</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>403</td>
<td>CORE</td>
<td>Java Programming</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>404</td>
<td>CORE</td>
<td>.Net Programming</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>405</td>
<td>CORE</td>
<td>Web Designing</td>
<td>4</td>
<td>4</td>
<td>30</td>
<td>70</td>
<td>3 Hrs</td>
<td>100</td>
</tr>
<tr>
<td>406</td>
<td>CORE</td>
<td>Practical</td>
<td>6</td>
<td>12</td>
<td>60</td>
<td>140</td>
<td>5 Hrs</td>
<td>200</td>
</tr>
<tr>
<td></td>
<td>Foundation Elective</td>
<td>To be Selected from the list (eg NCC/NSS/Saptdhara)</td>
<td>2</td>
<td>2</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td><strong>TOTAL</strong></td>
<td></td>
<td>25</td>
<td>31</td>
<td>210</td>
<td>490</td>
<td></td>
<td>700</td>
</tr>
</tbody>
</table>
Developing RDF-based Web services for supporting runtime matchmaking and invocation
How to cite:
© 2011 IEEE
Version: Accepted Manuscript
Link(s) to article on publisher’s website:
http://dx.doi.org/doi:10.1109/NWeSP.2011.6088211
http://www.mirlabs.org/nwesp11/
Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online’s data policy on reuse of materials please consult the policies page.
oro.open.ac.uk
Developing RDF-based Web Services for Supporting Runtime Matchmaking and Invocation
Hong Qing Yu, Dong Liu, Stefan Dietze and John Domingue
Knowledge Media Institute
The Open University
Milton Keynes, United Kingdom
Abstract—In our previous research, we proposed an Autonomous Matchmaking Web Services (AMWS) framework, in which Web services broker themselves and notify the service registry whether they are suitable for a given service request. The framework is designed to work more efficiently for dynamically assembling services at runtime in a massively distributed environment. The central point of AMWS is the use of RDF (Resource Description Framework) to carry all exchanged messages. However, the implementation details have not yet been discussed. In this paper, we focus on the two most important implementation parts: (1) transforming existing services into AMWS-compliant services that can consume and produce RDF messages; and (2) annotating and self-brokering service semantics at programming time using our Development-time Semantic Annotation and Lowering (DiSAL) Library.
Keywords: Web services, Autonomous Matchmaking, Semantic Web, Semantic Web Services, RDF.
I. INTRODUCTION
Automatic service discovery, selection, orchestration and invocation are the main research topics in the field of service-oriented computing. The majority of research efforts towards service automation are based on Semantic Web Services and target these issues at the semi-automatic level, in which each individual task (e.g. discovery, selection, orchestration and invocation) is completed automatically by machines but a human coordinates the whole service consumption lifecycle. Figure 1 shows the common development lifecycle based on current Semantic Web Services approaches. Services are annotated with semantics that are stored in a service semantic repository. When service users want to consume services, they usually first discover services through a semantic broker and then invoke the corresponding services.
Figure 1. The Current Semantic Web Service architecture.
The work at the semi-automatic level can be further divided into two categories A and B (see Figure 2).
Category A fully trusts and depends on the Web service broker. In this approach the client is thin, merely sending a service specification and waiting for the expected response, while the broker is fat, taking care of all the dynamic activities. The main research contributions in this category are the WSMX architecture [1] (e.g. IRS-III [2]) and OWL-S [12] based architectures (e.g. a service composition application [4] based on planning algorithms). However, to use these architectures, the user has to understand the data model of the services registered with the broker and the complicated OWL or WSMO ontologies in order to invoke the discovered services and represent the output data correctly. Due to these prerequisites, all proposed orchestration managers use pre-defined static orchestration rather than runtime configuration.
Category B uses a registration style to dynamically discover suitable services but leaves the other dynamic activities to the client side. In this way, the client side can control service orchestration or select a business strategy. The most recent contribution is the Linked Services concept [18], which uses Linked Data [13] principles to publish service semantics into the Linked Data cloud.
Figure 2. Two categories of semi-automatic approaches.
The applications in [15] and [19] were developed based on Linked Services.
Semi-automation benefits system development at design time, but it is not suitable for runtime environments in which the system context changes dynamically. Furthermore, the following two issues have to be addressed in order to achieve service consumption at the fully automatic level, in which the whole service consumption lifecycle is completed by machines without any human interaction.
- **Hidden data representation model**, which causes the problem of sharing a common understanding of the service input and output data representations. The major reason is the contradiction between semantic description/annotation and non-semantic Web service implementation. First, all current service description languages focus on describing service interfaces rather than the services themselves [26]. Second, no matter how services are semantically described or annotated, the underlying service invocation and response messages are still non-semantic, such as SOAP [8], XML or JSON. Therefore, manually setting up the invocation message to map the parameters is unavoidable [22].
- **Out-of-date semantics**, which leads to invocation faults. Current Semantic Web Service standards require service providers or users to register service semantics with a third-party-controlled service repository once a service is implemented or used. However, changes to the service cannot be automatically reflected, especially for users who registered the service semantics but cannot know about any changes on the service side.
The main conceptual frameworks and specifications for semantically describing services (e.g. WSMO, OWL-S and SAWSDL, which derives from WSDL-S [16]) are very comprehensive. Most SWS initiatives were built upon enriching SOAP-based Web services with semantics. Although some tools, such as WSMO Studio [20], have been developed to foster the use of these standards, these comprehensive semantic standards are too heavyweight, show only the interface semantics of a service, and still do not describe an important part of the service – the data representation model.
Only recently have lightweight services (e.g. Web APIs and RESTful services) and service annotations been researched. The main results of these recent studies are SA-REST [16], hRESTS [25], WSMO-Lite [14], MicroWSMO [17] and MSM (Minimal Service Model) [21]. These standards are easy to understand and adopt. However, current processes using these lightweight semantics still focus on service annotations for implementing a big middle broker layer (e.g. the iServe platform [21]) rather than on adding semantic value directly to the services themselves. Therefore, the issues of runtime service invocation and out-of-date semantics remain.
The Autonomous Matchmaking Web Services (AMWS) framework proposed in our previous work [9] aims to tackle the above issues and introduces a semantic message (RDF) based Web service standard. In this paper, we move one step forward and discuss how semantic-message-based autonomous matchmaking Web services can be implemented. A development-time semantic annotation and lowering library is implemented to minimize out-of-date semantics.
The remainder of this paper is organized into four sections. Section II explains the background and our motivation. Section III introduces the details of the service development. Section IV discusses related work. Section V draws a conclusion and discusses our future research directions.
II. OVERVIEW OF THE AUTONOMOUS MATCHMAKING WEB SERVICE FRAMEWORK
Our initial work [9] introduced the fundamental concepts of the Autonomous Matchmaking Web Service (AMWS) framework (see Figure 3). Compared to the traditional Semantic Web Services approach, there are two key changes:
- using RDF-based semantic message exchange protocols rather than syntax-based message protocols (e.g. SOAP);
- introducing a semantic query endpoint for the service.
The invocation endpoint is the same as a normal service invocation endpoint but consumes and produces only RDF messages. The semantic query endpoint takes an RDF service request message as input and responds to the user by dynamically checking whether its own semantics satisfy the request. In this way, the service registry is only in charge of broadcasting service requests to the suitable service category and lets the services themselves decide whether they match.

The AMWS framework brings at least two benefits to service-oriented computing.
- **Facilitating the full automation of the service consumption process**: all information and communication messages are semantically understandable through a unified RDF data structure and LOD semantics. As a result, the data structure and semantics are known at the same time, which is a fundamental requirement for enabling services to be automatically assembled and invoked.
- **Balancing workload among services, requester and registry**: each of the three parties takes its own responsibility, so the service consumption lifecycle is completed efficiently. Therefore, the AMWS framework is suitable for large-scale distributed applications.
However, current Web service standards do not support the semantic query endpoint and provide only an invocation endpoint. Therefore, converting existing services so that the Autonomous Matchmaking Web Service framework can be implemented is the core challenge. The following section gives details of our current approach to transforming existing Web services into AMWS-compliant services.
III. IMPLEMENTING AMWS-COMPLIANT WEB SERVICES
This paper focuses on transforming existing services into semantic-message-based Web services. There are three key steps.
(1) Transforming the communication protocol to RDF: this is done by wrapping the existing Web service functions so that they take only one input parameter, an RDF message string, and produce only one output parameter, also an RDF message string (see Subsection A for details).
(2) Semantically annotating services while developing: unlike Semantic Web Service technology, our annotations are added while the developer is wrapping a function or developing a new one. Therefore, the published semantics keep pace with the latest functional information (see Subsection B for details).
(3) Developing an extra function to be invoked as the semantic query endpoint: this function implements the service matchmaking algorithms and determines whether the service matches the request (see Subsection C for details).
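The three steps above can be sketched as a wrapper skeleton. The class and method names below (`AmwsService`, `invoke`, `queryMatch`) are our own illustrative assumptions, not API names from the paper; the bodies only echo the single-RDF-string-in/out contract of step (1) and the self-matchmaking contract of step (3), with a deliberately toy matching rule standing in for the real algorithm.

```java
// Illustrative sketch of an AMWS-compliant service wrapper (names assumed).
public class AmwsService {
    // Step (1): invocation endpoint — one RDF string in, one RDF string out.
    public String invoke(String rdfRequest) {
        // ... lower the RDF request, call the legacy service,
        // and lift the result back into an RDF message ...
        return "<rdf:RDF><!-- lifted response --></rdf:RDF>";
    }

    // Step (3): semantic query endpoint — the service checks its own
    // semantics against an RDF service request and answers yes/no.
    public boolean queryMatch(String rdfServiceRequest) {
        // ... compare the request against the annotations produced at
        // development time (step 2); a toy keyword check stands in here ...
        return rdfServiceRequest.contains("sparqlQuery");
    }
}
```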
A. An illustrative example: Wrapping the DBpedia Web service
We use the DBpedia SPARQL query RESTful service as an illustrative example to show the wrapping process. The detailed service information is listed in Table I.
<table>
<thead>
<tr>
<th>Service properties</th>
<th>Property values</th>
</tr>
</thead>
<tbody>
<tr>
<td>Endpoint</td>
<td><a href="http://dbpedia.org/sparql">http://dbpedia.org/sparql</a></td>
</tr>
<tr>
<td>HTTP method</td>
<td>GET</td>
</tr>
<tr>
<td>Parameters</td>
<td></td>
</tr>
<tr>
<td>format</td>
<td>rdf/xml, text</td>
</tr>
<tr>
<td>query</td>
<td>Your SPARQL query</td>
</tr>
<tr>
<td>debug</td>
<td>on, off</td>
</tr>
<tr>
<td>timeout</td>
<td>time duration (e.g. 30seconds)</td>
</tr>
</tbody>
</table>
TABLE I. THE SPECIFICATION FOR THE DBPEDIA QUERY FUNCTION
Figure 4 shows one instance of the RDF input message based on the defined RDF schema. It contains all the corresponding parameters for lowering.
Figure 5 shows the lowering technique used to translate the DBpedia SPARQL query function call from the RDF request into the actual service request. The HTTP GET method is used to invoke the function. RDFInputParser is a key function of DiSAL (see Subsection B) for lowering the input RDF message based on a SPARQL statement. The fixed parameter value format="rdf/xml" is used so that the response is created in RDF format. The other parameter values are obtained by parsing the input RDF message. After the service call, the results from the response are translated back into RDF format.
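The GET-request construction part of the lowering step can be sketched from Table I alone. The method name `buildDbpediaUrl` is ours, and the parsing of the RDF input message into these parameter values (done by RDFInputParser in the paper) is elided; only `format` is fixed to `rdf/xml`, as described above, while the other values come from the parsed request.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch of lowering the parsed parameter values into the DBpedia GET call.
public class LoweringSketch {
    static String buildDbpediaUrl(String sparql, String debug, String timeout)
            throws UnsupportedEncodingException {
        // format is fixed to rdf/xml so the response comes back as RDF.
        return "http://dbpedia.org/sparql"
                + "?format=" + URLEncoder.encode("rdf/xml", "UTF-8")
                + "&query=" + URLEncoder.encode(sparql, "UTF-8")
                + "&debug=" + debug
                + "&timeout=" + timeout;
    }
}
```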
Figure 4. An instance of the RDF input message based on the defined RDF schema.
Figure 5. Lowering the RDF request into the actual DBpedia service request.
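The lowering step can be sketched as follows. This is an illustrative JavaScript sketch rather than DiSAL's actual Java code; the function name `buildDbpediaRequest` and the plain-object input are assumptions, with parameter names taken from Table I:

```javascript
// Illustrative only: DiSAL itself is Java. Parameters already parsed out of
// the RDF input message (modeled here as a plain object) are turned into the
// DBpedia HTTP GET request URL. format=rdf/xml is fixed, per Table I.
function buildDbpediaRequest(parsedParams) {
  const endpoint = "http://dbpedia.org/sparql";
  const params = new URLSearchParams({
    format: "rdf/xml",                    // fixed: request an RDF response
    query: parsedParams.query,            // SPARQL taken from the RDF message
    debug: parsedParams.debug || "off",
    timeout: parsedParams.timeout || "30000",
  });
  return endpoint + "?" + params.toString();
}

const url = buildDbpediaRequest({ query: "SELECT * WHERE { ?s ?p ?o } LIMIT 1" });
```

The resulting URL can then be fetched with an ordinary HTTP GET, and the RDF response lifted back into the output message.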
B. Development-time Semantic Annotation and Lowering Library
To address the issue of out-of-date semantics, we propose adding semantic annotations inside the code while developing or modifying service functions. To achieve this, we developed the Development-time Semantic Annotation and Lowering (DiSAL) Java library, whose implementation structure is shown in Figure 6. DiSAL uses the openRDF\(^3\) and RDF2Go\(^4\) Java libraries to create RDF statements. A set of service description ontologies lets developers choose the one appropriate to the type of wrapped service (e.g. SOAP or RESTful). In the current implementation, the service description ontologies include hRESTs, WSMO-Lite, MicroWSMO and MSM. Moreover, DiSAL supports lowering an RDF document given a SPARQL query. The lowering function serves both the wrapping of existing services into Semantic Message based Web services and the development of new ones.

Figure 6. Implementation structure of the DiSAL Java Library
Figure 7 illustrates the annotation code for the DBpedia SPARQL query RESTful service. The annotation includes five parts: (1) constructing the namespace, service name, operation name, and invocation endpoint; (2) categorising the service; (3) declaring the HTTP invocation method; (4) defining input parameters and their model references; (5) defining output parameters and their model references.
```java
RestAnnotation ra = new RestAnnotation("http://www.testnamespace.com/",
        "DBpediaSparql", "sparqlQuery", "http://dbpedia.org/sparql/");
ra.setCategory("http://www.example/service/category/information/knowledge");
ra.setMethod("http://www.w3.org/2005/sparql/select");
inputElement1.put("message#1", "query");
inputElement1.put("message#1mapping",
        "http://www.example/service/input/" + "ontologyQueryEndpoint");
inputElement2.put("message#1", "debug");
inputElement2.put("message#1mapping",
        "http://www.example/service/input/" + "ontologyOptional");
inputElement3.put("message#1", "timeout");
inputElement3.put("message#1mapping",
        "http://www.example/service/input/" + "ontologyInvocationTimeout");
outputElement.put("message#1", "result");
outputElement.put("message#1mapping", "http://www.w3.org/2005/sparql/resultSet");
inputs.add(inputElement1); inputs.add(inputElement2); inputs.add(inputElement3);
ra.setParameters(inputs);
output.add(outputElement);
ra.setOutput(output);
```
Figure 7. An hRESTs service annotation example
Figure 8 shows the resulting hRESTs RDF annotation for the DBpedia SPARQL query RESTful service. Firstly, DBpediaSparql is defined as an hrests:Service (see F.1). Secondly, the sparqlQuery function is defined as an hrests:Operation of DBpediaSparql (see F.2). Thirdly, the sparqlQuery function is categorised as knowledge, a sub-class of information (see F.3). Fourthly, the hrests:Operation properties are described, including the invocation method (hrests:hasMethod), invocation endpoint (hrests:hasAddress), output message (hrests:hasOutputMessage) and input message (hrests:hasInputMessage) (see F.4). Finally, hrests:ModelProperty references are added to attach more accurate, domain-specific semantics. For instance, two model references can be used to indicate that the input message includes an optional debug parameter (see F.5).
C. Adding Extra Functions to Support AMWS Framework
By using the DiSAL library, each wrapped service carries semantics that can be evaluated to check whether the service can do the job described in an RDF-based semantic service request message. Therefore, if a service is to support the AMWS framework, an extra function (the semantic query endpoint) must be added that answers YES/NO to the registry by parsing the request semantics and comparing them with the service's own semantics (see Figure 2). The response message of the semantic query endpoint follows the AMWS Confirmation Response Message (CRM) standard represented in Figure 9. A CRM is composed of two parts: the yes/no confirmation and the runtime service invocation information.

Figure 9. The CRM ontology introduced in the AMWS framework.
We implement an example of the semantic query function using RDF2Go to parse the request semantics and check the service's semantic capability. The checking workflow is illustrated in Figure 10.
The matching algorithm used in the last step of the semantic query workflow is
\(^3\) http://www.openrdf.org/
\(^4\) http://semanticweb.org/wiki/RDF2Go
Since different kinds of services have heterogeneous business logic and concerns, the semantic checking and matchmaking workflows should be designed separately for their own purposes. The checking workflow and algorithm illustrated here therefore show only one possible implementation of the semantic check.
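As one concrete illustration of such a possible implementation (the names and the subset-matching rule here are assumptions, not the exact Figure 10 algorithm), a service could answer YES only when every model reference required by the request appears among those it advertises:

```javascript
// Illustrative sketch of a semantic check: the service confirms the request
// only when all required model references are among its advertised ones.
// The CRM-style response carries the yes/no confirmation plus invocation
// information when the match succeeds.
function semanticCheck(requestRefs, serviceRefs) {
  const advertised = new Set(serviceRefs);
  const matched = requestRefs.every((ref) => advertised.has(ref));
  return matched
    ? { confirmation: "yes", endpoint: "http://dbpedia.org/sparql" }
    : { confirmation: "no" };
}

const answer = semanticCheck(
  ["http://www.example/service/input/ontologyQueryEndpoint"],
  ["http://www.example/service/input/ontologyQueryEndpoint",
   "http://www.example/service/input/ontologyInvocationTimeout"]
);
```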
With both a service invocation endpoint and a semantic checking endpoint, the DBpediaSparql example service is compliant with the AMWS framework: it can be queried in a broadcast fashion and invoked via RDF-based invocation messages.
IV. RELATED WORK
The Linked Services [21] and Linked Open Services (LOS) [22] approaches both build on Linked Data theory as their technical foundation and agree that services should communicate using RDF to support an automatic Web service consumption life cycle.
The Linked Services approach focuses on annotating existing Web services and publishing their semantics to the Linked Data cloud in order to discover services using Linked Data theory. Based on the Linked Services proposal, a service annotation model (e.g. MSM), service annotation tools (e.g. Sweet [23] and SmartLink [24]) and a semantic description repository (e.g. iServe) have been developed. Although a preliminary service invocation engine has been developed to use the semantic data of published services, it still works at the dynamic-invocation level, because it is cumbersome to set up an invocation RDF that exactly matches the service's lowering annotations. Moreover, two further issues remain: (1) the underlying services carry no semantics themselves, as the semantics are stored in a third-party repository that must be manually updated when changes take place; (2) a centralised service semantic repository means that service discovery and invocation also pass through that centralised environment, so the scalability issue cannot easily be resolved.
The LOS approach focuses on wrapping existing RESTful services or SPARQL endpoints into services that can consume and produce RDF messages, using SPARQL constructs such as ASK or CONSTRUCT to support dynamic lowering and composition. The LOS service semantic model therefore mainly describes the SPARQL graph patterns of inputs and outputs. To the best of our knowledge, LOS does not support publishing service semantics or a dynamic service discovery methodology. Moreover, LOS again supports only dynamic rather than automatic service invocation, because service users need to know the lowering schema first in order for LOS to understand their RDF invocation requests.
V. CONCLUSION AND FUTURE WORK
In our previous research, the AMWS framework was proposed to make full use of Semantic Web technology for exchanging messages between Web service communication protocols. The advantage of AMWS is full support for automatic service discovery and invocation at runtime. However, its feasibility and implementation details had not been investigated. In this paper, we introduce an implementation process to wrap an existing Web service (the DBpedia SPARQL query RESTful service in our case) into an AMWS-compliant semantic message based Web service. The wrapping process includes (1) reengineering the Web service to receive and send RDF semantic input and output messages; (2) annotating the service at development time using the DiSAL library; and (3) developing and adding the semantic checking function to the service.
To fully support the whole working process of the AMWS framework, much research and implementation work remains. We list three priorities below:
- investigating optimization algorithms to deal with the service selection issue caused by the broadcast discovery methodology.
- developing an automatic mediation engine to mediate different service annotation reference terms from different annotation ontologies. One possible way is to enrich the Service Searching Message (SSM) with the mediation results.
- developing DiSAL libraries for other Web service programming languages (e.g., C#).
ACKNOWLEDGMENT
The authors thank the EU-funded Multi-type Content Repurposing and Sharing in Medical Education (mEducator) ECP2008EDU/418006 project for supporting this work.
REFERENCES
CS 106AX Midterm Solution
The course staff spent several hours grading your midterms over the weekend, so I’m happy to report they’ve been graded, and your graded midterms will be published via Gradescope ahead of today’s lecture. The exam was intended to be challenging, but many of you did brilliantly, and most of you did well beyond what I expected. I’m happy to go with a traditional curve for an accelerated course, where I set the median grade to sit just above the A-/B+ border.
The complete histogram of grades is presented below, where each dot represents a single exam score (technically out of 70 points, with space for extra credit of up to 4 points).
(histogram of individual exam scores)
You can determine your letter grade by looking up your score in the following table:
<table>
<thead>
<tr>
<th>Range</th>
<th>Grade</th>
<th>N</th>
</tr>
</thead>
<tbody>
<tr>
<td>71–74</td>
<td>A+</td>
<td>4</td>
</tr>
<tr>
<td>65–70</td>
<td>A</td>
<td>13</td>
</tr>
<tr>
<td>60–64</td>
<td>A−</td>
<td>15</td>
</tr>
<tr>
<td>55–59</td>
<td>B+</td>
<td>8</td>
</tr>
<tr>
<td>47–54</td>
<td>B</td>
<td>5</td>
</tr>
<tr>
<td>40–46</td>
<td>B−</td>
<td>2</td>
</tr>
<tr>
<td>32–39</td>
<td>C+</td>
<td>1</td>
</tr>
<tr>
<td>21–31</td>
<td>C</td>
<td>0</td>
</tr>
<tr>
<td>13–20</td>
<td>C−</td>
<td>2</td>
</tr>
<tr>
<td>00–12</td>
<td>D</td>
<td>0</td>
</tr>
</tbody>
</table>
Median = 62 (85.7%)
Solution 1: Simple JavaScript expressions and methods [10 points/70 total]
(1a) [3 points] Compute the value of each of the following JavaScript expressions:
<table>
<thead>
<tr>
<th>Expression</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 + 2 * 3 % 5</td>
<td>2</td>
</tr>
<tr>
<td>"C" === "CC"</td>
<td>false</td>
</tr>
<tr>
<td>$106 + "AX" + 1 + 2 + 3$</td>
<td>106AX123</td>
</tr>
</tbody>
</table>
(1b) [3 points] Assume that the method `halloween` has been defined as follows:
```javascript
function halloween(word) {
let result = "";
while (word.length > 1) {
if (word.charAt(0) > word.charAt(1)) {
word = word.substring(0, Math.floor(word.length/2));
} else {
word = word.substring(Math.floor(word.length/2));
}
result += word;
}
return result;
}
```
What is the value returned by `halloween("hocuspocus")`?
Answer to problem 1b: pocuspop
(1c) [4 points] What output is printed by the following Problem1c program?
```javascript
function Problem1c() {
let doris = "bostonterrier";
let boston = function(s, y) {
return doris.substring(0, y) + s.substring(y);
};
let terrier = function(s, x) {
let toy = boston(s, x);
toy += String.fromCharCode("4".charCodeAt(0) + 5);
return toy;
};
doris = dogpark(boston, terrier);
doris.concat("play");
console.log(doris);
}
function dogpark(f, g) {
return f("livertreat", 6) + " " + g("puppyfood", 6);
}
```
Answer to problem 1c:
```
bostonreat bostonood9
```
Solution 2: Graphics, Callbacks, & Animation [15 points/70 total]
In this problem, you’ll be implementing the controls for a miniature game of Snake, called “ByteSnake”!
You will start with a small square, as below. You’ll animate it so that every TIME_STEP, it moves by a pre-determined amount; and then, you’ll allow the player to change the direction it moves. We’ll build up the solution step by step!
You can control the snake in the demos using the W (to go up), A (to go left), S (to go down), and D (to go right) keys.
Part 1: Animating the Player
We’ve created the player’s cube for you– your job is to make it move! We’ll start by having it always move to the right– marching infinitely past the edges of the screen. We expect you to write about 6 lines of code for this problem. You'll find the SPEED constant helpful.
Part 2: Controlling the Player
Now that the player is animated, let’s create the ability to control it! We expect you to write 20 or so lines of code for this problem.
For this problem, you'll be using the keydown event, just like in Wordle. We provide the same getKeystrokeLetter you used on that assignment.
You will need to create an event listener for the keydown event. When the event fires, you should use the getKeystrokeLetter function to get which key was pressed.
- For W, the snake should move up
- For A, the snake should move left
- For S, the snake should move down
- For D, the snake should move right
- In any other case, simply ignore the keystroke.
Once you figure out what direction you want the snake to be moving, you should change the X and Y velocities to move the snake in that direction.
(space for the answer to problem #2 appears on the next page)
Answer to problem #2:
```javascript
function animatePlayer(gw) {
// This code is given, and correct
let player = GRect(GWINDOW_WIDTH / 2 - PLAYER_SIZE,
GWINDOW_HEIGHT / 2 - PLAYER_SIZE, PLAYER_SIZE, PLAYER_SIZE);
gw.add(player);
//**************************
//*         PART 1         *
//**************************
// Step 1. Create variables to keep track of the X and Y velocities. (We expect 2 lines of code.)
let vx = SPEED;
let vy = 0;
// Step 2. Every TIME_STEP, your player should move by the current X and Y velocity.
setInterval(step, TIME_STEP);
function step() {
player.move(vx, vy);
}
//**************************
//*         PART 2         *
//**************************
// Step 1. Create your keydown callback function.
function onKeydown(e) {
let letter = getKeystrokeLetter(e);
if (letter === "w") {
vx = 0;
vy = -SPEED;
} else if (letter === "a") {
vx = -SPEED;
vy = 0;
} else if (letter === "s") {
vx = 0;
vy = SPEED;
} else if (letter === "d") {
vx = SPEED;
vy = 0;
}
}
// Step 2. Attach your callback function to gw as event listeners for the keydown event.
gw.addEventListener("keydown", onKeydown);
}
```
Part 3: Wrap-around (EXTRA CREDIT)
Now that you’ve implemented player control, the last thing to do is make it so that the
player wraps around, rather than falling off the sides! We expect you to write about 15 lines of code for this part.
**This is an extra credit problem. Come back to it at the end if you have time to complete it.**
For this problem, you’ll be modifying the function you created in part 1. You may assume that `PLAYER_SIZE`, `GWINDOW_HEIGHT`, `GWINDOW_WIDTH`, and `SPEED` are all set so you’ll never have to worry about the player being halfway offscreen in any direction.
The player should wrap around on all sides of the GWindow; if they are past the bottom, they should wrap around to the top, and vice versa; and if they are past the right of the screen, they should wrap around to the left, and vice versa.
```javascript
// The step function above changes to read as so:
function step() {
player.move(vx, vy);
//**********************************
// PART 3 - EXTRA CREDIT
//**********************************
if (player.getX() >= GWINDOW_WIDTH) {
player.setLocation(0, player.getY());
} else if (player.getX() <= -PLAYER_SIZE) {
player.setLocation(GWINDOW_WIDTH - PLAYER_SIZE, player.getY());
} else if (player.getY() >= GWINDOW_HEIGHT) {
player.setLocation(player.getX(), 0);
} else if (player.getY() <= -PLAYER_SIZE) {
player.setLocation(player.getX(), GWINDOW_HEIGHT - PLAYER_SIZE);
}
}
```
Solution 3: Strings [15 points/70 total]
For this problem, you’ll leverage your understanding of strings to generate a random reflector that you might use in your implementation of Assignment 4’s Enigma.
Recall that the reflector encrypts each letter of the alphabet to some other letter, and that all reflector encryptions are reversible. Thus, if "A" is encoded as "P", then "P" must be encoded as an "A".
The permutation for the reflector used in the Assignment 4 specification was:
"IXUFEZDAOMTKQJWNSRLCPBG"
Note that "A" is encoded as "I", "B" is encoded as "X", "C" is encoded as "U", and so forth. And because the permutation is a reflector, it must be the case that "I" maps back to "A", "X" maps back to "B", "U" maps back to "C", and so on. No character in the above permutation occupies its normal spot in the alphabet, since that would imply that character is its own encryption, and that’s not permitted.
For this problem, your job is to implement the buildReflector function, which constructs and returns a valid reflector permutation. Because this function is algorithmically complex, we give you the general algorithm in pseudocode:
```javascript
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const STARTER = "--------------------------";
function buildReflector() {
let reflector = STARTER;
let candidates = ALPHABET;
while (candidates.length > 0) {
select a random position from the string of candidates, and remember the character there
remove the selected character from the string of candidates
select a random position from the updated string, and remember the character there
remove the selected character from the string of candidates
update the reflector so that each of the two selected characters occupies the other’s
spot in the alphabet
}
return reflector;
}
```
Note that two characters are deleted from candidates on each iteration, which means that the body of the loop will always execute 13 times. And recall the randomInteger function from Assignment 1, where a call to randomInteger(a, b) for integers a and b is equally likely to return any integer between a and b, inclusive.
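`randomInteger` is supplied by the Assignment 1 library; a minimal implementation consistent with the stated contract (inclusive bounds, each integer equally likely) would look like this sketch:

```javascript
// Returns a uniformly random integer between a and b, inclusive.
// Math.random() yields a value in [0, 1), so scaling by (b - a + 1) and
// flooring covers every integer in the range with equal probability.
function randomInteger(a, b) {
  return a + Math.floor(Math.random() * (b - a + 1));
}
```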
(space for the answer to problem #3 appears on the next page)
Answer to problem #3:
```javascript
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const STARTER = "--------------------------";
function buildReflector() {
let reflector = STARTER;
let candidates = ALPHABET;
while (candidates.length > 0) {
let pos1 = randomInteger(0, candidates.length - 1);
let ch1 = candidates.charAt(pos1);
candidates =
candidates.substring(0, pos1) + candidates.substring(pos1 + 1);
let pos2 = randomInteger(0, candidates.length - 1);
let ch2 = candidates.charAt(pos2);
candidates =
candidates.substring(0, pos2) + candidates.substring(pos2 + 1);
pos1 = ch2.charCodeAt(0) - "A".charCodeAt(0);
pos2 = ch1.charCodeAt(0) - "A".charCodeAt(0);
reflector =
reflector.substring(0, pos1) + ch1 + reflector.substring(pos1 + 1);
reflector =
reflector.substring(0, pos2) + ch2 + reflector.substring(pos2 + 1);
}
return reflector;
}
```
Solution 4: Arrays [15 points/70 total]
Consider the following permutation:
<table>
<thead>
<tr>
<th>6</th>
<th>3</th>
<th>8</th>
<th>5</th>
<th>4</th>
<th>1</th>
<th>0</th>
<th>7</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>7</td>
<td>8</td>
</tr>
</tbody>
</table>
The above sequence is a permutation of (that is, an arbitrary reordering of) the numbers 0 to 8, inclusive. What’s interesting is that we can identify **cycles** in the above permutation (and indeed, any permutation), by following the numbers around the array until we reach our starting point again. For example, the permutation above has 5 cycles as follows:
6 → 0, 3 → 5 → 1, 8 → 2, 4, and 7
The 3 → 5 → 1 cycle is implied by the permutation, because the 5 is at index 3 and therefore follows the 3, the 1 is at index 5 and therefore follows the 5, and the 3 is at index 1 and therefore follows the 1. Specifically, every number \( k \) in the permutation is part of exactly one cycle, and \( k \)’s successor in the cycle is at index \( k \).
Note that 8 and 2 are mutual successors of one another, since the 2 is at index 8 and the 8 is at index 2. That means 2 follows 8 follows 2 in another cycle separate from 3 → 5 → 1. And 4 is in its own cycle since it resides at index 4 and is its own successor. Cycles of size 1 are completely legit.
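The successor rule can be traced mechanically; `traceCycle` below is an illustrative helper (not part of the required solution) that follows a single cycle from a starting value:

```javascript
// Follows one cycle of a permutation: starting from k, the successor is the
// element stored at index k, and the walk stops on returning to the start.
function traceCycle(perm, start) {
  const cycle = [];
  let curr = start;
  do {
    cycle.push(curr);
    curr = perm[curr];   // k's successor lives at index k
  } while (curr !== start);
  return cycle;
}
```

For the sample permutation [6, 3, 8, 5, 4, 1, 0, 7, 2], starting from 3 yields the cycle [3, 5, 1], and starting from 4 yields the singleton cycle [4].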
Each cycle can be represented as an integer array, and the collection of cycles for any given permutation can be expressed as an array of integer arrays. Hence, given a permutation
\[
[6, 3, 8, 5, 4, 1, 0, 7, 2]
\]
we can express that permutation’s partition into cycles as
\[
[[6, 0], [3, 5, 1], [8, 2], [4], [7]]
\]
(Note that all rotations of a cycle are equivalent, so that \([3, 5, 1]\) could have been \([5, 1, 3]\) or \([1, 3, 5]\) instead and all would have been good.)
Implement the **permutationToCycles** function, which accepts a valid permutation and returns a partition of that permutation as an array of integer arrays. You can trust that the input is a valid permutation, though you shouldn’t make any assumptions about the permutation’s length, as it could be a permutation of the numbers 0 through 8 as above, or it could be a permutation of the numbers 0 through 45, or 0 through 1234.
Implementation hint: For any permutation of size \( n \), maintain an array of \( n \) Booleans to track whether a particular integer in the permutation has already been processed.
*(space for the answer to problem #4 appears on the next page)*
Answer to problem #4:
```javascript
/*
 * Function: permutationToCycles
 * -----------------------------
 * Accepts a valid permutation of the first n natural numbers and returns
 * the collection of cycles implied by it, as per the problem statement.
 *
 * permutationToCycles([6, 3, 8, 5, 4, 1, 0, 7, 2])
 * -> [[6, 0], [3, 5, 1], [8, 2], [4], [7]]
 * permutationToCycles([0, 2, 3, 1, 5, 4])
 * -> [[0], [2, 3, 1], [5, 4]]
 * permutationToCycles([7, 0, 1, 2, 3, 4, 5, 6])
 * -> [[7, 6, 5, 4, 3, 2, 1, 0]]
 */
function permutationToCycles(perm) {
let used = [];
for (let i = 0; i < perm.length; i++) used.push(false);
let cycles = [];
for (let pos = 0; pos < perm.length; pos++) {
if (!used[pos]) {
let cycle = [];
let curr = pos;
while (!used[curr]) {
cycle.push(perm[curr]);
used[curr] = true;
curr = perm[curr];
}
cycles.push(cycle);
}
}
return cycles;
}
```
Solution 5: Working with data structures [15 points / 70 total]
By now, you’re all quite familiar with the game of Wordle, even if you’d somehow missed news of it prior to CS106AX’s Assignment 3. Of course, the goal is to uncover a secret, five-letter English word via a series of six or fewer educated guesses. With each guess, the game identifies correctly placed letters by shading them green and further identifies correctly guessed but incorrectly placed letters by shading them yellow.
A JavaScript object modeling a successful game of Wordle might look like this:
```javascript
let game1 = {
secret: "rerun",
guesses: [
{ guess: "reach",
green: [0, 1],
yellow: []
},
{ guess: "refer",
green: [0, 1],
yellow: [4]
},
{ guess: "rears",
green: [0, 1],
yellow: [3]
},
{ guess: "rerun",
green: [0, 1, 2, 3, 4],
yellow: []
}
]
};
```
The above includes the secret word and the series of guesses that led to the win. The sequence of guesses is itself modeled as an array—keyed by guesses—where the first guess occupies position 0, the second guess occupies position 1, and so forth. Each guess is represented as a smaller object with three keys: the guess, the indices of the correctly placed green letters, and the indices of the correctly guessed, incorrectly placed, yellow letters.
Some versions of Wordle require that a correctly placed letter never be moved in subsequent guesses. As it turns out, the game depicted by game1 respects this requirement: The leading "re" appears in every guess until the game is over. Restated, all letters, once shaded green, remain green for the lifetime of the game.
This is an example of what we’ll call perfect play.
Contrast the above to the game modeled here:
```javascript
let game2 = {
secret: "tower",
guesses: [
{
guess: "mouth",
green: [1],
yellow: [3]
},
{
guess: "torch",
green: [0, 1],
yellow: [2]
},
{
guess: "toner",
green: [0, 1, 3, 4],
yellow: []
},
{
guess: "tours",
green: [0, 1],
yellow: [3]
},
{
guess: "tower",
green: [0, 1, 2, 3, 4],
yellow: []
}
]
};
```
Here, the player seemingly goofed when guessing "tours". This mishap is obvious from the green properties in the objects modeling the two relevant guesses—the 3 and 4 that appear in `game2.guesses[2].green` are missing from `game2.guesses[3].green`.
For this problem, you’re to implement a function called `playedPerfectly`, which accepts a game object like those structured above and returns `true` if the game was played perfectly, and `false` otherwise. (You needn’t involve the yellow fields here. They’re simply included because it would have been strange to omit them from the discussion. A more sophisticated definition of perfect play might involve the yellow arrays, but in the interest of time, we won’t be that sophisticated.)
Your `playedPerfectly` implementation should examine the green arrays and confirm that once an index appears in some green array, it must appear in all subsequent ones. If your implementation detects a violation, it should return `false` without continuing. If no violations are discovered anywhere, you should return `true`.
Place your implementation on the next page.
```javascript
function playedPerfectly(game) {
  for (let i = 0; i < game.guesses.length - 1; i++) {
    for (let j = 0; j < game.guesses[i].green.length; j++) {
      let pos = game.guesses[i].green[j];
      if (game.guesses[i + 1].green.indexOf(pos) === -1) {
        return false;
      }
    }
  }
  return true;
}
```
Visual Dynamic Environment for Distributed Systems
Vito Di Gesù, Francesco Isgrò, Biagio Lenzitti, Domenico Tegolo
*Dipartimento di Matematica ed Applicazioni, University of Palermo, ITALY
Abstract. Algorithms based on information fusion are often embodied in visual perception systems. Distributed architectures have recently been proposed to perform integrated computation. The complexity of distributed systems concerns both their design and the software environment used to develop applications. The visual and iconic programming style is intended to provide expressive tools to implement, debug, and execute programs in a distributed environment. Multi-layer graph languages seem suitable to handle such complexity. This paper describes the design of a visual dynamic environment (VDE) based on a graph-grammar. A new class of dynamic visual interfaces is also introduced, and its properties are described. The proposed VDE has been implemented on the first emulated version of the machine M-VIF (Machine Vision based on Information Fusion).
1. Introduction
Humans interact with the real environment through their senses; they react and make decisions depending on the result of such interaction. This simple consideration has suggested most of the interactive computer systems based on virtual reality [1] and multi-media [2]. Vision plays a relevant role among the human senses, and many efforts have been made in the last decade to improve the design of visual interfaces for computer systems (CAD/CAM, windows, X-windows).
The performance of a visual system lies in the ability to focus on the areas of interest by maximizing a given cost/benefit utility criterion [3,4]. The selection of interesting regions is relevant; in fact, the ability to select salient features is the basic question of intelligence, both artificial and natural. Moreover, visual perceptual systems should be able to adapt their behaviour depending on the current goal and the nature of the input data. Such performance can be obtained in systems able to interact dynamically with the environment.
Information-fusion techniques, implemented on distributed systems (DS), are suitable to develop goals-oriented strategies [5]. In fact, the computation can be driven by complementary information sources, and it may evolve on the basis of adaptive internal models and environment transformations. Moreover, results of several processing elements can be integrated to find an optimal solution.
Distributed systems are characterized by a huge number of states and parameters, distributed across several elementary sub-system units. Moreover, functional dependencies exist between states and parameters. Automatic control assessment and motion detection in risky environments are examples of distributed systems. In these cases, visual data are usually collected from multiple sensors, and their elaboration is carried out on local processing units, which are logically interconnected to share and exchange knowledge (models, data and algorithms).
A visual dynamic environment must provide a synthetic view of a distributed system's behaviour, and guide the understanding of local and/or global computation phases. In fact, the design and implementation of algorithms on a multi-processor machine depend on the distribution of data and processes among the processing units; dynamic control is therefore relevant to optimize and tune the execution of processes. Dynamic visual tools allow such user/machine interaction to be realized in a natural and efficient way.
This paper describes the design of a VDE based on a multi-layer graph-grammar. A new class of dynamic visual interfaces is introduced, and its properties are described. The proposed VDE has been implemented on the first emulated version of the machine M-VIF [6].
The concept of dynamic icon, as introduced in DIVA (Dynamic Interface for Visual Applications) [6], will be used to extend visual interfaces to DSs.
In Section 2 the definition of VDE and its related properties are given. Section 3 briefly describes the M-VIF machine. Section 4 describes the implementation of the VDE used to program M-VIF. An example of distributed computation using the proposed VDE is shown in Section 5. Concluding remarks are given in Section 6.
Informally, a VDE is an iconic system that allows the user both to build a view of an underlying DS and to allocate and control the related co-operating processes. The bricks of a VDE are both conventional and Dynamic Icons (DI) [7]. The semantic value of a DI depends on the evolution of the distributed processes. In order to define VDEs more formally, it is necessary to introduce some notation and definitions about distributed systems.
**Definition 1:** A DS is a 5-tuple \(\langle S, I, O, \Delta, W \rangle\), where \(S\) is the set of states, \(I \subset S\) is the set of inputs, and \(O \subset S\) is the set of outputs. Transitions among states of a DS are represented by the function \(\Delta : \mathcal{P}(S) \to \mathcal{P}(S)\), where \(\mathcal{P}(S)\) is the set of parts of \(S\). \(W\) is a mask function \(W : S \times S \to \{0,1\}\) defined on the transition arcs.
On the other hand, visual computation based on information fusion can be formulated in terms of five functional modules: Observe, Process, World Model, Choose Next, and Action. Sensor data are provided to the Observe module, which performs early vision tasks; the information flows to the Process module, whose processes (algorithms executed on proper hardware) are selected by the Choose Next module; the processes are also driven by the World Model. The outputs of the computation are directed both to the Action module, which operates on the environment, and to the Observe module, which drives further sensor explorations. The World represents the environment on which a DS operates. Within the system, information flows in a continuous active feedback loop (see Figure 1). The states of a DS can now be defined as:
\[ S = D \cup M \cup A \cup P \]
where: \(D\) represents input/output data, collected by sensors from the world or produced as results of a given computation; \(M\) represents models (for example relations between objects, sensor characteristics, the environment); \(P\) is the set of distributed processes; and \(A\) is the set of actions that modify the world (open/close a door, activate an alert system, ...). Moreover, \(I \subset D\) is the set of input states and \(O \subset D \cup A\) is the set of output states. Here the term process has a wide meaning: it could be a single algorithm or a sequence of processes associated with a set of processing units (PU) connected in a given topology.
The transition function is defined as follows:
\[
\begin{align*}
\Delta_1 &: \mathcal{P}(D) \to \mathcal{P}(P) & \text{data / processes} \\
\Delta_2 &: \mathcal{P}(P) \to \mathcal{P}(D) & \text{processes / data} \\
\Delta_3 &: \mathcal{P}(M) \to \mathcal{P}(P) & \text{models / processes} \\
\Delta_4 &: \mathcal{P}(P) \to \mathcal{P}(M) & \text{processes / models} \\
\Delta_5 &: \mathcal{P}(P) \to \mathcal{P}(P) & \text{processes / processes}
\end{align*}
\]
Each function, \(\Delta_i\), defines logical (or physical) links among elements of \(S\). The computation evolves on the basis of both the transition and the mask functions. A transition is active if all its input links have mask value equal to 1. When a transition \(\mathcal{P}(X) \to \mathcal{P}(Y)\) is active, information (data, models and tasks) can flow from \(X\) to \(Y\). The nature of the information flow depends on the sets \(X\) and \(Y\). For example, if \(X \in \mathcal{P}(D)\) and \(Y \in \mathcal{P}(P)\), data are sent from \(X\) to \(Y\); if \(X, Y \in \mathcal{P}(P)\), both data and tasks can be sent from \(X\) to \(Y\).
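The mask-gated transitions described above can be sketched in a few lines of Python. This is a hypothetical rendering for illustration only: the class, the state names (`d1`, `p1`, ...), and the link representation are all invented, not taken from the paper.

```python
# Hypothetical sketch of mask-gated transitions: a transition between two
# groups of states is active only when every one of its input links carries
# mask value 1, mirroring the Delta_i / W formulation in the text.

class Transition:
    def __init__(self, sources, targets):
        self.sources = sources   # states feeding the transition
        self.targets = targets   # states receiving information

    def is_active(self, mask):
        # mask maps (source, target) links to 0 or 1
        return all(mask.get((s, t), 0) == 1
                   for s in self.sources for t in self.targets)

    def fire(self, mask, flow):
        # propagate information only when the transition is active
        if self.is_active(mask):
            for t in self.targets:
                flow.setdefault(t, []).extend(self.sources)


# A Delta_1-style link (data -> process) with both input links enabled.
mask = {("d1", "p1"): 1, ("d2", "p1"): 1}
t = Transition(sources=["d1", "d2"], targets=["p1"])
flow = {}
t.fire(mask, flow)   # flow now records that d1 and d2 reached p1
```

Disabling any one input link (setting its mask value to 0, or omitting it) suffices to block the whole transition, which is the "active iff all inputs are 1" rule above.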
**Figure 1. A vision system based on information integration.**
In Figure 2 a DS is sketched; note that the arcs are labelled, and the information flow is enabled only if the labels \(w_i\) are "on".
**Figure 2. Automata for a vision system**
**Definition 2:** A dynamic icon \(\gamma \in DI\) is a correspondence between a set of metaphors, \(M\), and a set of icons, \(I\), where metaphors have a perceptive meaning. For example, they may represent visual patterns as well as acoustic signals, or a combination of both.
**Definition 3:** A VDE is a 6-tuple \(\langle DS, DI, M_m, M_P, M_\Delta, M_W \rangle\), where:
a) \( M_m : M \to P \) (metaphor → process)
b) \( M_P : P \times M \to DI \) ((process, metaphor) → icon)
c) \( M_\Delta : DI \times DI \to \Delta \) ((icon, icon) → transition function)
d) \( M_W : DI \times DI \to \{0,1\} \) (mask on the relational arcs)
\( M_m \) is a function that assigns a metaphor to a process; it is many-to-one because the corresponding process may assume different status conditions during the evolution of the DS. \( M_P \) is one-to-one in order to avoid ambiguities; it is responsible for the assignment of a \( \gamma \): a pair \((P_i, m_k)\) determines the current value of the icon \( \gamma_i^{(k)} \). \( M_\Delta \) defines the relations between the \( \gamma \)'s; this function has been introduced to handle the evolution of a DS at a visual level. The function \( M_W \) assigns a mask flag value to each relational arc; the mask flag is useful to visualize the information flow in the DS. Figure 3 shows the diagram of the functions introduced above.
The previous definition can be very useful to represent the evolution of a process in a DS. In these cases, a process, \( P_i \), will be defined as a sequence of virtual processes \((P_i^{(k)} \mid k=1,2,\ldots,N)\) corresponding to a sequence of dynamic icons \((\gamma_i^{(k)} \mid k=1,2,\ldots,N)\), where:
\[
\gamma_i^{(k)} = M_P(P_i^{(k)}, m^{(k)})
\]
for \( k=1,2,\ldots,N \)
**Figure 3. Functions describing the relations between the sets DI, P, and M.**
DSs can be represented by directed weighted graphs; on the other hand, their complexity (number of nodes and arcs) makes their design and programming difficult.
Visually, a VDE can also be represented by a directed graph, \( G \), whose nodes are the \( \gamma \)'s and whose labelled arcs are determined by the function \( M_\Delta \); the label values, \( \lambda \), are determined by the kind of \( \Delta \) function (\( \lambda = 1,2,3,4,5 \)), and the mask value \( m \) is fixed by \( M_W \). Moreover, a dynamic icon \( \gamma \) may recursively represent a subset of the VDE, named VDE\( \gamma \). Such a dynamic icon is called compound.
Two compound icons, \( \gamma_1 \) and \( \gamma_2 \), are linked by a compound arc \((M_\Delta(\gamma_1, \gamma_2))\) iff a subset of arcs with equal label exists between the corresponding sub-graphs VDE\( \gamma_1 \) and VDE\( \gamma_2 \). The mask flag of a compound arc is set on the basis of AND-OR rules applied to the mask flags of the corresponding arcs. Dynamic icons and arcs which are not compound are said to be primitive.
The introduction of compound and primitive dynamic icons makes it possible to organize a VDE in a hierarchical way (Figure 4), an approach that is useful for handling a DS at different levels of refinement. The hierarchical structure of the VDE allows the visual design of a DS to be developed and updated, and the attention to be focused easily where errors and bugs occur. Moreover, a VDE controls the evolution of a DS at different levels of detail (from compound to primitive).
The evolution and the programming of a DS can be exploited via the VDE by testing the syntactic correctness of visual graphs. Visual parsing is applied recursively to compound elements until primitive elements are reached. The parsing phase is driven by a graph-grammar [8].
A graph representing a VDE must satisfy the following properties, which are used during the parsing phase:
a) The graph, \( G \), of a VDE is connected;
b) \( G \) has labelled input/output arcs;
c) valid sub-graphs of \( G \) have input/output arcs labelled with the same label;
d) direct paths exist from input to output nodes of \( G \) and its valid sub-graphs.
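Two of these properties lend themselves to simple graph checks. The sketch below is a hypothetical Python rendering (function names and graph encoding are invented) that tests property (a), connectedness, treating arcs as undirected, and property (d), the existence of a directed input-to-output path, with plain breadth-first search:

```python
from collections import deque

def is_connected(nodes, arcs):
    """Property (a): the graph is connected when arcs are taken as undirected."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in arcs:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == set(nodes)

def has_io_path(arcs, inputs, outputs):
    """Property (d): a directed path exists from some input to some output."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, set()).add(v)
    seen, queue = set(inputs), deque(inputs)
    while queue:
        n = queue.popleft()
        if n in outputs:
            return True
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False
```

Properties (b) and (c) concern arc labels and would be checked the same way, by filtering arcs on their \(\lambda\) label before running the traversal.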
**Figure 4. Conceptual organization of a VDE.**
The automatic inspection of a VDE may detect syntactic errors during the visual definition of a DS; for example, the proper direction of an arc can be easily tested. Moreover, during visual editing some semantic inconsistencies can be discovered; for example, it is not allowed to input or output processes, data and models that do not match correct prototypes.
The whole syntactic correctness of a DS is then tested during the parsing phase of the VDE. The parsing consists of a match-merge procedure applied to the graph \( G \), which can be considered as a visual program. The match step tests the consistency of sub-graphs \( G' \) of \( G \); for this purpose standard graph-matching algorithms can be used, and their computational complexity is strongly reduced to \( O(L) \), where \( L \) is the number of arcs, because of the constraints imposed by the graph grammar. The merge step creates a super-node, the input (output) arcs of which are the input (output) arcs of \( G' \). The parsing is successful if a single super-node is obtained at the end of this phase; the cost of the merge step depends on the number of nodes in \( G \). The consistency rule depends on the graph grammar and on how it has been defined in the construction of the visual program.
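The merge half of the match-merge procedure can be sketched as follows. This is a hypothetical Python rendering in which the matching itself (the grammar-driven part) is abstracted into a precomputed plan of (sub-graph, super-node) merges; all names are invented for illustration:

```python
def merge(nodes, arcs, subgraph, super_node):
    """One merge step: collapse a consistent sub-graph into a super-node
    whose input/output arcs are the arcs crossing the sub-graph boundary."""
    sub = set(subgraph)
    new_nodes = (set(nodes) - sub) | {super_node}
    new_arcs = set()
    for u, v in arcs:
        u2 = super_node if u in sub else u
        v2 = super_node if v in sub else v
        if u2 != v2:              # arcs internal to the sub-graph disappear
            new_arcs.add((u2, v2))
    return new_nodes, new_arcs

def parse(nodes, arcs, plan):
    """Apply a sequence of (subgraph, super_node) merges; the visual program
    is accepted when a single super-node remains."""
    for subgraph, name in plan:
        nodes, arcs = merge(nodes, arcs, subgraph, name)
    return len(nodes) == 1
```

In the real system the plan is not given in advance: each sub-graph is first validated by the match step against the graph grammar before being collapsed.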
3. The M-VIF machine.
The first prototype of the VDE, above described, has been implemented on the M-VIF machine, as an example of DS oriented to vision problems. Its architecture is based on a Compound Node (CN), which is composed of 4 functional modules (see Figure 5):
- the C-module is the controller of the CN, it manages the evolution of the computation in M-VIF;
- the modules H's are dedicated to data processing;
- the modules IP provide for the input/output data management;
- the module LN (Link Network) is dedicated to the interconnection of CN's, in order to realise several reconfigurable network topologies.
The emulation of M-VIF has been carried out on the reconfigurable and heterogeneous architecture of the HERMIA machine. It includes 16 general-purpose PUs (INMOS T800) and a bank of 6 Digital Signal Processors (INMOS A110).
The system operates in pipeline mode. For our purposes we use only one compound node, where \( H_1 \) and \( H_2 \) have four processing units each and are dedicated to the segmentation and matching phases respectively; \( H_3 \) performs the integration and decision phase and requires only one processing unit.
The image I/O is handled by the controller, which loads the data into the shared memory; a Broadcasting/Multiplexer Data Unit directs the data and intermediate results to the appropriate processes in each \( H_i \).
The evolution of a distributed process is based on firing conditions, which must be verified at the input of each state \( x \). Two sets are introduced for each \( x \) to define firing rules:
\[
IN(x) = \{(s,x) \mid s \in S\}
\]
\[
OUT(x) = \{(x,s) \mid s \in S\}
\]
Moreover, each input set is partitioned into disjoint subsets:
\[
IN(x) = I_1(x) \cup I_2(x) \cup I_3(x) \cup I_4(x), \qquad I_i(x) \cap I_j(x) = \emptyset \ \text{for } i \neq j
\]
Each element of \( I_i(x) \) is uniquely determined by an integer index \( i_j \), for \( j = 1, 2, ..., k_i \). To each \( I_i(x) \) a logic function, \( f_i(x) \), is associated:
\[
f_i(x) = \bigwedge_{j \in I_i(x)} w(i_j, x) \;\wedge\; \bigwedge_{r \neq i} \neg\Big( \bigwedge_{j \in I_r(x)} w(r_j, x) \Big)
\]
In the following, the design of the VDS for the M-VIF machine is described. It is based on three sets of visual elements: the Metabase (metaphor database), Icons, and Arcs.
- **Metabase.** A database of metaphors containing visual, acoustic and text patterns. The dynamic evolution of a process is represented by three colours; the default values are:
- green for active
- orange for wait
**Figure 5. The architecture of the CN-node.**
Here \( w(i_j, x) \) is the mask value of the transition \( (i_j, x) \). From the previous definition it follows that \( f_i(x) \) is "1" if and only if each transition \( j \in I_i(x) \) is "1" and each \( f_r(x) \) with \( r \neq i \) is "0". The firing rule can now be stated as follows:
\[
\text{fire } x \text{ iff } \exists I_i(x) \text{ such that } f_i(x) = 1
\]
The introduction of the mask function \( f_i \) and of the partition induced by \( IN(x) \) and \( OUT(x) \) makes it possible to implement the \( \{ \Delta_i \} \) functions holding in the DS.
The computation performed by M-VIF depends on the values of the mask function, \( w_0 \), at the starting time \( t_0 \); i.e., for each \( x \), the values \( w(i_j, x) \) are initially set to "0" or "1". The updating of the mask values is determined by those elements of \( S \) that, firing at a given time, assign new mask values to the elements of \( OUT(x) \). Therefore the value \( w_{i+1} \) at time \( t_{i+1} \) depends on both the values \( w_i \) and the results at time \( t_i \). The computation ends as soon as the firing condition is false for all the elements of \( S \). Note that the evolution of the whole set of mask values is determined by the deterministic tasks running in each state \( x \).
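The firing rule can be sketched in a few lines. This is a hypothetical Python rendering (the function name and link encoding are invented): per the text, a state fires iff exactly one of its input partitions \(I_i(x)\) has all of its mask values \(w(i_j, x)\) equal to 1.

```python
def fires(partitions, mask):
    """Firing-rule sketch: `partitions` is the list of I_i(x), each a list of
    (source, x) links; `mask` maps links to 0/1.  State x fires iff exactly
    one partition has every one of its links enabled."""
    active = [all(mask.get(link, 0) == 1 for link in part)
              for part in partitions]
    return active.count(True) == 1
```

Requiring exactly one fully-enabled partition encodes the condition that \(f_i(x) = 1\) for some \(i\) while \(f_r(x) = 0\) for every \(r \neq i\).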
- **Icons.** Dynamic icons, representing processes, are defined by the pair (colour, pattern). All icons (both compound and primitive) have the same visual organization (see Figure 6). The body contains a visual pattern and an expansion button; the colour of this button also indicates the current status (active, wait, stopped) of the icon. Primitive icons have a yellow background, compound icons a blue one. Six input/output channels are foreseen to implement the IN and OUT lines of each node. The expansion of a compound icon is performed by pushing its button. The expansion of a primitive icon depends on the kind of information connected: for example, if an icon represents a process, its expansion returns the source code in the I-PICL language [9]; if it represents a kernel, its expansion returns the kernel values. The distribution of the resources (processors, sensors, data, knowledge) is handled by the user. This is obtained by creating a link between the appropriate dynamic icon and the corresponding resource.
A first prototype of the VDS is under development under Windows 3.1, using the object-oriented language C++.
5. An example of distributed computation.
In the following, the implementation under the VDS of an information-fusion technique to retrieve pictorial data is described. It is based on the integration of different data types and co-operating segmentation algorithms. Two distance functions (Euclidean, Hausdorff) are evaluated and used to reach the best matching.
In the first phase the input image is segmented by four segmentation algorithms (Hierarchical Single Link Clustering (HSLC), Hierarchical Histogram Partition Clustering (HHPC), Hierarchical ISODATA (HISO), and Two Phases Clustering (TPC)) [10,11], and the results are analysed by the corresponding matching modules (MM₁,...,MM₄). Each module MMᵢ takes as input the metrics (M₁, M₂), the segmented image, and the target image (P), and it provides an ordered list of the retrieved objects (candidates). The ordering is performed on an evaluation parameter (qᵢ), ranging in the interval [0,1], associated with each candidate.
The last module (Integrated Decision) performs the best retrieval on the basis of the results of the modules MMᵢ, which are combined using several decision functions; in our case we have used, alternatively, the mean value, the maximum value, and the vote technique.
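The three decision functions can be sketched as follows. This is a hypothetical Python rendering (the function name and data layout are invented): each module's output is modelled as a dictionary mapping candidates to their qᵢ scores in [0, 1].

```python
from statistics import mean

def combine(scores_per_module, rule="mean"):
    """Integrated-Decision sketch: rank candidates across modules by the
    chosen decision function and return the best one."""
    candidates = set().union(*scores_per_module)
    if rule == "mean":
        score = lambda c: mean(m.get(c, 0.0) for m in scores_per_module)
    elif rule == "max":
        score = lambda c: max(m.get(c, 0.0) for m in scores_per_module)
    elif rule == "vote":
        # each module votes for its own top-ranked candidate
        votes = [max(m, key=m.get) for m in scores_per_module if m]
        score = lambda c: votes.count(c)
    else:
        raise ValueError(rule)
    return max(candidates, key=score)
```

A missing candidate is treated here as having score 0 for that module; the paper does not specify this detail, so it is an assumption of the sketch.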
Figure 7 shows the visual organization of the proposed retrieval system. Figure 7a shows the tools used to draw the iconic algorithm, and to distribute the resources. It also shows the upper level of the retrieval algorithm, sketched above. Three compound icons represent the main phases of the retrieval procedure (modules: Segm, MM's, and ID).
Figure 7b shows the expansion of the Segm icon. Figure 7c shows the expansion of the MM icon, the primitive icon Prot contains the prototype database, the compound icon Mod represent the distance functions, and criteria to be used in the matching phase. Figure 7d shows the expansion of the primitive icon HISO.
**Figure 6. Dynamic Icons: a) Process; b) Model; c) Compound; d) Input data; e) Output data.**
An example of resource distribution management is given in Figure 8. In this example, the user selects the compound icon Segm and assigns the processes associated with this icon to one of the computing sites (the HERMIA machine).
The tools window also includes the following buttons:
- the magnifying glass starts the debugging of an iconic program;
- the clock starts the execution of an iconic program;
- the hand allows navigation within the working window;
- the scissors and the magic wand are used to perform icon cut and paste, respectively;
- the button "T" allows comments to be included in the icon body.
This paper has presented a general definition of the Visual Dynamic Environment for developing distributed applications on a DS. A preliminary version of the VDS for the M-VIF machine has also been described. Preliminary tests show that the VDS may improve the quality of the implementation of distributed algorithms; substantial gains in software productivity have also been observed.
At the present status of its implementation, the VDS allows algorithms to be developed on an emulated version of the machine. Further developments foresee the use of the VDE in a DS that includes different platforms (UNIX workstations, the HERMIA machine) and different sensors (CCD camera, infrared and acoustic sensors) distributed over a local Ethernet network.
REFERENCES
Figure 7. An example of a distributed algorithm: (a) main algorithm level; (b) expansion of the Segm icon; (c) expansion of the MM icon; (d) the I-PICL HISO program.
Figure 8. An example of resource distribution management.
Version: Accepted Manuscript
http://dx.doi.org/doi:10.11120/ital.2007.06040125
Sense before syntax: a path to a deeper understanding of objects
Rob Griffiths, Simon Holland, Marion Edwards
Computing Department, The Open University, Walton Hall, Milton Keynes, England MK7 6AA
r.w.griffiths@open.ac.uk, s.holland@open.ac.uk, medwards@nildram.co.uk
Abstract
This paper describes how we have successfully adapted a principled pedagogy of objects first and progressive disclosure, originally developed for teaching objects concepts through the vehicle of a pure object language, to the teaching of object concepts using Java. We employ a cognitive science viewpoint to distinguish between, and sequence accordingly, two different aspects of learning Java. We focus initially on fundamental aspects of the object model of computation, which are simple, consistent, meaningful, and hence relatively stable in memory. Aspects of the Java syntax and semantics which are contingent or arbitrary, and hence unstable in long-term memory, are deferred until after students have acquired a secure conceptual model. We use three principal techniques to assist students in acquiring programming experience of fundamental concepts relatively un-distracted by contingent detail. These measures are: interactive microworlds that allow accurate visualisation of central object concepts; a Java scripting environment that minimises the amount of syntax required, but which allows students to interact with and inspect 'live' objects in the microworlds; and an explicitly object-oriented (if verbose) programming style that reinforces object-oriented concepts. Dealing with Java-specific design peculiarities is thus deferred until students have a stable conceptual model on which to scaffold a deeper understanding of objects.
Keywords
Java, microworld, objects first, progressive disclosure, OUWorkspace, BlueJ, scripting environment, object-oriented, Smalltalk, cognitive science.
1. Introduction
After thirty or more years' experience, it may sometimes appear that there is very little genuinely new to be said about teaching object concepts to undergraduates. We argue that, on the contrary, there is plenty of room for new teaching insights to arise, for example through the application of new findings from areas such as cognitive science.
Equally, it is sometimes assumed that, for purposes of teaching object concepts, differences between object-oriented languages are minimal. Again, we will argue that this is not the case. Pure object languages such as Smalltalk that use a single consistent conceptual metaphor for computation can be understood using much simpler cognitive structures than hybrid languages such as Java, which mix several conceptual metaphors inconsistently (Mortensen, 2001). Such simplicity makes simpler teaching strategies possible and makes it relatively straightforward to focus on fundamental concepts. The purpose of this paper is to investigate the degree to which some, if not all, of the pedagogical benefits afforded by pure object languages can be retained when teaching object concepts using hybrid languages, given an appropriately designed teaching strategy.
2. Background
The institutional backdrop to this work is the replacement of The Open University's highly successful course, M206: Computing: An Object-oriented Approach (Woodman et al, 1998; Holland et al, 1997). This 60-point Smalltalk-based course won a prestigious BCS IT Award, was recognised for its innovation by attaining Design Council Millennium Product status, and attracted some 35,000 students in its presentation lifetime. With the introduction by the Open University of named degrees, and with this previous course coming to the end of its presentation lifetime (2005), it was decided to replace it with two 30-point courses, one to teach object-oriented analysis and design (designated M256) and one to teach fundamental object-oriented programming principles (designated M255).

This latter course, M255, is the subject of this paper. Initially it was planned that M255 should continue the strategy of teaching object-oriented concepts using Smalltalk, but early in its design a Departmental-level decision, based principally on marketing factors, was taken to switch the main computing language for undergraduate teaching from Smalltalk to Java, in order to better address issues such as name recognition by students and student perceptions of employability. This posed the development team with a substantial problem: how to retain as many benefits as possible of a carefully designed and proven teaching strategy based on simplicity, consistency, and a clear conceptual model of computation, when switching to a hybrid language such as Java, which implements objects in a partial and irregular way (Bates, 2004).
As the purpose of the course is not to teach the minutiae of any particular language but rather to teach fundamental object-oriented programming concepts and skills transferable to any object-oriented language, we looked for ways to focus on fundamental aspects of the object model of computation, which are simple, consistent and meaningful, while deferring an emphasis on syntactic detail until students had a stable conceptual model against which the detail could be related. We found three principal measures to facilitate this in Java:

- Open-ended interactive microworlds that allow accurate visualisation of object references, message sending, state change and specialisation.
- A scripting environment for Java that minimises the amount of syntax that students initially need, but which allows them to create, interact with and inspect the state of 'live' objects that are automatically displayed in a graphical window.
- An explicitly object-oriented (if verbose) programming style that reinforces object-oriented concepts.

We will now deal with these measures in turn.
3. Microworlds
To provide a way of visualising, interacting with, and reasoning about concrete examples of
object concepts, we designed a series of graphical microworlds concerning frogs and other
amphibians. These microworlds allow the visible actions and state of amphibians to be
controlled in two parallel ways – on the one hand via buttons and menus, and in parallel by
sending messages to the amphibians using Java statements via a code pane. This duality
reflects in a concrete form the heart of the object model of computation, which may be viewed
as being based on a metaphor between objects and computers, and a recursion on this
metaphor, viewing computation as built from networks of simpler computations collaborating
together (Kay, 1993).
In particular, the amphibian microworlds model the behaviour of instances of the classes
Frog and Toad and of a subclass of Frog, HoverFrog. As the name ‘hoverfrog’ implies, the
classes are deliberately designed to be cartoon-like rather than realistic, and to be both
visually and conceptually memorable. So for example, in the cartoon-like amphibian
microworld, hoverfrogs may be positioned by students at arbitrary heights on the y axis,
whereas simpler amphibians such as frogs, may be asked to hop only from stone to stone
along the x-axis. This playful approach to abstracting state and behaviour is intended to help
demystify the processes of abstraction and modelling. The simplicity and memorability is
intended to give students a reference set of easy-to-memorise and eventually fully analysed
examples to use as a portable personal resource throughout the course, able to illustrate the
full range of object concepts.
Students interact with these microworlds at the very beginning of the course before they have seen any Java code. As already outlined, the microworlds are concrete cartoon-like worlds consisting of frogs and various other amphibians (two variations of the microworlds are shown in figures 1 and 2). For the purposes of the microworlds, frogs can be made to move their position and change their colour.
[Figure 1: an amphibian microworld]
[Figure 2: an amphibian microworld with its Code Pane open]
Via buttons, students can look at the state of frogs, send messages to them, see how they behave in response, see how this affects their state and look at how a message to one frog may in some cases cause a frog to send a message to another frog (sameColourAs() button in figure 2). As the students progress through the microworlds, more of the protocol of the amphibian objects, and the mechanisms used in their interactions are progressively exposed.
These microworlds are also the vehicle by which students learn the syntax for writing message-sends (method invocations). Each microworld has a Code Pane in which they can write and execute Java statements (as shown in figure 2). By opening up a microworld’s code pane they can write statements that can do everything that pressing buttons can do, such as
`frog1.right();` and `frog3.sameColourAs(hoverFrog2);`
As already touched on, these microworlds have been devised to reveal fundamental object concepts including object reference, state change, polymorphism, specialisation and abstraction. The microworlds that students encounter are already populated with existing amphibians, but later in the course, students create new amphibians of various kinds which can be displayed in a graphical window.
Moving on to the interactive visualisation of object state and behaviour, figure 1 above shows a microworld which contains objects of the classes Frog and Toad. These two classes have identical attributes – position and colour – and identical message protocols, such as
green(), brown(), home(), right() and left(), which respectively set the receiving object’s colour to green or brown, change its position to the “home” position, and move it left or right. Students select references to any of the objects in the microworld from a regular scrolling list and use buttons to send the corresponding messages. This simple user interface not only allows straightforward message sending to be visualised, it allows more abstract notions such as polymorphism to be demonstrated; for example, when a frog is selected and the home() button is clicked (resulting in the message home() being sent) the receiving frog moves to the leftmost position, but if a toad has been selected, and so receives the message home(), it moves to the rightmost position – the “home” position for toads. This microworld also allows us to introduce the notion of class; frogs and toads do not behave identically in response to the same protocol, leading students to notions of different classes and different interfaces.
[Figure 1]
Figure 3 shows a microworld with a Frog object and a HoverFrog object, where students discover that instances of HoverFrog understand all the messages sent by all the buttons in the microworld. The same is not true of frogs and toads. When the message up() or down() is sent to the frog object, the Display Pane opens and a message informing the user that an error has occurred is displayed. Further inspection of Frog and HoverFrog objects (figures 4 & 5) reveals that HoverFrog objects have an additional instance variable – height.
[Figure 3]
Through this exploration students are guided to discover that a hoverfrog has everything a frog object has, plus an extra attribute and an extended protocol – conceptually setting the scene to explore the fact that the HoverFrog class is a subclass of the Frog class. Once inheritance has been explicitly taught, students redesign these classes (Frog, HoverFrog and Toad) as concrete subclasses of an abstract class Amphibian.
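The hierarchy that students arrive at can be sketched as follows. This is a minimal, hypothetical reconstruction: the real M255 classes are not reproduced here, and details such as modelling positions as integers 1..11 are assumptions of this sketch.

```java
// Hypothetical sketch of the redesigned hierarchy: an abstract Amphibian
// class with concrete subclasses Frog and Toad, and HoverFrog extending Frog.
abstract class Amphibian {
    protected int position = 6;          // attribute shared by all amphibians
    public void left()  { this.position = this.position - 1; }
    public void right() { this.position = this.position + 1; }
    public abstract void home();         // polymorphic: each class chooses its home
    public int getPosition() { return this.position; }
}

class Frog extends Amphibian {
    @Override
    public void home() { this.position = 1; }    // leftmost stone
}

class Toad extends Amphibian {
    @Override
    public void home() { this.position = 11; }   // rightmost stone
}

class HoverFrog extends Frog {
    private int height = 0;                      // the extra attribute
    public void up()   { this.height = this.height + 1; }   // extended protocol
    public void down() { if (this.height > 0) { this.height = this.height - 1; } }
    public int getHeight() { return this.height; }
}

public class AmphibianDemo {
    public static void main(String[] args) {
        Amphibian amphibian = new Frog();
        amphibian.home();                            // a frog goes home...
        System.out.println(amphibian.getPosition()); // ...to position 1

        amphibian = new Toad();                      // same message, new receiver
        amphibian.home();                            // method chosen at run time
        System.out.println(amphibian.getPosition()); // prints 11

        HoverFrog hoverFrog = new HoverFrog();
        hoverFrog.home();                            // inherited from Frog
        hoverFrog.up();                              // understood only by hoverfrogs
        System.out.println(hoverFrog.getHeight());   // prints 1
    }
}
```

The same message, home(), sent through an Amphibian reference selects a different method depending on the class of the receiver, which is the polymorphism the microworld buttons make visible.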
In figure 5 the inspector shows the state of a HoverFrog object. The inspector for an object always has three columns that list: the object's attributes, the types of those attributes and the values of those attributes. The inspectors are diving inspectors: double-clicking on the colour row will reveal the state of the OUColour object, as shown in Figure 6.
This, although not made explicit until later, reveals that fundamentally all objects (in Java at least) are composed of primitive types.
In the very initial stages, through exploration of these microworlds students quickly learn the following key ideas before getting to grips with the Java language:
- **Messages** – the only way to get an object to do anything is to send it a message
- **References** – to send a message to some object you need a way to refer to it.
- **Attributes** – by observing the results of sending messages to amphibian objects students discover that frogs and toads have the attributes of colour and position and hoverfrogs have the additional attribute height.
- **Class** – objects of the same class have the same attributes and the same behaviour
- **Inheritance** – objects that can do everything that another object can do – and then some more – are likely to be instances of some subclass.
After learning the basic ideas about objects through exploring the microworlds, students move on to using a Java IDE. The IDE chosen was BlueJ. We chose this IDE because it has an extremely simple user interface, was specifically developed for teaching Java, and is platform independent. Excellent though BlueJ is, we required a more flexible and expressive parallelism between interactively interpreted Java and graphical windows than was available in BlueJ. For this reason, we developed an extension to the environment called the OUWorkspace, where, very shortly, the same key ideas bulleted above are explored in detail using sequences of messages executed in the OUWorkspace. We describe this in the next section.
4. The OUWorkspace
In a traditional Java course the very first thing that a student does is to write (or more probably copy) a completely static class as shown below:
```java
public class HelloWorld
{
public static void main(String[] args)
{
System.out.println("Hello World!");
}
}
```
Straight away students are faced with understanding (or perhaps not) the structure of a class file, the delimiters '{' and '}', the purpose and structure of the `main()` method, how to declare an array of strings, and the reserved words public, static and void – which at this point in their study is information overload. They then have to compile the class and finally execute the program (probably from the command line). More importantly, the code has very little to do with objects. The only object created by the program is the literal string "Hello World!" and the only message in the code is println() sent to out. The BlueJ IDE (Kölling et al., 2003) does much better than this; however, from our experience of developing an integrated Smalltalk learning environment (Woodman et al., 1999) we wished to develop a simpler solution, better suited to distance learning where students have limited contact with tutors, and better suited to the teaching strategy outlined above. This involved developing the OUWorkspace.
The OUWorkspace is a scripting environment for Java built as an extension to BlueJ. It is opened from within BlueJ by selecting Tools | OUWorkspace. When opened, it is configured to work with the currently open BlueJ project, allowing the creation and manipulation of instances of the classes defined in that project. In addition, the OUWorkspace has access to many of the standard Java classes and to the classes in the course-supplied OU Class Library. If no BlueJ project is open, the OUWorkspace has access only to the standard Java classes and the classes in the OU Class Library. The fact that all these classes are in scope to the OUWorkspace means that we can defer another bit of syntax: the import statement.
The OUWorkspace (see figure 7) contains three panes, labelled 'Code Pane', 'Display Pane' and 'Variables'. The Code Pane is used to declare variables and to enter and execute Java statements. To execute statements the user must first highlight them and then select the Action | Execute Selected menu option or the Execute Selected option on the Code Pane’s popup menu. The Display Pane is where any textual output relating to those executed statements, including error messages, is displayed. The list pane labelled Variables holds a list of the currently declared variables, in this case `hoppyHeight` and `hoppy`.
If an error is detected when the selected code is executed an error message will be shown in the Display Pane. An error message is identified as a syntax error, a semantic error or an exception. If more than one line of code has been executed the error message includes the line number of the code containing the error. This line number is relative to the highlighted code rather than all the code currently in the Code Pane.
With the Show Results check box checked, if the last expression in a statement returns a value (either an object or a primitive), the textual representation of that value will be displayed in the Display Pane (as shown in figure 7). If the Show Results check box is not checked, only the results of System.out.println() statements are shown in the Display Pane (figure 8).
If the currently opened BlueJ project includes classes whose instances can be displayed graphically (at present we support amphibians and shape classes), then a Graphical Display menu appears in the OUWorkspace’s menu bar from which a graphical window can be opened. Figure 9 shows BlueJ with an open project that contains all the classes in the Amphibian hierarchy, the OUWorkspace and a graphical window capable of displaying amphibians.
Any Amphibian object created in the OUWorkspace and assigned to a variable will immediately appear in the graphical window, as the domain of that window is the pool of variables declared in the OUWorkspace and any variables that reference objects of the correct type will have their graphical representation displayed. Any message-sends to amphibian objects in the OUWorkspace will therefore be visually demonstrated. Students subclass the existing classes in the Amphibian hierarchy and any objects of these subclasses that they create and assign to a variable in the OUWorkspace will automatically be visible in the graphical window too, exhibiting whatever behaviour students choose to give them.
Objects created in the OUWorkspace may be given multiple references, to allow concrete and visible experimentation with reference semantics, and to emphasise the fact that references form a many-to-one relationship with objects. To ensure that object destruction is interactively visualised, if the sole variable holding an object is assigned null in the OUWorkspace, the object will be visibly garbage collected and the graphical representation of the object will disappear from the graphical window. Further concepts, such as refactoring, interfaces (which are taught very early), broadcast dependency and simple coding patterns are explored in similar ways.
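The reference semantics described above can be sketched in plain Java. The Frog class here is invented for illustration, and the OUWorkspace's graphical window is not modelled.

```java
// Sketch of reference semantics: two references to one object, then
// dropping all references so the object becomes garbage.
class Frog {
    private String colour = "green";
    public void brown() { this.colour = "brown"; }
    public String getColour() { return this.colour; }
}

public class ReferenceDemo {
    public static void main(String[] args) {
        Frog frog1 = new Frog();
        Frog frog2 = frog1;          // two references to the one object

        frog2.brown();               // a message sent via either reference...
        System.out.println(frog1.getColour()); // ...changes the same object: brown

        frog1 = null;
        frog2 = null;                // no references remain, so the object is now
                                     // eligible for garbage collection; in the
                                     // OUWorkspace its graphical representation
                                     // would disappear from the window
    }
}
```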
5. Coding style
Of course, students move on from writing snippets of code in the OUWorkspace to modifying methods of existing classes, before going on to develop classes of their own. In writing code we enforce a verbose coding style that reinforces object ideas. We insist that within methods an object's own instance variables are always qualified by this, and that class variables are always qualified by the class name – we do this because we want to make clear the distinction between object and class, and also to avoid any confusion with similarly named class or local variables. Similarly, messages within a method to the object executing that method are always qualified by this (or of course super); to miss out the qualifier is to make the message-send look like a procedure call, and we wish to reinforce that most of the processing in an OO program involves sending messages to objects.

Note that in the context of objects we always talk in terms of sending messages to objects, not invoking methods. Messages are polymorphic, methods are not: the decision on which method to invoke is made not at compile time but at run time by the JVM, depending on the class of the object. However, with static (class) methods we do talk in terms of method invocation, as method resolution can be determined at compile time. Instance variables are invariably made private to enforce data hiding, and where necessary public accessor methods are written.
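As an illustration, here is an invented class written in the explicit style just described; the class and its members are our own example, not taken from the course materials.

```java
// An invented class in the explicit M255 style: instance variables qualified
// by this, class variables by the class name, self-sends qualified by this,
// and private state exposed only via public accessor methods.
public class Counter {
    private static int instanceCount = 0;    // class variable
    private int value;                       // instance variable, kept private

    public Counter() {
        this.value = 0;                                    // instance variable via this
        Counter.instanceCount = Counter.instanceCount + 1; // class variable via class name
    }

    public void increment() {
        // a message to the object executing this method, qualified by this
        this.setValue(this.getValue() + 1);
    }

    public int getValue() {
        return this.value;                   // public accessor for private state
    }

    public void setValue(int value) {
        this.value = value;                  // this distinguishes field from parameter
    }

    public static int getInstanceCount() {
        return Counter.instanceCount;
    }
}
```

Writing `this.setValue(...)` rather than bare `setValue(...)` costs a few characters, but keeps the message-send visibly a send to an object rather than a procedure call.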
6. Evidence of the effects of the approach
The primary aim of this paper has been to describe and analyse a teaching approach and its systematic basis in a set of principles. It is not primarily about an empirical examination of the effects. However, there are some sources of evidence available that have some general bearing on the effects of the teaching approach on students and teachers, which we will now consider.
The first source of evidence comes from the routine student surveys that the Open University carries out for all courses. These surveys present the opportunity to compare students’ general opinions of M255 with a pre-existing course that took a far more conventional approach to teaching Java. More specifically, prior to M255 the only 2nd level course to teach Java was the 20 point course M254, which had four presentations between 2004 and 2006. This course was traditional in its approach, for example starting off with `main()` to print a string to the standard output, and teaching loops and iteration before addressing objects. The students on both M255 and M254 were surveyed in the autumn of 2006 by the University's Institute of Educational Technology as part of a survey of all our faculty's courses. In the survey students were asked to rate their extent of agreement with a number of statements (table 1). The results are indirectly relevant to our claims in that they afforded an opportunity to refute or weaken the claim that our approach is beneficial to students.
<table>
<thead>
<tr>
<th></th>
<th>M254</th>
<th>M255</th>
</tr>
</thead>
<tbody>
<tr>
<td>The course was more difficult than I expected.</td>
<td>Definitely or mostly agree</td>
<td>59.1%</td>
</tr>
<tr>
<td>The course met my expectations.</td>
<td>Definitely or mostly agree</td>
<td>79.4%</td>
</tr>
<tr>
<td>Overall I was satisfied with the teaching materials provided on this course. (For example printed text; CD ROMs; DVDs; online materials.)</td>
<td>Definitely or mostly agree</td>
<td>80.4%</td>
</tr>
<tr>
<td>I enjoyed studying this course.</td>
<td>Definitely or mostly agree</td>
<td>82.2%</td>
</tr>
<tr>
<td>I would recommend this course to other students.</td>
<td>Definitely or mostly agree</td>
<td>72.9%</td>
</tr>
<tr>
<td>The course met its stated learning outcomes.</td>
<td>Definitely or mostly agree</td>
<td>84.1%</td>
</tr>
<tr>
<td>The course provided good value for money.</td>
<td>Definitely or mostly agree</td>
<td>67.0%</td>
</tr>
<tr>
<td>Overall I am satisfied with my study experience.</td>
<td>Definitely or mostly agree</td>
<td>79.4%</td>
</tr>
<tr>
<td>Overall I am satisfied with the quality of this course.</td>
<td>Definitely or mostly agree</td>
<td>79.4%</td>
</tr>
</tbody>
</table>
Table 1
The simplest relevant observation from this data is that for all nine questions, students expressed more positive opinions about M255 in comparison to the more conventional M254.
Another source of feedback comes from the Open University's Course Reviews web site where students are encouraged to comment on any course they have studied (http://www3.open.ac.uk/coursereviews/). Two students commented specifically on the object-oriented nature of the course, as follows.
“A very enjoyable course. I have done some programming before, but had never really got my head around Object-Oriented Programming - until this course. The course content kept
me interested and explained everything ever so clearly. I'm now really looking forward to, and am confident about, studying the higher level courses in this area."
“This was a truly excellent course that really does get you started in OO programming and Java. It does exactly what it says on the tin and actually helped me a great deal in moving to a new job where I am programming in C++ (a similar language to Java.) The course materials were great and the software equally good (apart from a few bugs in the OUWorkspace which will hopefully be ironed out in future presentations.) 10/10 "
A third indirect source of evidence about the effects of M255's approach is the figures for success on the course compared with its more conventional predecessor (tables 2 & 3).
<table>
<thead>
<tr>
<th>M255 (Oct '06)</th>
<th>HEFC Return</th>
<th>Percentage of students included in HEFC returns who sat the exam</th>
</tr>
</thead>
<tbody>
<tr>
<td>Total</td>
<td>1409</td>
<td>63</td>
</tr>
<tr>
<td>New students</td>
<td>131</td>
<td>63</td>
</tr>
<tr>
<td>Continuing students</td>
<td>1278</td>
<td>63</td>
</tr>
</tbody>
</table>
Table 2
<table>
<thead>
<tr>
<th>M254 (Oct '06)</th>
<th>HEFC Return</th>
<th>Percentage of students included in HEFC returns who sat the exam</th>
</tr>
</thead>
<tbody>
<tr>
<td>Total</td>
<td>309</td>
<td>60</td>
</tr>
<tr>
<td>New students</td>
<td>16</td>
<td>56</td>
</tr>
<tr>
<td>Continuing students</td>
<td>293</td>
<td>60</td>
</tr>
</tbody>
</table>
Table 3
Perhaps the most interesting observation here is that the retention of new students was significantly increased, while more generally, retention was up slightly. Evidence of this kind bears only obliquely on our assertions; however, again it did at least afford an opportunity for our claims to be rebutted.
The fourth source of evidence we shall consider comes from a small opportunistic poll of tutors who had taught on both courses (table 4). The sample is opportunistic in that all nineteen tutors were polled, but only some were able to respond in the limited time available. The sample (six tutors) is too small to support statistical claims, although extremely similar results were obtained from a slightly larger sample of eight tutors on the course, formed by including responses from two of the authors of this paper. However, we will limit our comments here to the responses of the group uninvolved in this paper.
Tutors at the Open University are frequently asked their opinions about courses they teach, and the institutional culture is such that critical opinions are freely and routinely given. The questionnaire covered three of the most salient features of the course, and considered ten aspects of each of these features. Tutors were asked to respond on a five-point Likert scale as follows: Definitely agree = 2, Mostly agree = 1, Neither agree nor disagree = 0, Mostly disagree = -1 and Definitely disagree = -2. The results of the questionnaire are shown below (table 4). Entries in the table indicate the proportion of the six tutors mostly or definitely agreeing with the statements as applied to the different features of the course. Some key observations are that the sample was unanimous that all three selected features of the course benefited students. Interestingly, there was less unanimity about benefits to tutors. However, it is worth noting (not shown in the table) that none of the sample of tutors mostly disagreed or definitely disagreed with any of the statements about any of the features. In other words, the least positive opinions expressed in response to any question were neutral – there were no negative responses to any question. However, when the sample was expanded
to eight course teachers (not shown in the table) by including the authors, some negative opinions were recorded. This was due to the fact that one author considered one aspect of the object oriented programming style (a stress on accessing instance variables via accessor methods, rather than directly) to add one more element of verbosity to an already relatively verbose programming language. However in all other respects, results from the slightly larger group were very similar.
<table>
<thead>
<tr>
<th>Interaction with objects via memorable microworlds in M255</th>
<th>The use of the OUWorkspace in M255 to interact with live objects</th>
<th>The explicitly object-oriented programming style of M255</th>
</tr>
</thead>
<tbody>
<tr>
<td>Benefits students</td>
<td>6/6</td>
<td>6/6</td>
</tr>
<tr>
<td>Benefits tutors</td>
<td>3/6</td>
<td>4/6</td>
</tr>
<tr>
<td>Helps students to visualise object concepts</td>
<td>6/6</td>
<td>6/6</td>
</tr>
<tr>
<td>Helps students to grasp object concepts quickly</td>
<td>6/6</td>
<td>6/6</td>
</tr>
<tr>
<td>Helps students to focus on object fundamentals rather than syntactic detail</td>
<td>6/6</td>
<td>5/6</td>
</tr>
<tr>
<td>Helps students to form a clear conceptual model</td>
<td>6/6</td>
<td>5/6</td>
</tr>
<tr>
<td>Helps students to remember object fundamentals</td>
<td>6/6</td>
<td>5/6</td>
</tr>
<tr>
<td>Helps students to explore the syntax and semantics of Java</td>
<td>4/6</td>
<td>4/6</td>
</tr>
<tr>
<td>Makes the course more interesting</td>
<td>6/6</td>
<td>5/6</td>
</tr>
<tr>
<td>Makes the course more fun</td>
<td>6/6</td>
<td>3/6</td>
</tr>
</tbody>
</table>
Table 4
Tutors were also given the opportunity to contribute free form comments on any issues raised by the questionnaire. Principal issues were raised as follows.
Several tutors commented on a specific technical limitation of the OUWorkspace: it is currently unable to deal with the generic collections introduced in Java 1.5, and so cannot be used directly to manipulate such collections. Some tutors commented on the usefulness of the OUWorkspace to tutors as well as to students.
"The OUWorkspace is wonderful – I found it very useful when writing my own code and in preparing examples (although there are things that you can't do with it)."
"The workspace used an old version of the JDK so not all Java syntax could be explored interactively which was frustrating for student/tutor. Otherwise, it was an excellent course, taking and adapting the first half of M206."
One tutor commented on neglected opportunities.
"I do think it would have been useful to use the BlueJ facility that lets you create an object and send messages to it by clicking on its representation in the BlueJ desktop. I think it better connects classes and objects than the OU workspace. I also think it would help to teach them to use the interactive debugger."
Some tutors noted the benefits to students and tutors of interweaving early coding with memorable microworlds, and the extent to which this encouraged confidence.
"It allows me as tutor at tutorials to talk in more concrete terms."
"My only other comparison with another OU course in Java is M257 [a follow on Java course], but for the initial hands on approach, the model borrowed from M206 seems to allow progress at an early stage to confidence that is crucial to good success. This appears to be true for both experienced students and those just starting."
This view of the microworlds was not universal.
"My only concern is with the microworlds which some students (and tutors) find irritating. I don't have this view - I think they are very helpful."
Some comments concerned the explicit object-oriented style of coding in M255.
"The OO style isn't just preferable, it's essential! – although students who then go on to M257 [a follow on Java course] seem to get upset that they aren't required to stick to the same rules there. Again, I don't have a problem with this – we don't live in an ideal world, and the sooner they get used to having to do things differently on different occasions, the better."
Some tutors commented on the course’s foregrounding of object-oriented concepts over syntactic detail.
"I think the course does a good job in abstracting the essential concepts of OOP before they get bogged down in the complex syntax and semantics. The rapid progress of M257 students is a good sign that we are getting it right."
Each category of evidence that we have considered is only weakly indicative in terms of strict relevance to our claims. Still, each category did at least offer an opportunity to rebut our claims, and in each case, to the limited extent that the evidence is able to afford relevant support, relatively clear support was given.
7. Conclusion
In terms of the goals that we set ourselves for the course, namely to teach object concepts through the vehicle of Java while approaching as closely as possible the clarity with which we were able to teach them using a pure object language, we believe we have had a reasonable degree of success, but it is open to more rigorous empirical evaluation to determine exactly to what degree, and in what respects, we have been successful.
The need to deal with the large number of irregularities, inconsistencies and special cases in Java curtailed the breadth of detail we were able to cover compared with the previous course using a pure object language (M206). For example, in M206 students with no previous experience of programming gained a firm grasp not only of constructing and modifying MVC user interfaces, but also of extensive detail of the separable interface architecture and the mechanisms used, as well as of quite complex forms of object-oriented iteration (Griffiths et al., 1999).
We believe that teaching fundamental object concepts lucidly, over and above the teaching of skills in particular programming languages, is not an optional goal – it is vital. We recommend consideration of the strategies outlined in this paper to teach object concepts effectively, whatever language is used as a vehicle.
References
Provenance Support for Rework
Xiang Zhao
*University of Massachusetts Amherst*
Leon J. Osterweil
*University of Massachusetts Amherst*
Barbara Staudt Lerner
*Mount Holyoke College*
Emery R. Boose
*Harvard University*
Aaron M. Ellison
*Harvard University*
Abstract
Rework occurs commonly in software development. This paper describes a simple rework example, namely the code refactoring process. We show that contextual information is central to supporting such rework, and we present an artifact provenance support approach that can help developers keep track of previous decisions to improve their effectiveness in rework.
1 Introduction
Rework [5, 6] is a pervasive activity in creative processes such as software development and scientific data analysis. Our notion of rework is that it is the repeating of activities in new contexts when new information indicates that revising the work is desirable. Such situations arise quite often in software development. For example, a design that responds to a requirement specification may suggest that the requirement specification was inconsistent or incomplete, leading to revision of the requirement specification. This reconsideration, elucidated by new understandings derived from design considerations, is a simple example of rework. Further, modifying the requirement specification may then trigger further rework to deal with the effects of the modifications on design and perhaps code as well, possibly involving multiple rounds of rework. Indeed it is widely believed that developers typically spend much of their time doing rework. It is important to note that rework is inevitable, since, as work progresses, the problems being addressed become better understood and actions taken with earlier, less complete knowledge often need to be reviewed and revised. Since rework is inevitable it is important to find ways to make it more efficient and effective.
This paper uses articulate descriptions of artifact provenance to create context information that can improve the effectiveness of rework. Section 2 presents an example based on refactoring of an Object-Oriented (OO) program and discusses the role of software artifact provenance in creating context information that supports rework. Section 3 describes how we capture and use provenance through a structure that we call a Data Derivation Graph (DDG). Section 4 describes some related work. Appendix A presents a second rework example based on scientific processes.
2 Modeling Rework in Code Refactoring
Refactoring is an important activity that is carried out frequently in the course of OO software development. Refactoring an OO software product changes the product’s internal structure without changing its external behaviors. Its goal is to improve such program characteristics as efficiency, readability, or evolvability. While there are many different kinds of refactoring (e.g. see [7]), in this paper we use the refactoring technique called *separating query from modifier*, that improves a badly designed method that is supposed to be used to query an object but has undesired side effects on the object state. The technique splits the method into one query and one modifier to eliminate the side effects, providing a query that is safer. We will demonstrate how this refactoring process incorporates multiple instances of rework, and indicate how using appropriate provenance data can support the creation of context information that helps users to be more effective with this kind of rework.
The *separating query from modifier* form of rework is described in [7] as follows: To begin, the (human) refactorer creates a query method that returns the same value as the original method. Next, the refactorer modifies the original method to return the result of a call to the query. Then, for each reference to the original method, the refactorer replaces that reference with a call to the query preceding a call to the modified method. Finally, the original method is assigned a void return type.
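The mechanics above can be sketched in a few lines. The `Account` class below is a hypothetical example, not taken from the paper: `check_balance` is a query with an undesired side effect, and the refactoring splits it into a pure query and an explicit modifier.

```python
# Before: a query method with an undesired side effect on object state.
class Account:
    def __init__(self, balance):
        self.balance = balance
        self.queries = 0

    def check_balance(self):      # query that also mutates state
        self.queries += 1         # undesired side effect
        return self.balance

# After: a side-effect-free query plus an explicit modifier.
class RefactoredAccount:
    def __init__(self, balance):
        self.balance = balance
        self.queries = 0

    def get_balance(self):        # pure query: safe to call anywhere
        return self.balance

    def record_query(self):       # modifier: mutates state, returns nothing
        self.queries += 1

# Each former call site now pairs a call to the modifier with a call to the query.
acct = RefactoredAccount(100)
acct.record_query()
balance = acct.get_balance()
```

After the split, callers that only need the value can call `get_balance` without changing the object, which is the safety property the refactoring is after.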
To accommodate the possibility of errors, compilations and unit tests are interspersed between the major phases of this refactoring process to check that each phase has been done correctly. A more complete and realistic refactoring process further indicates the rework that must be done if a compilation or unit test fails. This typically involves revisiting the work that has been done, using an understanding of why that work failed to come up with another attempt that will hopefully succeed. The process specification must also accommodate the possibility that additional errors may be made in attempting to fix earlier errors, requiring more rework that may entail examining a lengthy history of previous attempts to fix the error. In addition, multiple errors may need to be addressed in parallel, and so on. This brief explanation should suggest how provenance data can be useful as the source of relevant history, and how presentation of this data could comprise context information that helps guide the efforts of the refactorer.
We now provide a detailed specification of some key parts of this refactoring process, indicating where and how rework occurs, and how provenance data can facilitate these parts. We use Little-JIL, a process definition language. The salient features of Little-JIL are described in Appendix B and in [17, 18].
Figure 1 shows a high-level Little-JIL definition of the second step of separating query from modifier, namely Modify Original Method. The step is decomposed into three substeps: making the change, compiling the changed code, and rerunning a regression test set. Each of the last two steps throws a typed exception if the step uncovers an error, with each exception handled by a child of the Modify Original Method step. Yellow “post-it” notes document the flow of process artifacts (e.g. sourcefilename, the source file being modified) between process steps. Thus, for example, Figure 1 shows that after changes are made in Change return statement, sourcefilecontent is sent to the parent step, which passes it to the compile and unit test steps, which could then throw either the CompilationFailureException or UnitTestFailureException exception. Figure 2 shows the UnitTestFailureException handler, which repeats previously performed substeps (e.g. Change return statement and Compile) that can themselves throw exceptions. These will be exception instances that are different from those thrown before, necessitating different rounds of rework aimed at fixing different aspects of the artifacts. Because of this nesting, refactorers have to make decisions in ever-deeper contexts as these artifacts evolve, making their corrections increasingly difficult to understand. For example, Figure 2 shows how Change return statement could be executed several times, but each time in a different context. The refactorer will then be faced with questions like: How did I get here? Why did previous fixes not work? How will my changes affect other artifacts? Appropriate contextual information can help answer these questions and support better decision-making. For example, the evolution history of the sourcefilecontent artifact could remind the refactorer of previous changes, thus helping the refactorer to avoid repeating a previous mistake and suggesting a more suitable correction.
3 Provenance Support for Rework
We consider context to be the collection of all information about previous and current process execution states. We collect and store this information in a Data Derivation Graph (DDG) [9], which is an execution trace that records the data-flows and control-flows in a Little-JIL process as the process executes. Specifically, it records the steps by which each artifact instance (including exceptions) is produced and consumed, the sequence of steps executed, the artifact values associated with each step execution, and the scopes within which each step instance was executed. Figure 3 shows the DDG generated by executing a small portion of the refactoring process described in Section 2. Ovals represent step instance execution stages and rectangles represent artifact instances. A step’s start and finish stages are separated to show how parent steps create scopes for their descendants (if any). Exception objects are shown in brown to distinguish them from other data objects. There are three types of edges, depicting data derivation, control flow, and artifact versions. An arrow from an artifact instance to a step stage instance represents the derivation of that artifact instance from the execution of that step instance. An arrow from a step stage instance to an artifact instance indicates that the step derived its output artifact(s) using the artifact instance(s) being pointed to. For example, the fact that a `sourcefilecontent` instance points to the `Change return statement` step instance indicates that `sourcefilecontent` is derived from the step that modifies the source method to return a call to the created query method. Derivation edges related to exceptions are shown in red to distinguish them more clearly. The DDG also contains control flow edges, which represent the execution order between two steps. Version edges indicate the update series for some particular artifact. They can be traversed to provide a sense of the artifact’s derivation history. Version edges are not generated in the DDG currently, but will be incorporated in future work.
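As a rough illustration (not the actual DDG implementation), the recording scheme described above can be sketched as a graph of step instances and artifact instances connected by derivation, use, and control-flow edges. All names and the representation here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DDG:
    """Toy Data Derivation Graph: records steps, artifacts, and edges."""
    steps: list = field(default_factory=list)       # step instance executions
    artifacts: list = field(default_factory=list)   # artifact instances
    derivation: list = field(default_factory=list)  # (artifact, producing step)
    uses: list = field(default_factory=list)        # (step, consumed artifact)
    control: list = field(default_factory=list)     # (earlier step, later step)

    def run_step(self, name, inputs):
        step = (name, len(self.steps))
        if self.steps:                              # control-flow edge from previous step
            self.control.append((self.steps[-1], step))
        self.steps.append(step)
        for art in inputs:                          # use edges for consumed artifacts
            self.uses.append((step, art))
        return step

    def produce(self, step, artifact_name, value):
        art = (artifact_name, value)
        self.artifacts.append(art)
        self.derivation.append((art, step))         # derivation edge: artifact -> step
        return art

# Record one round of the refactoring process:
ddg = DDG()
s1 = ddg.run_step("Change return statement", inputs=[])
src_v1 = ddg.produce(s1, "sourcefilecontent", "return query();")
s2 = ddg.run_step("Compile", inputs=[src_v1])
```

Traversing `derivation` backward from any artifact instance yields exactly the "How did I get here?" history that the paper argues supports rework.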
Figure 3 corresponds to part of the rework process starting from the completion of the Handle Compilation Failure step in Figure 1. This example illustrates the result of running unit tests after the `UnitTestFailureException` is thrown and, as defined in Figure 2, the refactorer’s reconsideration of previous decisions and repetition of the `Change return statement` and `Create query method` steps. In the scope of Handle Unit Test Failure, compilation of the most recent change fails, triggering the `UnitTestCompilationFailureException` and causing another round of rework. The process definition indicates how the exception handlers are nested and how the rework activities are nested as well, thereby providing the basis for an accurate presentation of the histories of derivations of all variables comprising the current context.
Our provenance support for rework automatically generates and maintains the DDG dynamically, making it accessible from all steps. Data objects in the DDG are linked to their actual values. These links and values are omitted from Figure 3 for simplicity. To suggest additional ways the DDG can aid rework, we incorporated a text-diff tool that records differences between DDG artifacts, which is particularly useful for comparing different versions of an artifact (found by traversing the `version` edges in Figure 3).
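A text-diff facility like the one mentioned above could be built on a standard unified diff. This sketch (assuming two hypothetical versions of `sourcefilecontent` reachable via the DDG's version edges) uses Python's `difflib`:

```python
import difflib

# Two hypothetical versions of the sourcefilecontent artifact:
v1 = "def check():\n    count += 1\n    return balance\n"
v2 = "def check():\n    return balance\n"

# Unified diff between the two artifact versions, labeled by version.
diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="sourcefilecontent v1", tofile="sourcefilecontent v2",
    lineterm=""))
print("\n".join(diff))
```

Presenting such diffs alongside the DDG would let a refactorer see exactly what changed between attempts, rather than reconstructing it from memory.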
Our experience in modeling and executing this simple refactoring process suggests that the kind of provenance support we propose here provides useful artifact management and context information assistance to reworkers. This assistance becomes increasingly useful as the rework activity and associated contextual information become more complex, in particular in supporting rework processes in which modifications result in conflicts with each other, creating complex ripple effects that propagate through the artifact space.
4 Related Work
Rework in the form of iterative artifact development is central in the Spiral Model [1] and the Incremental Commitment Model [2]. Cass et al. [6] proposed initial approaches to formalizing rework processes, and later characterized a rework pattern [5] as being triggered by exception instances and fixed by revisiting previous steps.
Other technologies exist for capturing data provenance during workflow execution. VisTrails [8] tracks changes to data and constructs a history tree to capture provenance. Callahan et al. [4] incorporated this approach in a process setting and proposed a uniform environment. Kepler [3] provides a mechanism for integrating a broad range of supporting tools for specification, execution, and visualization of scientific data processes, and builds a provenance data store incrementally as is done by our DDG. Some of the other approaches to provenance are summarized in [16]. We argue that exception handling and recursion are key features of Little-JIL that are missing from workflow languages and that enable creation of data provenance structures with semantic features essential to the effective support of rework.
5 Future Work
We will continue to improve our provenance support for software refactoring processes. For example, we will consider how to properly place and show the version edges of Figure 3 in the actual DDG, to help developers better understand the derivation history of the specific artifacts they are interested in. We are also building an interface in the Little-JIL step definition to invoke filter mechanisms for the DDG, in order to provide more fine-grained contextual information in response to users' queries; this could also be used to address privacy issues related to the process by hiding sensitive private data.
6 Acknowledgments
The authors thank Sandy Wise for many conversations leading to important insights and ideas about how to design our refactoring process, Lori Clarke for numerous discussions about DDGs and their relationship to Little-JIL, and the students who have worked on the Harvard Forest project over the years: Cori Teshera Sterne, Morgan Vigil, Sofiya Taskova, Garrett Rosenblatt and Andy Kaldunski. We also thank the National Science Foundation for its support of this research through grants CCF-0905530, DBI 04-52254 and DBI 10-03938. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
A Scientific Dataset Rework Example
A central aspect of science is the development of datasets that represent current or previous states of the world. Datasets include data collected by trained observers and remote, unsupervised sensors, both of which have varying degrees of accuracy or precision. Datasets are more than simply records of observations. Invariably, some observations will appear to be anomalous; on further inspection they may turn out to be accurate or inaccurate. Other observations may be completely missing due to such problems as sensor failure or communications difficulties. Inaccurate measurements may be adjusted based on auxiliary information or rejected outright (and converted to missing values). Missing data, whether screened as outliers or missing because of instrument or observer error, may be replaced by modeled values. In sum, different values in a typical scientific dataset will have been arrived at by different means: observed, adjusted, or modeled. Scientists who access and use such datasets typically need to know the ways in which the different values have been obtained; this information is called provenance. Many other investigators have made these observations and developed a wide variety of approaches to documenting provenance [3, 8, 12, 13].
Scientists typically regard the development of datasets as an ongoing evolutionary process. Often there are additional processes that are applied to datasets iteratively and over longer time periods. For example, the ways in which initial data values were screened and the modeled values used to replace them were calculated may be reviewed and reassessed many times, not only by the originator of the dataset but also by other individuals or groups. Such reassessment and reanalysis often result in the replacement of an earlier version of the dataset with a newer one. Revised datasets are common. Their associated provenances may be large and complex, reflecting not only variations in the ways in which initial data values were created but also the history of how individual data values have evolved as a result of multiple revisions.
The replacement of one dataset by another is typically determined, at least in part, by careful consideration of the factors that drove the generation of previous versions of the dataset. Here we explore the importance of making available to dataset evolvers the provenance information that documents how previous datasets have been generated and have been replaced by newer versions. We think of provenance information as first-class data that is part of the rework process undertaken by scientists who examine both the data and their provenance when making decisions about rework. The result of the rework consists of a new version of the dataset and also an extension to the provenance reflecting the rework process itself.
A.1 Modeling the Scientific Rework Process
Figure 4 shows a simple scientific process written in Little-JIL to collect data from a sensor. This process repeatedly gets sensor readings and saves the values in a database. If a sensor reading is unavailable, an NA value is written to the database.
The Get Data process is completely automated. Later, the raw data are reviewed, either by a scientist or a software system, who (which) replaces missing values (NAs) or outliers with modeled values. Figure 5 shows these activities as the Fill Gaps and Replace with Modeled Value steps and their substeps. Of particular interest is the Insert Modeled Value substep of the Fill Gaps step. This step first evaluates the available models, noting what has previously been tried, and selects a model to apply. When applying the model, the scientist may determine that the model yields unsatisfactory results, leading to creation and application of new models.
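A minimal sketch of the Fill Gaps step, assuming linear interpolation stands in for whatever model the scientist selects: NA readings are replaced by modeled values, and each value carries a provenance tag so later steps (and scientists) can tell observed from modeled data. The function and tag names are invented for the example.

```python
NA = None  # missing sensor reading, as written by the Get Data process

def fill_gaps(readings):
    """Replace interior NAs with modeled values, tagging each value's provenance."""
    filled = []
    for i, v in enumerate(readings):
        if v is not NA:
            filled.append((v, "observed"))
        else:
            # nearest observed neighbours; this sketch assumes the gap is interior
            left = next(readings[j] for j in range(i, -1, -1)
                        if readings[j] is not NA)
            right = next(readings[j] for j in range(i, len(readings))
                         if readings[j] is not NA)
            # midpoint of the neighbours as a stand-in "model"
            filled.append(((left + right) / 2, "modeled"))
    return filled

print(fill_gaps([20.1, NA, 20.5, 20.7]))
```

The tag attached to each value is exactly the kind of per-value provenance the appendix argues scientists need when deciding whether to rework a dataset.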
Updating the Modeling Technique is the third substep of Do Post-Processing in Figure 5. This step first finds the values that were modeled with the technique the scientist wishes to replace and then repeats the Insert Modeled Value activity. This recursive use of Insert Modeled Value and Update Modeling Technique captures the notion of rework in the scientific process.
A.2 Provenance as First-Class Data
Figure 6 shows a portion of a DDG created during the execution of the Fill Gaps process. In this example, the Find Gaps step takes Sensor Data as input and produces Gap Locations (values = NA) as output. The Analyze History step takes the history of sensor values from the DDG as input and finds the models that were used to create the current version of the data. In this simplified example, the database contains just sensor readings and NAs, so the output is that there are no previous models applied to the data. Also note that we have used an alternative view that omits the non-leaf nodes in order to present a more compact representation.
Figure 7 shows how the DDG can be used and enhanced during rework. This DDG shows an Unsatisfactory Result exception being thrown when the model is applied. This leads to the rework step of Updating Modeling Technique. The Find Values Modeled with Old Technique and Analyze History both use the History of Sensor Values extracted from the DDG as their input. This history also includes the result of Apply Model that just failed. As the process executes, the DDG is continuously updated and immediately available for examination within the process itself.
Figure 7 shows additional features associated with DDGs that describe rework. In addition to the control flow and data flow edges in Figure 6, edges also correspond to versioning and object equivalence. Specifically, the Sensor Data that is initially contained in the database may be replaced with new values when the Apply Model step is executed. One of the inputs of Apply Model is a Sensor Data object; it outputs a modified Sensor Data object. A version number on the node label distinguishes these objects. In Figure 7, different versions of Sensor Data are connected with double-headed edges; we can follow an edge from Sensor Data v3 to v2 to v1. Models used to produce those values also are connected with double-headed versioning edges.
Figure 7: DDG of the Scientific Rework Process

Equivalence edges illustrate that multiple nodes can correspond to the same data. Data can enter the process either by being generated by the process directly or by being looked up in a database. If a data value is calculated during the execution of the process, it will appear as an output from the step that calculated it. If it is passed as a parameter to another step, a data flow edge represents that. If the value becomes persistent, either because it is written to a database or becomes part of the DDG, it can re-enter the process as the result of a database query or a DDG query. The DDG shows this by connecting such nodes with an edge consisting of double lines to indicate that the data values are equivalent, but that the data did not flow directly from the step that output them to the later step that uses them. In Figure 7 an equivalence edge connects the Selected Model v1 node that is retrieved from the persistent DDG by the Analyze History step and the Selected Model v1 node that is the output of the Select Model in the top of the DDG to indicate that those represent the same model, even though the model did not flow directly between the steps involved.
A.3 Related Work
Provenance has been an important feature of scientific workflow systems for some time [3, 8, 11, 13]. Our contribution is the use of provenance data as first-class data available for examination by the scientist while carrying out scientific work. This use of provenance data as first-class data is beginning to appear in other aspects of provenance research as well. Zeng et al. [19] mine provenance data and event logs to create more complex workflows. Missier [10] uses provenance data to learn and guide automated decision making in workflows that require thousands of iterations. Muniswamy-Reddy and Seltzer [14] use provenance data to optimize cloud storage.
While scientists generally acknowledge that the type of rework we describe here is common, there has been little work in modeling the larger scientific process, particularly including rework. Oliveira et al. [15] organize related, perhaps reworked, scientific processes using process families. In contrast, we keep the process itself fairly high-level and treat some details, such as the modeling technique being used, as parameters that are evaluated during execution of the process.
B Little-JIL
Little-JIL is a graphical process definition language particularly suited for defining processes that require the coordination of multiple human and computational agents. Its semantics are precisely defined using finite-state automata. Among its distinguishing features are its use of scoping to make clear the identity of input and output datasets, its facilities for specifying parallel processing and for defining the handling of exceptional conditions, and the clarity with which iteration can be specified and controlled. A process is defined in Little-JIL using hierarchically decomposed steps.
A Little-JIL process definition consists of three main components: artifact space, resource repository, and coordination definitions. The coordination definitions include a collection of steps or activities that different agents are assigned to perform during process execution, and describe the coordination among the artifacts, activities, resources, and agents (which are treated as special kinds of resource). A Little-JIL coordination definition has a visual representation that is comprised of steps, which are hierarchically decomposed to the level of details (leaf steps) as users desire. Figure 8 shows the iconic representation of a single step. A Little-JIL step represents a task to be done by an assigned agent, and it can communicate with its parent steps and substeps through copy-in and copy-out parameter bindings of the artifacts. Each step has a sequencing badge to represent the type of control flow among its substeps, an interface to specify its input/output artifacts and resources, a prerequisite to be checked against before the step starts, a postrequisite to be checked against before the step reaches successful completion, and handlers for exceptions. A Little-JIL step also specifies how it should respond to events that may occur during execution and other features such as cardinality.
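The step semantics just described — a main body guarded by a prerequisite and a postrequisite, with failures routed to an exception handler — can be illustrated with a small sketch. This is not the Little-JIL runtime; the function and exception names are invented.

```python
class RequisiteFailure(Exception):
    """Raised when a step's prerequisite or postrequisite does not hold."""

def run_step(body, pre=None, post=None, handler=None):
    """Run a step body guarded by optional requisites; route failures to a handler."""
    try:
        if pre is not None and not pre():
            raise RequisiteFailure("prerequisite failed")
        result = body()
        if post is not None and not post(result):
            raise RequisiteFailure("postrequisite failed")
        return result
    except RequisiteFailure as exc:
        if handler is None:
            raise                    # propagate to the parent step's handlers
        return handler(exc)          # e.g. a rework step revisiting earlier work

# A compile step whose postrequisite checks for errors; the handler stands in
# for the Handle Compilation Failure rework step of the refactoring process.
result = run_step(
    body=lambda: {"errors": 1},
    post=lambda r: r["errors"] == 0,
    handler=lambda exc: "rework: fix and recompile")
print(result)  # rework: fix and recompile
```

Nesting calls to `run_step` inside handlers would mirror the ever-deeper rework scopes that the DDG records.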
The rigorous and articulate data-flow and control-flow specifications in Little-JIL form the basis for our provenance support. The complete specifications of Little-JIL can be found in [17]; we highlight some important features here.
- **Step sequencing.** Every non-leaf step has a sequencing badge (an icon embedded in the left portion of the step bar), which defines the order in which its sub-steps execute. Besides the sequential step shown in Figure 8, Little-JIL also supports concurrency, ordered choices, and unordered choices.
- **Data artifacts and data flows.** Each step declares the data that it creates and uses, while annotations on the edges (not shown in Figure 8) indicate how the data flows from one activity to another. As is shown in Figure 1, the Change return statement step declares `sourcefilename` as the input parameter and `sourcefilecontent` as the output parameter. The `sourcefilename` will be passed into its scope when the step is posted and ready to start, and the `sourcefilecontent` will be copied out once the step completes.
- **Requisites.** A Little-JIL step can optionally be preceded and/or followed by a requisite step executed before and/or after (respectively) the step’s main body. Requisites enable the checking of a specified condition either as a pre-condition for step execution or as a post-condition to assure that the execution has been acceptable. If a requisite fails, an exception is triggered to allow the error to be handled. The compilation and unit testing steps in our refactoring process definitions can also be implemented as post-requisites.
- **Exception Handling.** A step in Little-JIL can signal the occurrence of exceptional conditions when there are aspects of its execution that fail (such as violation of one of the step’s requisites). These are important to allow for deviations in the execution of the process due to errors or unusual conditions. Our current process model treats the rework process as being triggered by exception instances, and the exception handler, as is defined in Figure 2, elaborates
Researchers in computational biology today make use of a large number of different software packages for modeling, analysis, and data manipulation and visualization. In this paper, we describe the ERATO Systems Biology Workbench (SBW), a software framework that allows these heterogeneous application components—written in diverse programming languages and running on different platforms—to communicate and use each others' data and algorithmic capabilities. Our goal is to create a simple, open-source software infrastructure which is effective, easy to implement and easy to understand. SBW uses a broker-based architecture and enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe the SBW architecture and the current set of modules, as well as alternative implementation technologies.
1 Introduction
The ERATO Systems Biology Workbench (SBW) is a framework for allowing both legacy and new application resources to share data and algorithmic capabilities. Our target audience is the computational biology community whose interest lies in simulation and numerical analysis of biological systems. Our work has been motivated by the desire to achieve interoperability between a set of tools developed by our collaborators: BioSpice\(^1\), DBsolve\(^2\), E-Cell\(^3\), Gepasi\(^4\), ProMoT/DIVA\(^5\), Jarnac\(^6\), StochSim\(^7\), and Virtual Cell\(^8\). Since these applications are written in a variety of languages and run on a variety of platforms, it was essential not to limit integration capabilities to resources implemented in a single language or platform. SBW allows communication between processes potentially located across a network on different hardware and operating systems. SBW currently has bindings to C, C++, Java, Delphi and Python, with more planned for the future, and it is portable to both Windows and Linux.
We are aware that our target community is largely not composed of professional software programmers. Any software development carried out in this community tends to be secondary to the main research effort. As a result, we have endeavored to make integration of software components into SBW as straightforward as possible. SBW is also an open-source framework, to allow the community to evolve and grow SBW with their changing needs. In addition, many laboratories have budgetary constraints. Unlike the licensing terms of a number of other frameworks, our use of an open-source license (GNU Lesser General Public License, LGPL) guarantees that SBW will remain available at no cost indefinitely, while simultaneously allowing developers the freedom to release closed-source modules that work with SBW.
SBW does not attempt to be more than a mechanism to enable the integration of applications. The architecture of SBW does not exclude its integration with other frameworks and integration technologies. In fact, we hope to integrate other frameworks to extend the functionality available to users and developers. In many configurations, SBW will be a small component in a larger system (size being quantified as CPU, disk and memory usage).
We begin by describing SBW from the user’s perspective in Section 2, then from the developer’s perspective in Section 3. In Section 4, we go on to compare its features to those of other tools for building interoperable software. Finally, in Section 5 we describe the various modules made available in the initial beta release of SBW in November 2001.
2 SBW from the User’s Perspective
When an application has been modified to interact with SBW, we describe it as being SBW-enabled. This means the application can interact with other SBW-enabled applications. The kinds of possible interactions depend on the facilities that have been exposed to SBW by the applications' programmers. Typical SBW-enabled applications also provide ways of exchanging models and data using an XML-based common representation format, the Systems Biology Markup Language (SBML). SBW is not a controller in the system: the flow of control is entirely determined by what the individual modules and the user do. SBW does not define any particular type of user interaction with modules: the user can control modules from a script-language interpreter, from GUIs, or from some hybrid of the two. The interpreter approach is inevitably more flexible, enabling access to all modules in the SBW environment from a single application environment. In the remainder of this section, we present a scenario where the user controls events via GUIs only.
A user will typically start up the first SBW-enabled application as they would any other program. The user doesn’t need to do anything specific to start SBW itself. Figure 1 shows an example of using a collection of SBW-enabled software modules. The upper left-hand area in the figure (partly covered by other windows) shows an SBW-enabled version of JDesigner, a visual biochemical network layout tool. This module’s appearance is nearly identical to that of its original non-SBW-enabled counterpart, except for the presence of a new item in the menu bar called SBW. This is typical of SBW-enabled programs: the SBW approach strives to be minimally intrusive. In this example, the user has created a network model in JDesigner, then has decided to run a time-series simulation of the model. To do this, the user has pulled down the SBW menu and selected one of the options listed, Jarnac Analysis, to invoke the SBW-enabled simulation program Jarnac. This has brought forth a control GUI, shown underneath the plot window in the lower right-hand area of Figure 1; the user has then input the necessary parameters into the control GUI to set up the time-series simulation, and has finally clicked the Run button in the GUI to start the simulation.
In this example, the control GUI used SBW calls to instruct the simulation module (Jarnac) to run with the given parameters and send the results back to the controlling GUI module, which then sent the results to a plotting module. This example scenario illustrates the interactions involved in using SBW and four modules: the visual JDesigner, the computational module Jarnac, a time-series simulation control GUI, and a plotting module.
3 SBW from the Developer’s Perspective
SBW uses a broker-based, message-passing architecture that allows dynamic extensibility and configurability. As mentioned above, software modules in SBW can interact with each other as peers in the overall framework. Modules are started on demand through user requests or program commands. Modules are executables which have their own event loops. All remote calls run in their own threads. As shown in Fig. 2, interactions are mediated through the SBW Broker, a small program running on a user’s computer; the Broker enables locating and starting other modules and establishing communications links
between them. Communications are implemented using a fast, lightweight system with a straightforward programming interface.
Broker-based architectures are a common software pattern. They are a means of structuring a distributed software system with decoupled components that interact by remote service invocations. In SBW, the remote service invocations are implemented using message passing, another tried and proven approach. Because interactions in a message-passing framework are defined at the level of messages and protocols for their exchange, it is easier to make the framework neutral with respect to implementation languages and platforms: modules can be written in any language, as long as they can send, receive and process appropriately-structured messages using agreed-upon conventions. SBW's dynamic extensibility and configurability mean that components (i.e., SBW modules) can easily be exchanged, added or removed, even at run-time, under user or program control.
From the application programmer’s point of view, it is preferable to isolate communications details from application details. For this reason, we provide an
Application Programming Interface (API)\textsuperscript{15} that hides the details of constructing and sending messages and provides ways for methods in an application to be “hooked into” the messaging framework.
We strove to develop an API for SBW that provides a natural and easy-to-use interface in each of the different languages for which we have implemented libraries. By “natural”, we mean that it uses a style and features that programmers accustomed to that language would find familiar. For example, in Java, the high-level API is oriented around providing SBW clients with proxy objects whose methods implement the operations that another application exposes through SBW.
An SBW module provides one or more interfaces or \textit{services}. Each service provides one or more methods. Modules register the services they provide with the SBW Broker. The module optionally places each service it provides into a \textit{category}. By convention, a \textit{category} is a group of services from one or more modules that have a common set of methods.
As an example of how simple the high-level API is to use in practice, the following is Java code demonstrating how one might invoke a simulator from a hypothetical module:
\begin{verbatim}
// Java interface describing the operations exposed by the simulator service.
interface Simulator
{
    void loadSBML(String model);
    void setTimeStart(double time);
    void setTimeEnd(double time);
    void setNumPoints(int numPoints);
    double[] simulate();
}

double[] runSimulation(String modelDefinition, double startTime,
                       double endTime, int numPoints)
{
    try
    {
        // Start a new instance of the simulator module.
        Module module = SBW.getModuleInstance("edu.caltech.simulator");
        // Locate the service we want to call in the module.
        Service srv = module.findServiceByName("simulation");
        Simulator simulator = (Simulator) srv.getServiceObject(Simulator.class);
        // Send the model to the simulator and set its parameters.
        simulator.loadSBML(modelDefinition);
        simulator.setTimeStart(startTime);
        simulator.setTimeEnd(endTime);
        simulator.setNumPoints(numPoints);
        // Run the simulation and return the result.
        return simulator.simulate();
    } catch (SBWException e) {
        // Handle problems here.
        return null;
    }
}
\end{verbatim}
As the example above shows, using an SBW-enabled resource involves getting a reference to the module that implements a desired service and invoking methods on that service.
4 Comparison to Related Efforts
The idea of creating a framework that enables the integration of disparate software packages is not new. When we began this project, we considered using an existing framework and simply augmenting it with additional facilities. But after examining a number of other options, we were forced to conclude that none of the existing systems provided an adequate combination of simplicity, support for major programming and scripting languages, support for dynamically querying modules for services they offer, support for distributed computing on Windows and Linux (with a clear ability to be ported to other platforms), and free availability of open-source implementations for Windows and Linux.
4.1 Frameworks for Computational Biology
One of the projects most similar to SBW is ISYS. This system provides a generalized platform into which components may be added in whatever combination the user desires. The system provides a bus-based communications framework that allows components to interoperate without direct knowledge of each other, by using a publish-and-subscribe approach in which components place data on the bus and other components can listen for and extract the data when it appears. ISYS components include graphical visualization tools and database access interfaces. This style of interoperability is an alternative to the more direct communications in SBW, but could be used to the same ends.
The main drawback of ISYS for our goals is that it is not freely available and distributable. Moreover, it is largely Java-based and does not offer direct support for components written in other languages.
4.2 General-Purpose High-Level Frameworks
In terms of communications frameworks, SBW has many similarities to Java RMI and CORBA. Both of the latter technologies enable a programmer to tie together separate applications potentially running on different computers, and both offer directory services so that modules can dynamically query and discover the services being made available by other modules. Unfortunately, Java RMI is only truly practical when all applications are written in Java, conflicting with our goal of supporting as many languages as possible. Although RMI-over-IIOP is an option, this simply means that the non-Java components would have to use CORBA.
CORBA is the industry standard for broker-based application object integration. We decided against using CORBA as the basis of SBW primarily because of issues of standards compliance, complexity and maintenance. CORBA is a large and complicated standard and has a steep learning curve. We felt it would have been too much to ask of most researchers, whose time is limited and whose main goals are in developing domain-specific applications, to acquire CORBA development skills. Further, there are no open-source, standards-compliant implementations of CORBA that support sufficiently many languages in the same implementation. The implication is that SBW modules written in different languages would have to interact with CORBA implementations from different open-source projects. We were concerned about the difficulties of managing not only compatibility of different CORBA packages, but also the installation process, user documentation, and long-term maintenance.
Notwithstanding these issues, we are not in principle opposed to using CORBA. Indeed, we plan to design an interface that will provide a CORBA bridge to SBW for those developers who prefer to use this technology.
4.3 Low-Level Communications Frameworks
SBW uses a custom message-passing communications layer with a simple tagged data representation format and a specialized protocol layered on top of TCP/IP sockets. We examined several alternatives before implementing the scheme used in SBW.
Two attractive, recent alternatives were SOAP\textsuperscript{20} and XML-RPC\textsuperscript{21}. The latter is essentially a much-simplified version of the former; both provide remote procedure calling facilities that use HTTP as the protocol and XML as the message encoding. We performed an in-depth comparison of XML-RPC and SBW’s messaging protocols\textsuperscript{22} and concluded that XML-RPC and SOAP would not work for the goals of SBW. The HTTP and XML layers impose a performance penalty not present in SBW’s simpler protocol and encoding scheme. Further, the HTTP protocol is not bidirectional: HTTP is oriented towards client-server applications in which a client initiates a connection to a server listening on a designated TCP/IP port. The implication of using XML-RPC for SBW is that each module would have to listen on a different TCP/IP port. This would add needless complexity to SBW.
Another alternative for the message-passing functionality in SBW is MPI\textsuperscript{14}. We declined using MPI primarily because at this time there does not appear to be a standard Java interface, and because MPI is considerably more complex than the simple message-passing scheme used in SBW. However, MPI remains an option for reimplementing the communications facility in SBW if it proves useful to do so in the future.
5 SBW Modules
In this section, we describe a variety of different modules that we have implemented and released with the SBW beta release in November 2001.
5.1 **Inspector Module**
The inspector module is a GUI-based tool that allows a user to explore the SBW environment. It enables other modules, and their services and methods, to be inspected. In the future we hope to extend the inspector module to enable individual methods of a module service to be executed, in which case the inspector will provide an excellent tool for testing new modules.
5.2 **JDesigner**
JDesigner, developed by Herbert Sauro, allows users to draw biochemical networks on screen. It can save models in SBML format. We provide an SBW interface to JDesigner which allows other modules connected to SBW to gain access to the functionality of JDesigner. In particular, it is possible for a remote module to request SBML code from JDesigner. In addition, we also provide an interface which allows remote modules to control many details of JDesigner, for example providing the ability to rearrange the network on-screen.
JDesigner has a menu option “SBW” which lists the services registered with the SBW Broker in the “Analysis” category (e.g. the MATLAB Model Generator, described below). JDesigner passes the SBML representing the drawn model to the selected service.
5.3 **Network Object Model**
The most frequently requested module is some means of parsing and interpreting SBML. SBML is defined in terms of XML, and for many developers in our community it is a non-trivial task to code a parser for SBML. We have therefore written a module, called the Network Object Model or NOM, with methods that load and generate SBML as well as methods for accessing and modifying the object model constructed from the loaded SBML. The NOM module can be used as an SBML clipboard for moving data between applications.
5.4 **Optimization Module**
A frequent need in modeling is the ability to fit parameters to a model. This problem is recast as the minimization of some predefined fitting function by adjusting model parameters. We have collaborated with Pedro Mendes to give users access to the extensive optimization algorithms in Gepasi.
5.5 Plotting Module
This module provides a 2D graph plotting service.
5.6 MATLAB Model Generator
This translation module creates either ODE or Simulink models for MATLAB\textsuperscript{23} from SBML. This module provides services in the “Analysis” category and thus can be invoked from JDesigner or similar modules. We anticipate integrating the MATLAB application itself as a module in later releases, enabling SBW modules to be invoked from the MATLAB command line and scripts.
5.7 Simulation Control GUI
The simulation control GUI is a non-scripting interface to a simulation server such as Jarnac. The simulation control GUI service is in the “Analysis” category. The GUI enables users to set up simulation runs, edit parameters or variables and plot the resulting runs. In addition the GUI can also be used to compute the steady state and carry out metabolic control analysis. Any simulator in the “Simulation” category can be controlled from this GUI interface. The Gillespie, Gibson and Jarnac modules (see below) provide services in that category.
5.8 Gillespie Stochastic Simulator
The stochastic simulator module is based on the Gillespie algorithm\textsuperscript{24}. The code which forms the basis of this module was provided by Baltazar Aguda\textsuperscript{25}. Once a model is loaded, the module allows a user to change parameters and variables in addition to graphing of results and collection of run data.
5.9 Gibson Stochastic Simulator
This stochastic simulator uses an algorithm\textsuperscript{26}, developed by Gibson and Bruck, based on the Gillespie algorithm which includes optimizations to reduce simulation run times.
5.10 Jarnac Simulator
Jarnac\textsuperscript{6} is an ODE-based biochemical network simulator. Simulations are controlled via a scripting language. Services supported by Jarnac include matrix manipulation, time-course simulation, steady-state analysis and metabolic
control analysis. Adding an SBW interface to Jarnac permits two types of interaction. In one mode, Jarnac can act as a server for carrying out simulations. This allows users access to the capabilities of Jarnac without having to interact with a scripting interface.
The second mode of operation is from the scripting interface itself. In this mode, the user is able to explore and use SBW modules from a command line. Interaction is achieved by requesting Jarnac to create a Jarnac object interface to the desired module. This allows a user to use a module as if it were part of Jarnac itself.
6 Summary
SBW is a flexible and straightforward system for integrating a range of heterogeneous software components written in a variety of languages and running on a variety of platforms. At the time of this writing, we have completed the implementation of the SBW Broker and the libraries that implement the SBW protocol in Delphi, C, C++, and Java. Full documentation of the SBW design is available from the project web site. A beta release of the SBW software and several sample modules was made in November 2001 and is available from the project web site.
Acknowledgments
This work has been funded by the Japan Science and Technology Corporation under the ERATO Kitano Systems Biology Project. The Systems Biology Workbench has benefitted from the input of many people. We wish to acknowledge in particular the authors of BioSpice, DBsolve, Cellerator, E-Cell, Gepasi, ProMoT/DIVA, StochSim, and Virtual Cell, and the members of the sysbio mailing list. We also thank Mark Borisuk, Mineo Morohashi and Tau-Mu Yi for support, comments and advice.
References
25. Baltazar Aguda, personal communication.
BiRewire
Andrea Gobbi, Francesco Iorio
Contents
1 Overview
2 Installation
3 Package Dependencies
4 Notation
5 Directed Signed Network
6 Function Description
  6.1 birewire.analysis.bipartite and .undirected
  6.2 birewire.rewire.bipartite
  6.3 birewire.rewire.undirected
  6.4 birewire.similarity
  6.5 birewire.rewire.bipartite.and.projections
  6.6 birewire.sampler.bipartite
  6.7 birewire.visual.monitoring.bipartite and .undirected
  6.8 birewire.load.dsg and birewire.save.dsg
  6.9 birewire.induced.bipartite and birewire.build.dsg
  6.10 birewire.rewire.dsg
  6.11 birewire.similarity.dsg
  6.12 birewire.sampler.dsg
7 Directed graphs
8 Example
1 Overview
BiRewire is an R package implementing high-performing routines for the randomisation of bipartite graphs preserving their node degrees (i.e. Network Rewiring), through the Switching Algorithm (SA) [5].
This package is particularly useful for the randomisation of '0-1' tables (or presence-absence matrices) in which the distributions of non-null entries (i.e. presence distributions) must be preserved both across rows and columns. By considering these tables as incidence matrices of bipartite graphs, this problem reduces to bipartite network rewiring.
For example, by modeling a genomic dataset as a binary event matrix (BEM),
in which rows correspond to samples, columns correspond to genes and the
\((i, j)\) entry is non-null if the \(i\)-th sample harbours a mutation in the \(j\)-th gene,
then with BiRewire is possible to randomise the dataset preserving its mutation
rates both across samples and genes. This is crucial to preserve tumour
specific alterations, dependencies between gene-mutations and heterogeneity in
mutation/copy-number-alteration rates across patients.
Large collections of such randomised tables can be then used to approximate
samples from the uniform distribution of all the possible genomic datasets with
the same mutation-rates of the initial one. Finally this data can be used as null
model to test the statistical significance of several combinatorial properties of
the original dataset: for example the tendency of a group of genes to be co-
or mutually-mutated [7].
Moreover, with the same routines, it is possible to generate a rewired version
of a given directed signed network (DSG), encoding for example a pathway or
a signalling network (for details see section 5 and [2]). Similar procedures have
been implemented to manage undirected networks. Since version 3.6.0,
the SA can also be performed on matrices containing NAs. In this case the
SA works as usual but the positions of the NAs are preserved. This feature is
available if the graph is encoded with its incidence/adjacency matrix and not
in the case of DSGs.
Specifically, with BiRewire users can:
1. create bipartite graphs from genomic BEMs (or, generally, from any kind
of presence-absence matrix);
2. perform an analysis, which consists of studying the trend of Jaccard Similarity
between the original network and its rewired versions across the
switching steps (by using a user-defined sampling time), and analytically
estimating the number of steps at which this similarity reaches a plateau
(i.e. the maximal level of randomness is achieved) according to the lower
bound derived in [1];
3. generate rewired versions of a bipartite graph with the analytically derived
bound as number of switching steps or a user-defined one;
4. derive projections of the starting network and its rewired version and
perform different graph-theory analysis on them;
5. generate a set of networks correctly drawn from the suitable null-model
starting from the initial BEM;
6. monitoring the behaviour of the Markov chain underlying the SA
7. perform the same analysis described in points 1, 2, 3, 5 and 6 for undirected
graphs and directed signed graphs (DSGs).
All the functions of the package are written in C code and R-wrapped. A
reduced version of the package has also been implemented in Python: https://github.com/andreagobbi/pyBiRewire.
2 Installation
It is possible to download the package from http://www.ebi.ac.uk/~iorio/BiRewire and install it with the shell-command:
```bash
R CMD INSTALL BiRewire_xx.yy.zz.tar.gz
```
or with BiocManager::install() directly in R:
```r
if (!requireNamespace("BiocManager", quietly=TRUE))
install.packages("BiocManager")
BiocManager::install("BiRewire")
```
Moreover, the sources of the development version are available here http://www.bioconductor.org/packages/devel/bioc/html/BiRewire.html. Alternatively, the source files can be cloned from the github repositories: https://github.com/andreagobbi/BiRewire and https://github.com/andreagobbi/BiRewire--release using the command
```bash
git clone git@github.com:andreagobbi/BiRewire--release.git
git clone git@github.com:andreagobbi/BiRewire.git
```
We suggest using the BiocManager::install() function from R in order to obtain the latest working release of the package (build and check procedure).
To load BiRewire use the following commands:
```r
> library(BiRewire)
```
3 Package Dependencies
4 Notation
Let $G$ be a bipartite graph, i.e. a graph containing two classes of nodes $V_r$ and $V_c$ such that every edge $e \in E$ connects one node in the first class to a node in the second class.
Let $B$ be the incidence matrix of $G$, i.e. the $|V_r| \times |V_c|$ binary matrix whose generic entry $m_{i,j}$ is not null if and only if $(i, j) \in E$.
The number of edges is indicated with $e = |E|$ and the edge density with $d = \frac{e}{|V_r|\,|V_c|}$.
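For concreteness, $e$ and $d$ can be read directly off a 0-1 incidence matrix. The helper below is an illustrative Python sketch, not part of BiRewire:

```python
def edge_count_and_density(B):
    # B is a 0-1 incidence matrix: rows index V_r, columns index V_c.
    n_r, n_c = len(B), len(B[0])
    e = sum(sum(row) for row in B)   # e = |E|
    d = e / (n_r * n_c)              # d = e / (|V_r| * |V_c|)
    return e, d

B = [[1, 0, 1],
     [0, 1, 0]]
assert edge_count_and_density(B) == (3, 0.5)
```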
The SA performs $N$ Switching Steps (SSs), in which:
1. two edges $(a, b)$ and $(c, d)$, both $\in E$, are randomly selected,
2. if $a \neq c$, $b \neq d$, $(a, d) \notin E$ and $(c, b) \notin E$ then:
a. the edges $(a, d)$ and $(c, b)$ are added to $E$ and
b. the edges $(a, b)$ and $(c, d)$ are removed from $E$.
Notice that we count a SS only if it is successfully performed.
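A single SS can be sketched in a few lines. The following is an illustrative Python reimplementation (BiRewire itself uses compiled C routines); the node labels and the degree check are hypothetical:

```python
import random

def switching_step(edges, rng=random):
    """One attempted Switching Step on a bipartite edge set.

    `edges` is a set of (row_node, col_node) pairs. Two edges (a,b) and
    (c,d) are drawn at random; if swapping endpoints to (a,d) and (c,b)
    creates no duplicate edge, the swap is performed. Returns True only
    on success, mirroring the convention that only successful SSs count.
    """
    (a, b), (c, d) = rng.sample(sorted(edges), 2)
    if a != c and b != d and (a, d) not in edges and (c, b) not in edges:
        edges.difference_update({(a, b), (c, d)})
        edges.update({(a, d), (c, b)})
        return True
    return False

def degrees(es):
    # Degree of every node, rows and columns together.
    deg = {}
    for r, c in es:
        deg[r] = deg.get(r, 0) + 1
        deg[c] = deg.get(c, 0) + 1
    return deg

# Node degrees are invariant under the step:
edges = {("s1", "g1"), ("s1", "g2"), ("s2", "g2"), ("s2", "g3"), ("s3", "g1")}
before = degrees(edges)
for _ in range(200):
    switching_step(edges)
assert degrees(edges) == before
```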
The Jaccard Index (JI, [10]) is used to quantify the similarity between the original graph and its rewired version at the $k$-th SS. Since the SA preserves the degree distribution and does not alter the number of nodes, the JI, indicated with $s^{(k)}$, can be computed as
$$s^{(k)} = \frac{x^{(k)}}{2e - x^{(k)}}$$
where $x^{(k)}$ is the number of edges in common between the two graphs. For a fixed small error $\delta$, the number $N$ of SSs providing the rewired version of a network with the maximally achievable level of randomness (in terms of average dissimilarity from the original network) is asymptotically equal to
$$\frac{e(1-d)}{2} \ln \frac{1-d}{\delta}.$$
More precisely, we analytically derived the fixed point $\bar{x}$ of the underlying Markov chain; for a fixed $\delta$, we can estimate the distance of the current state of the chain from this fixed point in terms of the fraction $\delta$ of edges. For large networks we can assume that a distance of less than one edge is acceptable (see [7]), in which case the bound reads:
$$\frac{e(1-d)}{2} \ln e(1-d),$$
but in order to handle smaller networks, the bound with the parameter $\delta$ is more general. This bound is much lower than the empirical one proposed in [5] (see References for details).
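To make the two formulas concrete, the following Python sketch (an illustration, not BiRewire code) computes $s^{(k)}$ from a pair of edge sets and the analytical bound on $N$:

```python
import math

def jaccard_similarity(original, rewired):
    # s = x / (2e - x), where x is the number of shared edges and e = |E|.
    x = len(original & rewired)
    e = len(original)
    return x / (2 * e - x)

def switching_steps_bound(e, d, delta=None):
    # N ~ (e(1-d)/2) * ln((1-d)/delta). Choosing delta = 1/e (a distance
    # of less than one edge from the fixed point) recovers the simpler
    # form (e(1-d)/2) * ln(e(1-d)) quoted in the text.
    if delta is None:
        delta = 1.0 / e
    return 0.5 * e * (1 - d) * math.log((1 - d) / delta)

g1 = {(1, "a"), (1, "b"), (2, "b")}
assert jaccard_similarity(g1, g1) == 1.0  # identical graphs
assert abs(switching_steps_bound(100, 0.1)
           - 0.5 * 100 * 0.9 * math.log(100 * 0.9)) < 1e-9
```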
5 Directed Signed Network
A directed signed network (DSG) $G$ is a directed network in which the edges are encoded as triplets $(a, b, \star)$, where $a$ denotes the source node, $b$ the target node and $\star$ the sign of the relation (the sign of the edge). In our case (pathways and signalling), $\star$ can be positive (+) or negative (-). In [2] we show how to create a correspondence between a DSG and a couple $(B^+, B^-)$ of bipartite networks. This correspondence $f$, and its inverse $f^{-1}$, are useful for the creation of a rewired version of $G$: we rewire $B^+$ and $B^-$ independently and rebuild the final DSG $G^*$ using $f^{-1}$. A DSG is usually encoded in the SIF format (Simple Interaction File; see http://wiki.cytoscape.org/Cytoscape_User_Manual/Network_Formats for more information). In the case of a DSG, a suitable SIF file has 3 columns: the first encodes the source nodes, the second the sign and the last the target nodes.
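The correspondence $f$ amounts to splitting the SIF triplets by sign. The helper names below are hypothetical and serve only to illustrate the decomposition and its inverse:

```python
def dsg_to_bipartite_pair(triplets):
    """f: split (source, sign, target) SIF triplets into (B+, B-)."""
    b_plus = {(s, t) for s, sign, t in triplets if sign == "+"}
    b_minus = {(s, t) for s, sign, t in triplets if sign == "-"}
    return b_plus, b_minus

def bipartite_pair_to_dsg(b_plus, b_minus):
    """f^{-1}: re-attach the sign to each pair."""
    return ({(s, "+", t) for s, t in b_plus}
            | {(s, "-", t) for s, t in b_minus})

sif = {("A", "+", "B"), ("B", "-", "C"), ("A", "-", "C")}
bp, bm = dsg_to_bipartite_pair(sif)
assert bipartite_pair_to_dsg(bp, bm) == sif  # round trip recovers the DSG
```

Rewiring $B^+$ and $B^-$ independently between the two calls yields a rewired DSG with the sign-specific degree sequences preserved.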
6 Function Description
In this section all the functions implemented in BiRewire are described with a simple practical example in which a real breast cancer dataset is modeled as a bipartite network, and randomised preserving the mutation-rate both across
samples and genes (i.e. the corresponding bipartite network is rewired). In each of the following functions it is possible to perform $N$ successful switching steps (see [1] for more details about this more general bound) using the flag `exact=TRUE`. To prevent a possible infinite loop, the program performs at maximum `MAXITER_MUL*max.iter` iterations.
### 6.1 `birewire.analysis.bipartite` and `.undirected`
First of all, we create a bipartite network modeling a genomic breast cancer dataset downloaded from the Cancer Genome Atlas (TCGA) project data portal [http://tcga.cancer.gov/dataportal/], used in [1]. From this dataset germline mutations were filtered out with state-of-the-art software; synonymous mutations and mutations identified as benign and tolerated were also removed. The resulting bipartite graph has $n_r = 757$ nodes (corresponding to samples), $n_c = 9,757$ nodes (corresponding to genes), and $e = 19,758$ edges, connecting a sample node to a gene node if that gene is mutated in that sample. The edge density of this network is 0.27%.
The genomic dataset (in the form of a binary matrix in which rows correspond to samples, columns correspond to genes and the $(i, j)$ entry is non null if the $i$-th sample harbours a mutation in the $j$-th gene) can be loaded and modeled as a bipartite graph, with the following commands:
```r
> data(BRCA_binary_matrix)##loads a binary genomic event matrix for the
> ##breast cancer dataset
> g=birewire.bipartite.from.incidence(BRCA_binary_matrix)##models the dataset
> ## as igraph bipartite graph
```
Once the bipartite graph is created it is possible to conduct the analysis by calling the `birewire.analysis.bipartite` function, using the following commands:
```r
> step=5000
> max=100*sum(BRCA_binary_matrix)
> scores<-birewire.analysis.bipartite(BRCA_binary_matrix,step,
+ verbose=FALSE,max.iter=max,n.networks=5,display=F)
```
The function `birewire.analysis.bipartite` returns the Jaccard similarity sampled every `step` SSs (in the example above `step` is equal to 5000). The SA is independently applied to the initial data `n.networks` times in order to estimate the mean value of the JI and the relative CI (as $\pm 1.96\,\sigma/\sqrt{n.networks}$). A plot of this information is displayed if the parameter `display` is set to TRUE. The routine returns a list of two elements: `$N`, the analytically derived bound, and `$data`, the similarity score table.
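The CI half-width above is the usual normal approximation. A small Python check of the $\pm 1.96\,\sigma/\sqrt{n}$ computation (illustrative only; not part of the package):

```python
import math


def jaccard_ci(samples):
    """Mean Jaccard index of the sampled networks and the half-width
    of its 95% confidence interval, 1.96 * sigma / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return mean, 1.96 * sigma / math.sqrt(n)
```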
The same analysis can be performed on general undirected networks.
```r
> g.und<-erdos.renyi.game(directed=F,loops=F,n=1000,p.or.m=0.01)
> m.und<-get.adjacency(g.und,sparse=FALSE)
> step=100
> max=100*length(E(g.und))
> scores.und<-birewire.analysis.undirected(m.und,step=step,
+ verbose=FALSE,max.iter=max,n.networks=5)
```
### 6.2 `birewire.rewire.bipartite`
To rewire a bipartite graph, two modalities are available. Both of them can be used with the analytical bound $N$ as the number of switching steps, or with a user-defined value. The function takes as input an incidence matrix $B$ or an igraph bipartite graph.
```r
> m2<-birewire.rewire.bipartite(BRCA_binary_matrix,verbose=FALSE)
> g2<-birewire.rewire.bipartite(g,verbose=FALSE)
```
The first call returns the incidence matrix of the rewired graph, while the second one returns a bipartite igraph graph. See the documentation for further details.
### 6.3 `birewire.rewire.undirected`
To rewire a general undirected graph the following functions can be used:
```r
> m2.und<-birewire.rewire.undirected(m.und,verbose=FALSE)
> g2.und<-birewire.rewire.undirected(g.und,verbose=FALSE)
```
### 6.4 `birewire.similarity`
This function computes the Jaccard index between two incidence matrices with the same dimensions and node degrees. It is also possible to pass two suitable graphs directly.
```r
> sc<-birewire.similarity(BRCA_binary_matrix,m2)
> sc<-birewire.similarity(BRCA_binary_matrix,t(m2))# also works
```
### 6.5 `birewire.rewire.bipartite.and.projections`
The following function executes the Switching Algorithm and computes similarity trends across its switching steps for the two natural projections of the starting bipartite graph.
```r
> # use a smaller graph!
> gg <- graph.bipartite( rep(0:1,length=10), c(1:10))
> result<-birewire.rewire.bipartite.and.projections(gg,step=10,
+ max.iter="n",accuracy=0.00005,verbose=FALSE)
> plot(result$similarity_scores.proj2,type='l',col='red',ylim=c(0,1))
> lines(result$similarity_scores.proj1,type='l',col='blue')
> legend("top",1, c("Proj2","Proj1"), cex=0.9, col=c("red","blue"), lty=1:1,lwd=3)
```
### 6.6 `birewire.sampler.bipartite`
This function uses the SA to generate a set of $K$ bipartite networks drawn from the null model given by an initial bipartite graph. The function creates a main folder (`path` input parameter) and a set of subfolders in order to have at most 1000 files per folder. Notice that the initial graph is used only for the first rewiring process; the output of the first process is used as input for the second, and so on.
```r
> # use a smaller graph!
> gg <- graph.bipartite(rep(0:1,length=10), c(1:10))
> ## NOT RUN
> ## birewire.sampler.bipartite(get.incidence(gg), K=10, path = 'TESTBIREWIRE', verbose = F)
> ## unlink('TESTBIREWIRE', recursive = T)
```
### 6.7 `birewire.visual.monitoring.bipartite` and `.undirected`
These functions allow one to visualize the Markov chain underlying the SA. More in detail, given a sequence of steps to test, we sample from the SA at each indicated step, generating a set of networks. We compute the pairwise Jaccard distance among them (the Jaccard distance is defined as 1 minus the Jaccard similarity). Then we perform a dimensionality reduction using Rtsne [12] and plot the result.
```r
> ggg <- graph.bipartite(rep(0:1,length=10), c(1:10))
> tsne = birewire.visual.monitoring.bipartite(ggg, display = F, n.networks = 10, perplexity = 2)
[1] "K = 1"
[1] "K = 5"
[1] "K = 100"
[1] "K = n"
> g <- erdos.renyi.game(1000, 0.1)
> tsne = birewire.visual.monitoring.undirected(g, display = F, n.networks = 10, perplexity = 2)
[1] "K = 1"
[1] "K = 5"
[1] "K = 100"
[1] "K = n"
```
### 6.8 `birewire.load.dsg` and `birewire.save.dsg`
The first function reads a SIF DSG from a given path and the second writes a DSG to a given path.
### 6.9 `birewire.induced.bipartite` and `birewire.build.dsg`
These two functions encode the correspondence between a DSG and an ordered couple of bipartite graphs $(B^+, B^-)$. The first takes a SIF object, loaded with `birewire.load.dsg`, and produces a list with the positive and negative bipartite graphs; the second builds a SIF object starting from a list of two bipartite networks.
```r
> data(test_dsg)
> dsg=birewire.induced.bipartite(test_dsg,delimiters=list(negative='-',positive='+'))
> tmp=birewire.build.dsg(dsg,delimiters=list(negative='-',positive='+'))
```
### 6.10 `birewire.rewire.dsg`
Function for generating a rewired version of a given DSG $G$. The parameters are largely the same as those of `birewire.rewire.bipartite`: in this case it is possible to control the number of SSs independently for the positive (`max.iter.pos`) and negative (`max.iter.neg`) parts of $G$.
```r
> dsg2=birewire.rewire.dsg(dsg=dsg)
DONE in 0 seconds
DONE in 0 seconds
> tmp=birewire.build.dsg(dsg2,delimiters=list(negative='-',positive='+'))
```
### 6.11 `birewire.similarity.dsg`
Computes the Jaccard index between two DSGs.
```r
> birewire.similarity.dsg(dsg,dsg2)
[1] 0.1578947
```
### 6.12 `birewire.sampler.dsg`
This function uses the SA to generate a set of $K$ DSG in SIF format drawn from the null model given by an initial DSG. The function creates a main folder (`path` input parameter) and a set of subfolders in order to have maximum 1000 files per folder. See `birewire.sampler.bipartite` for more details.
```r
> ## NOT RUN
> ## birewire.sampler.dsg(dsg,K=10,path='TESTBIREWIREDSG',verbose=F,
> ##                      delimiters=list(negative='-',positive='+'))
> ## unlink('TESTBIREWIREDSG',recursive = T)
```
Since version 3.27.1 it is possible to add a flag in order not to generate positive and negative loops between two nodes (i.e. a simultaneous positive and negative edge between the same pair of nodes). Be aware that the process could be slower, since the generated DSG is checked after the rewiring procedure and saved only if it does not contain such simultaneous positive-negative loops.
```r
> ## NOT RUN
> ## birewire.sampler.dsg(dsg,K=10,path='TESTBIREWIREDSG',verbose=F,
> ##                      delimiters=list(negative='-',positive='+'),check_pos_neg=T)
> ## unlink('TESTBIREWIREDSG',recursive = T)
```
## 7 Directed graphs
Notice that a directed graph can be encoded as a DSG with only a positive (or negative) part. All the routines involving DSGs can therefore also be used for directed graphs, by building a DSG as an R list with just one element named `positive`.
## 8 Example
Here we collect the functionalities of the package in a single example. The output plot of the analysis is shown in Fig. 1 (left side) and the output of the monitoring procedure is displayed in Fig. 1 (right side).
```r
> # NOT RUN
> # ggg <- bipartite.random.game(n1=100,n2=40,p=0.2)
> # For recovering quickly the bound N we can perform a short analysis
> # N = birewire.analysis.bipartite(get.incidence(ggg, sparse=F), max.iter=2, step=1)$N
> # Now we can perform the real analysis
> # res = birewire.analysis.bipartite(get.incidence(ggg, sparse=F), max.iter=10*N, n.networks=10)
> # and monitor the Markov chain
> # tsne = birewire.visual.monitoring.bipartite(ggg, display=T, n.networks=75, sequence=c(1, 10))
> # Now we can generate a null model
> # birewire.sampler.bipartite(ggg, K=10000, path="TESTBIREWIREBIPARTITE")
```


Figure 1: The output plots of `birewire.analysis.bipartite` (left side) and of `birewire.visual.monitoring.bipartite` (right side) relative to the example in section 8. The gradient of the colour from blue to red indicates the position of each sampled network with respect to the others. The starting network (blue) is marked with the text `start`.
## References
An Infrastructure for Adaptive Control of Multi-Agent Systems
Karl Kleinmann, Richard Lazarus, Ray Tomlinson
BBN Technologies
10 Moulton St
Cambridge, MA 02138
{kkleinmann, rlazarus, rtomlinson}@bbn.com
Abstract—In this paper, we present the control infrastructure of the Cougaar distributed agent system that was developed under the DARPA ALP and UltraLog programs. We motivate its design from a control theory perspective and discuss the characteristics of an agent system as the controlled process. These characteristics are arguably the reason why formal methods of control theory are rarely applied in software engineering for agent systems.
1. INTRODUCTION
Large, distributed, multi-agent systems (DMAS) have a huge number of internal states and many degrees of freedom. While these characteristics provide great benefits, like flexibility for system configuration, they also impose a complex multivariable control problem. The control goal is usually not only the optimization of the actual application but also the containment of hardware or software-related failures, which can be viewed as stochastic disturbances. In the area of survivable systems that are designed to operate under warlike conditions, robustness (which addresses intentional network and platform disruptions) and security (which addresses intentional software intrusions) become control goals of equal importance to the primary application (e.g., a planning system).
In this paper, we present the infrastructure elements of the control architecture of Cougaar [1] that address the challenges above. Cougaar is an agent architecture for large-scale DMAS that has been sponsored by DARPA through the former ALP program (1996-2001) and the current UltraLog program (2001-2004). In addition, Cougaar is open source and enjoys a worldwide user community. Under the UltraLog program [2], the Cougaar software is extended to inherently ensure survivability under extremely chaotic and high-stress environments, with particular focus on robustness, security, and scalability.
Survivability is predicated on maintaining the highest quality of service across many dimensions based on mission or application objectives. Hence, it is essential that the agents are aware not only of their own performance, but also of the externally available resources. In addition, adaptive control must encompass optimizing cost functions describing varying emphasis across the multiple quality of service dimensions at various points in the control hierarchy.
As a primary application, UltraLog showcases a military logistics planning and plan-execution system that implements over 500 distinct agents running on more than 100 machines.
Section 2 of this paper discusses DMAS from a control theory perspective, motivated in part by [5]-[8], and suggests why their characteristics constitute a hard and unusual control problem, and what makes DMAS unique as a controlled process. We also point out analogies where control theory and software engineering use different terminologies for the same abstractions.
Section 3 outlines the control objectives of DMAS, using the various levels and types of requirements of the UltraLog application as an example.
Section 4 briefly introduces the Cougaar agent architecture and describes in detail its control infrastructure elements and their interactions.
Section 5 gives some examples for implemented control strategies, one of which comes with the Cougaar open source distribution, and an interpretation of the approach, comparing it with other adaptive control designs.
Section 6 summarizes the current design status and discusses what we hope to accomplish with future research.
2. SOFTWARE AGENT SYSTEMS AS CONTROLLED PROCESSES
Control theory captures the fundamentals for three activities:
- **Design of the control system.** This activity characterizes the system boundaries of the process to be controlled (also called the controlled system or plant), the control inputs by which the behavior of the process can be changed, the structure of the controller that generates these inputs, and the sensor data as input for the controller reflecting current and desired performance.
- **Initialization of control parameters.** The controller's parameters are either derived from an analytic or data-driven model of the controlled process, or based on experiments and heuristics.
- **Tuning of control parameters during operation.** In an adaptive control system, the controller can be continually adjusted to cope with changes of the inherent process behavior over time.
Whereas parts of almost every control loop are implemented in software, in cases where the controlled process is the software system itself, the means of control theory are rarely applied. As [7] points out, “the basic paradigm of control has not found its place as a first-class concept in software engineering.”
There are certain characteristics in DMAS that distinguish them from the examples of controlled processes commonly considered in control theory:
- **Dynamic system boundaries.** DMAS typically have many degrees of freedom, e.g., mobile agents can reside on various nodes over time, new communication channels are added and removed again, agents can be rehydrated elsewhere, and the distribution of the application over the agents can change. This desired flexibility leads to a constantly changing topology and does not allow one to assume constant system boundaries or to partition the system statically.
- **System size.** DMAS can consist of hundreds of agents containing thousands of components to be controlled. Thus, a huge number of internal states need to be measured or monitored by separate instrumentation code. Additional sensor data are generated while monitoring the communication between peers in the network and storing these matrices including their history.
- **Type of cost functions and performance criteria.** As opposed to processes in the physical world, where every change of a control input variable adds cost, many control actions in the logical world have a strongly non-linear impact on the cost function (e.g., up to a certain point and under certain conditions, using more CPU can be done for free). Furthermore, control goals are initially described in a symbolic way, and there is often no analytic way to transform these into numeric values and map them into set points for internal states. Therefore, the set points are mostly step functions, not trajectories.
These characteristics, originating both in the nature of software and the particular design of agent-based systems, make an approach to the control of DMAS based on control theory very complex. Because of the dynamic system boundaries and the system size, it is hard to build a model of the process that is smaller than the DMAS itself. There are no good abstractions that can be analytically derived from internal states and capture the desired behavior. In addition, the number of internal states and their couplings impose a complex multivariable control problem. Since control inputs often lead to structural changes in the system, the system dynamics become nonlinear.
On the other hand, the software system can be used as a perfect model for itself and, given an automated testing environment, control approaches and control parameter variations can be simulated at almost no additional cost. The use of feed-forward controllers often avoids stability problems, with experimentally determined control parameters.
This heuristic approach is often taken in software engineering. In Cougaar, we have created a set of terms for describing control components and measures that possess analogs in traditional control theory. These abstractions include: control input vs. actions or operating modes; sensor input vs. sensor conditions; disturbance vs. stress; controller vs. engine; and control algorithm vs. rules or plays.
3. CONTROL OBJECTIVES IN DMAS
Besides the primary system function, a DMAS has to accommodate various requirements in parallel (e.g., usability, reliability, fidelity, and stability) that become additional control goals in the multidimensional control and optimization problem. Because of the distributed nature of DMAS, these generic requirements take on a special meaning, since communication over wide area networks, memory, CPU resources, and participating platforms are not only variable and unpredictable, but also vulnerable to kinetic or information attacks.
The primary system function of the UltraLog system is the planning and plan-execution of military deployment operations; the control goal is to build a timely plan in the face of varying workloads and system conditions. Besides this logistics application, the system has to accommodate extreme hardware- and software-related failures, motivated by the operational scenario of operating under warlike conditions. These requirements are captured under the functions of robustness and security. The control goal of robustness is to maintain a processing infrastructure despite the loss of processing resources (caused by intentional network or hardware platform disruptions). The control goal of security is to maintain system integrity despite information attacks (intentional software intrusions).
In the control hierarchy, there are two levels specifically designed to achieve these control goals:
- **Application level control.** The control inputs on this level are typically complex actions or sequences of actions composed of control primitives and designed as specific defenses against certain stresses. Examples are variable fidelity processing that requires less computing resources; load balancing by moving agents to different hosts; or reconstituting agents that were residing on destroyed hosts. These control actions are mostly initiated and implemented by manager agents within a local scope, but often have global impact. Since some complex actions can have conflicting impacts, an additional control layer for deconfliction is required that arbitrates the action selection [4].
- **Agent infrastructure level control.** The control inputs on this level are the parameters of the components within an agent, providing the agent with the autonomy to make local decisions. Examples are lowering the rate of status reports or turning on message compression when the local network load is high. In the following section, the agent-level control mechanisms of the Cougaar agent infrastructure are described in detail.
4. THE COUGAAR CONTROL INFRASTRUCTURE
The adaptive control mechanisms described here are part of Cougaar, a 100% Java agent architecture for building large distributed multi-agent systems, comprising around 500,000 lines of code. The prototype application uses over 500 distinct agents distributed over a 5-LAN network of over 100 machines. Cougaar was designed to support data intensive, inherently distributed applications, where application scalability is paramount. Intra-agent communication is accomplished via publish and subscribe to a local blackboard to reduce latency for tightly coupled component interaction. Inter-agent communication transfers locally published objects to targeted recipients to allow wide distribution of loosely coupled interactions. Communities of agents form to manage resources and provide scalable services.
Cougaar has several subsystems for collecting and measuring overlapping sets of performance data, each with different usage requirements and quality of service characteristics. These include the Metrics Service and various domain-specific sensor groups. The instrumentation is dynamic (sensor values are only measured when needed) and built into the architecture [3].
One innovation of Cougaar is its hierarchical component model, based on the JavaBeans API. This model provides unique security and composition properties for Cougaar agents. All internal system functions and application functions are added at configuration or run time into a Cougaar agent as components, where one or more binders wrap each component to mediate and secure component access to system functions.
Each Cougaar node (agent container, one per JVM) contains a Tomcat web server that provides access to the agents' blackboards to external clients such as user interfaces and status monitors. Data access is provided by servlets, dynamically loadable plugins provided by the client. These servlets have full access to agent state and load/execute only when invoked. For example, the CSMART UI for Cougaar configuration uses servlets both for system control and runtime monitoring.
Under the UltraLog project, access control systems have been added as Cougaar components (binders) to limit component access to agent-internal data (restricted subscriptions), e.g., to restrict servlet access to Blackboard data and to restrict messaging between agents. The security subsystem can both provide performance metrics and use such metrics at runtime to tune access control policies. This security system, thereby, adaptively controls access to system data using the Cougaar agent-level control infrastructure based on feedback from performance measurements.
Adaptive control in Cougaar can be implemented using the inherent Adaptivity Engine (AE) mechanisms and associated components. Cougaar services are expected to have Operating Modes: modes of operation that provide increased Quality of Service (QoS) with increased resource consumption or with particular dependencies on other QoS providers. The AE provides the mechanisms by which control actions (Plays) can specify QoS in multiple dimensions (Operating Modes) based on measured operating state (Conditions). Figure 1 illustrates these components and their associated data flow.
The following discusses the key components, services, and objects, as well as their interactions, of the Cougaar agent-level control infrastructure. These include:
- **Operating Modes.** An operating mode is created and published by a component representing one control input dimension (out of many) of the component. An Operating Mode is a data structure with a list of ranges of values that it is allowed to have, as well as a current value. They are the control inputs (“knobs”) by which the component can be controlled (“tuned”).
- **Conditions.** Conditions are the generalized form of any (sensor) input information used by the controllers. Sensors can publish conditions that reflect their run-time performance measurements, and other components can aggregate measurements and state information to provide QoS values.
- **Plays, Playbook, Playbook Manager.** Plays represent the control laws; they specify restrictions or constraints on one or more Operating Modes and the Conditions under which those constraints are to be applied. A Playbook has a list of Plays that are tested in succession for applicability to the current conditions. The Playbook manager is a component that maintains the Playbook and provides the services needed to manipulate and use the Playbook.
- **TechSpecs.** TechSpecs are published by components as a high-level model (description) of their behavior. They allow the controller (AE) to reason about and predict the consequences of alternate Operating Mode settings of the component.
- **Adaptivity Engine.** Each agent contains a component named Adaptivity Engine that acts as the controller for that agent and certain other external components. The Adaptivity Engine responds to changes in Conditions and modifications of the Playbook, evaluates the Plays, and sets new Operating Modes accordingly. By observing the system behavior and using the TechSpecs as a system model, one can use the Adaptivity Engine and associated components to implement an adaptive control approach that optimizes system behavior in accordance with specified objectives (Plays in the Playbook).
- **Operating Mode Policies and Operating Mode Policy Manager.** Higher-level system policies (set dynamically by other agents, or by a human operator) may restrict valid plays that the agent may enforce. In this way, Cougaar agents use sensors to adapt to changes in the environment and to optimize across application goals. Policies are communicated between agent blackboards via the Cougaar relay mechanism. As Operating Mode Policies can be disseminated across agents, it is the mechanism by which one can implement hierarchical control within a Cougaar agent society.
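As a rough illustration of how these pieces fit together, the following Python sketch (hypothetical; this is not the actual Cougaar Java API, and all names are invented) evaluates a playbook: each play pairs a predicate over Conditions with a set of Operating Mode settings, plays are tested in succession, and every applicable play constrains the modes:

```python
def evaluate_playbook(playbook, conditions):
    """Return the Operating Mode settings selected by the applicable plays.

    playbook   -- list of (predicate, settings) pairs, tested in order;
                  a later applicable play overrides earlier settings
    conditions -- dict of sensor Conditions, e.g. {"falling_behind": "medium"}
    """
    operating_modes = {}
    for predicate, settings in playbook:
        if predicate(conditions):             # play applies under these Conditions
            operating_modes.update(settings)  # constrain the Operating Modes
    return operating_modes


# Plays loosely modeled on Table 1 of this paper:
playbook = [
    (lambda c: c["falling_behind"] == "none",
     {"high_fidelity": "end of period", "low_fidelity": "none"}),
    (lambda c: c["falling_behind"] == "medium",
     {"high_fidelity": "6 days", "low_fidelity": "7 days to end of period"}),
    (lambda c: c["falling_behind"] == "severe",
     {"high_fidelity": "6 days", "low_fidelity": "none"}),
]
```

The real AE additionally validates the selected values against each Operating Mode's allowed ranges and against Operating Mode Policies; both are omitted here for brevity.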
In addition to these conceptual aspects, there are various technical implementation aspects that are less relevant for the control issues discussed in this paper. Examples are access to plays, conditions, and operating modes via service providers; the processing order of plays; and the detection of missing or overly constrained operating modes.
5. EXAMPLES AND INTERPRETATION OF THE CONTROL APPROACH
This section provides some examples that demonstrate the effective control of a DMAS using these mechanisms. More specifically, we have included two examples. Example 1 is a “toy” control problem that was developed as a “plumbing” test and demonstration of the Cougaar adaptivity engine and its associated components (and is included in the open source Cougaar distribution). Example 2 is one application of controlled system adaptivity that has been implemented for our military logistics application.
**Example 1: Demonstration of the Cougaar Adaptivity Engine**
Example 1 is a two-agent system consisting of a task generator agent and a provider agent. Using a single play in its playbook, the adaptivity engine of the provider agent can modify the way tasks are processed within the provider agent. The sensor conditions are the available CPU resources (inverse to the CPU load) of the provider agent's host and the rate at which new tasks are arriving from the generator agent at the provider. The Operating Mode used in this playbook tunes the algorithm by which incoming tasks are allocated by the provider agent's allocator plugin. The quality of the allocations done by this plugin depends on how many iterations the allocation algorithm can afford. The play connects the two conditions by dividing the task rate by the CPU value, and maps this input to the Operating Mode determining the number of iteration cycles in the allocator plugin. The play represents the heuristic that if the number of incoming tasks is low and enough CPU resources are available, the task allocation can be done more precisely using many iterations of the allocation algorithm. On the other hand, if the number of incoming tasks is high or limited CPU resources are available, the allocation should be done fast using fewer computing resources. Figure 2 shows this nonlinear control algorithm implemented as a play in the playbook.
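The heuristic just described can be sketched as a simple nonlinear mapping. This Python illustration uses invented thresholds and iteration counts; the actual play shipped with the Cougaar distribution is configured differently:

```python
def allocation_iterations(task_rate, cpu_available, max_iters=10):
    """Map the load ratio task_rate / cpu_available to the number of
    iterations the allocator plugin may spend per allocation:
    low load -> many iterations (precise), high load -> few (fast)."""
    load = task_rate / max(cpu_available, 1e-9)  # guard against zero CPU
    if load < 1.0:
        return max_iters        # plenty of headroom: allocate precisely
    if load < 5.0:
        return max_iters // 2   # moderate load: cheaper allocation
    return 1                    # overloaded: fastest possible allocation
```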
In our military logistics application, the completion of tasks varies in utility to military planners based on 2 basic factors: task planning fidelity and task planning horizon. These are simplifying assumptions that we have made, based on our simplified planning model that is used for researching system survivability. Of course, a real planning system would have to satisfy a larger set of planning requirements. Our objective is to maximize utility under system conditions of varying workload or processing resource availability. Assuming that high fidelity tasks require a high level of computational resources, but provide a high utility, we must trade off computing high and low fidelity tasks to achieve the maximum utility.

To further simplify our control problem, we have quantized control regions into 2 levels of task fidelity (high and low fidelity) and 2 planning horizons (the next 6 days and the time period thereafter). These control regions correspond to the Operating Modes of the logistics application components. For each of these 4 control regions, we have specified unique utility curves. Figures 4 and 5 illustrate the utility functions for each planning horizon, respectively:

1) high fidelity within the 6 day planning horizon
2) low fidelity within the 6 day planning horizon (no utility)
3) high fidelity within the subsequent planning horizon
4) low fidelity within the subsequent planning horizon

In order to maximize utility with respect to system load, we need a measure of system load to drive our control actions. To quantify system load, we determine a quantized "falling behind" measure that is driven by a measure of task backlog. This falling behind measure is quantized into 3 levels: severe, medium and none, and provides the Condition that drives Adaptivity Engine play selection.
Once our Operating Modes and Conditions are defined, we construct a Playbook that represents the choice of control actions for the Adaptivity Engine. As a first attempt, we constructed a set of Plays based on the defined planning horizons. Table 1 describes this Playbook. Experimental
results were used to define the thresholds for the falling behind measures that maximized our utility score.


Table 1: A Playbook Example
<table>
<thead>
<tr>
<th>Play</th>
<th>Condition</th>
<th>Operating Modes</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>None</td>
<td>high fidelity = end of period</td>
</tr>
<tr>
<td></td>
<td></td>
<td>low fidelity = none</td>
</tr>
<tr>
<td>2</td>
<td>Medium</td>
<td>high fidelity = 6 days</td>
</tr>
<tr>
<td></td>
<td></td>
<td>low fidelity = 7 days to end of period</td>
</tr>
<tr>
<td>3</td>
<td>Severe</td>
<td>high fidelity = 6 days</td>
</tr>
<tr>
<td></td>
<td></td>
<td>low fidelity = none</td>
</tr>
</tbody>
</table>
Utilizing the Conditions, Operating Modes, and Plays described above, we were able to maximize the utility of our application. Based on dynamic system measurements under varying system stresses (affecting available computational resources), the adaptive controller performed well, selecting the appropriate control action for the measured system condition. This initial implementation represents our first attempt at closed loop control of a DMAS using the Adaptivity Engine mechanism. As part of our continuing research, we plan to use the utility functions and a more granular measure of system load to construct a Playbook as a closed form solution of the utility functions.
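As a sketch, the play selection of Table 1 can be expressed as a simple lookup from the quantized "falling behind" Condition to the Operating Modes. The structure below is illustrative only, not the actual Cougaar Playbook API:

```python
# Illustrative encoding of Table 1: each quantized Condition maps to
# the Operating Modes (planning horizon per task fidelity level).
PLAYBOOK = {
    "none":   {"high_fidelity": "end of period",
               "low_fidelity": None},
    "medium": {"high_fidelity": "6 days",
               "low_fidelity": "7 days to end of period"},
    "severe": {"high_fidelity": "6 days",
               "low_fidelity": None},
}

def select_play(falling_behind):
    """Return the Operating Modes for the measured Condition."""
    return PLAYBOOK[falling_behind]
```

In the real system, the Adaptivity Engine performs this selection continuously as the falling-behind measure is re-evaluated.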
6. CONCLUSION
We discussed several characteristics of DMAS that make them a special case of a controlled process, to which the conventional means of control theory are hard to apply. We argued that, instead, software engineering uses a more experimentally driven approach to control, often leading to rule-based controllers parameterized by heuristics.
From a control theory perspective, the control infrastructure presented in the previous section has the following properties:
- The architecture allows both feedforward and feedback control, depending on the selection of conditions and operating modes.
- The modification of the control algorithm (Plays), either by using a model (TechSpecs) or by policies constraining existing plays, constitutes an adaptive control system.
- The Adaptivity Engine in conjunction with the Playbook constitutes a rule-based controller. Theoretically, the control algorithm could be linear or nonlinear; as soon as policies impose constraints, it becomes nonlinear.
The Cougaar open source agent architecture provides a rich set of sensor instrumentation and a control infrastructure that is easy to adapt to various applications. Our results obtained under the UltraLog program have shown that this infrastructure is suited to support the various control goals of a survivable system.
However, in this area of multivariable control, there are still many open issues to be solved by our future research. Examples are the proper distribution of control knowledge in order to avoid single points of failure, the systematic optimization of control parameters, and the wider use of small component models (TechSpecs) that are currently still in the early stages of development.
7. ACKNOWLEDGEMENTS
This work was sponsored, in part, by the DARPA UltraLog contract #MDA972-01-C-0025. These ideas represent contributions by the many individuals who participated in the DARPA ALP and UltraLog programs.
8. REFERENCES
Clean Code Cheat Sheet
Why Clean Code?
Code is clean if it can be understood easily - by everyone on the team. With understandability comes readability, extensibility, and maintainability. All things needed to keep a project going over a long time without accumulating a large amount of technical debt.
- Smells
- Rigidity: The software is difficult to change. A small change causes a cascade of subsequent changes.
- Fragility: The software breaks in many places due to a single change.
- Immobility: You cannot reuse parts of the code in other projects because of involved risks and high effort.
- Viscosity of Design: Taking a shortcut and introducing technical debt requires less effort than doing it right.
- Viscosity of Environment: Building, testing, and other tasks take a long time. Therefore, these activities are not executed properly by everyone and technical debt is introduced.
- Needless Complexity: The design contains elements that are currently not useful. The added complexity makes the code harder to comprehend.
- Needless Repetition: Code contains lots of code duplication: exact code duplications or design duplicates (doing the same thing in a different way).
- Opacity: The code is hard to understand. Therefore, any change takes additional time to first reengineer the code and is more likely to result in defects due to not understanding the side effects.
- Class Design Principles
- Single Responsibility Principle (SRP): A class should have one, and only one, reason to change.
- Open Closed Principle (OCP): You should be able to extend a classes behaviour without modifying it.
- Liskov Substitution Principle (LSP): Derived classes must be substitutable for their base classes.
- Dependency Inversion Principle (DIP): Depend on abstractions, not on concretions.
- Interface Segregation Principle (ISP): Make fine-grained interfaces that are client-specific.
- Classes Should Be Small: Smaller classes are easier to grasp. Classes should be smaller than about 100 lines of code. Otherwise, it is hard to spot how the class does its job and it probably does more than a single job.
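A minimal sketch of the Dependency Inversion Principle, using hypothetical class names: the high-level ReportGenerator depends only on the Writer abstraction, so any substitutable implementation works (which also illustrates the Liskov Substitution Principle).

```python
from abc import ABC, abstractmethod

class Writer(ABC):                        # the abstraction
    @abstractmethod
    def write(self, text: str) -> None: ...

class ConsoleWriter(Writer):              # one concretion
    def write(self, text: str) -> None:
        print(text)

class InMemoryWriter(Writer):             # another concretion, handy in tests
    def __init__(self):
        self.lines = []
    def write(self, text: str) -> None:
        self.lines.append(text)

class ReportGenerator:
    """Depends on the Writer abstraction, never on a concretion (DIP)."""
    def __init__(self, writer: Writer):
        self.writer = writer
    def generate(self, items):
        for item in items:
            self.writer.write(f"- {item}")
```

Swapping ConsoleWriter for InMemoryWriter requires no change to ReportGenerator, which is exactly what depending on an abstraction buys you.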
- Package Cohesion
- Release Reuse Equivalency Principle (RREP): The granularity of reuse is the granularity of release.
- Common Closure Principle (CCP): Classes that change together are packaged together.
- Common Reuse Principle (CRP): Classes that are used together are packaged together.
- Package Coupling
- Acyclic Dependencies Principle (ADP): The dependency graph of packages must have no cycles.
- Stable Dependencies Principle (SDP): Depend in the direction of stability.
- Stable Abstractions Principle (SAP): Abstraction increases with stability
- General Coding Conventions
- Coding, architecture, design guidelines (check them with tools)
- Keep it Simple, Stupid (KISS): Simpler is always better. Reduce complexity as much as possible.
- Boy Scout Rule: Leave the campground cleaner than you found it.
- Root Cause Analysis: Always look for the root cause of a problem. Otherwise, it will get you again and again.
- Multiple Languages in One Source File
- C#, Java, Javascript, XML, HTML, XAML, German...
- Environments
- Project Build Requires Only One Step
- Executing Tests Requires Only One Step
- Source Control
- Always use a source control system.
- Continuous integration: Assure integrity with Continuous integration
- Over Configurability: Prevent configuration just for the sake of it — because nobody can decide how it should be. Otherwise, this will result in overly complex, unstable systems.
- Feature Layers: Do not add functionality on top, but simplify overall.
- Dependencies
- Make Logical Dependencies Physical: If one module depends upon another, that dependency should be physical, not just logical. Don’t make assumptions.
- Base Classes Depending On Their Derivatives: Base classes should work with any derived class and must not know about their derivatives.
- Too Much Information: Minimise interface to minimise coupling
Writing clean code from the start in a project is an investment in keeping the cost of change as constant as possible throughout the lifecycle of a software product. Therefore, the initial cost of change is a bit higher when writing clean code (grey line) than quick and dirty programming (black line), but is paid back sooner. Especially if you keep in mind that most of the cost has to be paid during maintenance of the software. Unclean code results in technical debt that increases over time if not refactored into clean code. There are other reasons leading to Technical Debt such as bad processes and lack of documentation, but unclean code is a major driver. As a result, your ability to respond to changes is reduced (red line).
In Clean Code, Bugs Cannot Hide
Most software defects are introduced when changing existing code. The reason behind this is that the developer changing the code cannot fully understand the changes made. Clean code minimises the risk of introducing defects by making the code as easy to understand as possible.
- Principles
- Loose Coupling
- Two classes, components or modules are coupled when at least one of them uses the other. The less these items know about each other, the looser they are coupled.
- High Cohesion:
- Cohesion is the degree to which elements of a whole belong together. Methods and fields in a single class and classes of a component should have high cohesion. High cohesion in classes and components results in simpler, more easily understandable code structure and design.
- Change is Local:
- When a software system has to be maintained, extended and changed for a long time, keeping change local reduces involved costs and risks. Keeping change local means that there are boundaries in the design which changes do not cross.
- It is Easy to Remove:
- We normally build software by adding, extending or changing features. However, removing elements is important so that the overall design can be kept as simple as possible. When a block gets too complicated, it has to be removed and replaced with one or more simpler blocks.
- Fields Not Declaring State:
- Fields holding data that does not belong to the state of the instance but are used to hold temporary data. Use local variables or extract to a class abstracting the performed action.
- Feature Envy:
- The methods of a class should be interested in the variables and functions of the class they belong to, not in those of other classes. A method that mostly manipulates another class's data belongs in that class.
- Artificial Coupling:
- Things that don’t depend upon each other should not be artificially coupled.
- Temporal Coupling:
- If, for example, the order of some method calls is important, then make sure that they cannot be called in the wrong order.
- Transitive Navigation:
- Aka Law of Demeter: write shy code that navigates only to objects it directly holds.
- Naming:
- Choose Descriptive / Unambiguous Names
- Names have to reflect what a variable, field, property stands for. Names have to be precise.
- Choose Names at Appropriate Level of Abstraction
- Choose names that reflect the level of abstraction of the class or method you are working in.
- Name Interfaces After Functionality They Abstract
- The name of an interface should be derived from its usage by the client, such as IStream.
- Name Classes After How They Implement Their Interfaces
- The name of a class should reflect how it fulfills the functionality provided by its interface(s), such as MemoryStream : IStream
- Name Methods After What They Do
- The name of a method should describe what is done, not how it is done.
- Use Long Names for Long Scopes
- Names have to reflect the entire functionality.
- Standard Nomenclature Where Possible
- Don’t invent your own language when there is a standard.
- Encodings in Names
- No prefixes, no type/scope information
Understandability
Consistency
- If you do something a certain way, do all similar things in the same way.
- Same variable name for same concepts, same naming pattern for corresponding concepts.
Use Explanatory Variables
- Use locals to give steps in algorithms names.
Encapsulate Boundary Conditions
- Boundary conditions are hard to keep track of. Put the processing for them in one place, e.g. nextLevel = level + 1.
Prefer Dedicated Value Objects to Primitive Types
- Instead of passing primitive types like strings and integers, use dedicated value objects, e.g. AbsolutePath instead of string.
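A value object of this kind might look like the following sketch. AbsolutePath is the cheat sheet's own example; the implementation details are assumptions:

```python
import os

class AbsolutePath:
    """Hypothetical value object replacing a bare string: the
    constructor enforces the invariant once, so every consumer
    can rely on it without re-checking."""
    def __init__(self, raw: str):
        if not os.path.isabs(raw):
            raise ValueError(f"not an absolute path: {raw!r}")
        self.value = raw

    def __str__(self) -> str:
        return self.value
```

The payoff is that a method taking an AbsolutePath can never receive a relative path, which a plain string parameter cannot guarantee.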
Poorly Written Comment
- Comment does not add any value (redundant to code), is not well formed, or uses incorrect grammar or spelling.
Obscured Intent
- Too dense algorithms that lose all expressiveness.
Obvious Behaviour Is Unimplemented
- Violations of "the Principle of Least Astonishment". What you expect is what you get.
Hidden Logical Dependency
- A method can only work when invoked correctly depending on something else in the same class, e.g. a DeleteItem method must only be called if a CanDeleteItem method returned true, otherwise it will fail.
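One way to remove such a hidden dependency is to let the method check its own precondition, as in this illustrative sketch (names and semantics are hypothetical):

```python
class ItemStore:
    """Sketch: delete_item checks the precondition itself instead of
    relying on every caller to invoke can_delete_item first."""
    def __init__(self):
        self._items = {}
        self._locked = set()

    def add(self, key, value):
        self._items[key] = value

    def lock(self, key):
        self._locked.add(key)

    def can_delete_item(self, key) -> bool:
        return key in self._items and key not in self._locked

    def delete_item(self, key) -> bool:
        if not self.can_delete_item(key):  # guard inside: no hidden contract
            return False
        del self._items[key]
        return True
```

Callers may still query can_delete_item for UI purposes, but calling delete_item out of order can no longer corrupt the store.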
Methods
Methods Should Do One Thing
- Loops, exception handling, encapsulate in sub-methods.
Methods Should Descend 1 Level of Abstraction
- The statements within a method should all be written at the same level of abstraction, which should be one level below the operation described by the name of the function.
Method with Too Many Arguments
- Prefer fewer arguments. Maybe functionality can be outsourced to a dedicated class that holds the information in fields.
Method with Out/Ref Arguments
- Prevent usage. Return complex object holding all values, split into several methods. If your method must change the state of something, have it change the state of the object it is called on.
Selector / Flag Arguments
- public void Foo(bool flag)
- Split the method into several independent methods that can be called from the client without the flag.
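A small illustrative sketch of splitting a flag argument (names are hypothetical):

```python
# Smell: one method whose behaviour forks on a boolean selector.
def render(text, as_html):
    return f"<p>{text}</p>" if as_html else text

# Fix: two intention-revealing methods, no flag at the call site.
def render_plain(text):
    return text

def render_html(text):
    return f"<p>{text}</p>"
```

Call sites now read render_html(text) instead of render(text, True), so the intent is visible without looking up what the flag means.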
Inappropriate Static
- Static method that should be an instance method
Source Code Structure
Vertical Separation
- Variables and methods should be defined close to where they are used.
- Local variables should be declared just above their first usage and should have a small vertical scope.
Nesting
- Nested code should be more specific or handle less probable scenarios than unnested code.
Structure Code into Namespaces by Feature
- Keep everything belonging to the same feature together. Don’t use namespaces communicating layers. A feature may use another feature; a business feature may use a core feature like logging.
Conditionals
Encapsulate Conditionals
- if (this.ShouldBeDeleted(timer)) is preferable to if (timer.HasExpired && !timer.IsRecurrent).
Positive Conditionals
- Positive conditionals are easier to read than negative conditionals.
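Both guidelines can be sketched together; the Timer class and the deletion rule below are illustrative assumptions:

```python
from datetime import datetime, timedelta

class Timer:
    def __init__(self, expiry, recurrent=False):
        self.expiry = expiry
        self.recurrent = recurrent

    @property
    def has_expired(self) -> bool:
        return datetime.now() >= self.expiry

def should_be_deleted(timer) -> bool:
    """Encapsulated, positively phrased conditional: callers write
    `if should_be_deleted(timer):` instead of re-deriving the rule."""
    return timer.has_expired and not timer.recurrent
```

If the deletion rule ever changes, only this one function changes, not every call site that used to inline the boolean expression.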
Useless Stuff
- Dead Comment, Code
- Delete unused things. You can find them in your version control system.
Clutter
- Code that is not dead but does not add any functionality.
Inappropriate Information
- Comment holding information better held in a different kind of system: product backlog, source control. Use code comments for technical notes only.
Maintainability Killers
Duplication
- Eliminate duplication. Violation of the "Don’t repeat yourself" (DRY) principle.
Magic Numbers / Strings
- Replace Magic Numbers and Strings with named constants to give them a meaningful name when meaning cannot be derived from the value itself.
Enums (Persistent or Defining Behaviour)
- Use reference codes instead of enums if they have to be persisted. Use polymorphism instead of enums if they define behaviour.
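Replacing a behaviour-defining enum with polymorphism might look like this sketch; the product types and shipping rate are hypothetical:

```python
# Instead of switching on an enum value to pick behaviour, let each
# variant carry its behaviour (Open Closed Principle: adding a new
# product type needs no change to existing code).
class Product:
    def shipping_cost(self, weight: float) -> float:
        raise NotImplementedError

class DigitalProduct(Product):
    def shipping_cost(self, weight: float) -> float:
        return 0.0                       # nothing to ship

class PhysicalProduct(Product):
    RATE_PER_KG = 2.5                    # hypothetical rate
    def shipping_cost(self, weight: float) -> float:
        return weight * self.RATE_PER_KG
```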
Exception Handling
Catch Specific Exceptions
- Catch exceptions as specific as possible. Catch only the exceptions for which you can react in a meaningful manner.
Catch Where You Can React in a Meaningful Way
- Only catch exceptions when you can react in a meaningful way. Otherwise, let someone up in the call stack react to it.
Use Exceptions instead of Return Codes or null
- In an exceptional case, throw an exception when your method cannot do its job. Don’t accept or return null. Don’t return error codes.
Fail Fast
- Exceptions should be thrown as early as possible after detecting an exceptional case. This helps to pinpoint the exact location of the problem by looking at the stack trace of the exception.
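The two guidelines above can be sketched together: a lookup that fails fast with a specific exception instead of returning null or an error code (names are hypothetical):

```python
class AccountNotFoundError(Exception):
    pass

def find_account(accounts, account_id):
    """Fail fast with a specific exception instead of returning None
    or an error code; the stack trace then points at the real cause."""
    if account_id not in accounts:
        raise AccountNotFoundError(account_id)
    return accounts[account_id]
```

Callers that can react meaningfully catch AccountNotFoundError specifically; all others simply let it propagate up the call stack.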
Using Exceptions for Control Flow
- Using exceptions for control flow performs badly, is hard to understand, and makes the handling of real exceptional cases harder.
Swallowing Exceptions
- Exceptions can be swallowed only if the exceptional case is completely resolved after leaving the catch block. Otherwise, the system is left in an inconsistent state.
From Legacy Code to Clean Code
Always have a Running System
- Change your system in small steps, from a running state to a running state.
1) Identify Features
- Identify the existing features in your code and prioritise them according to how relevant they are for future development (likelihood and risk of change).
2) Introduce Boundary Interfaces for Testability
- Introduce interfaces at the feature's boundaries so that unwanted dependencies can be simulated and the feature can be tested in isolation.
3) Write Feature Acceptance Tests
- Cover a feature with Acceptance Tests to establish a safety net for refactoring.
4) Identify Components
- Within a feature, identify the components used to provide the feature. Prioritise components according to relevance for future development (likelihood and risk of change).
5) Refactor Interfaces between Components
- Refactor (or introduce) interfaces between components so that each component can be tested in isolation of its environment.
6) Write Component Acceptance Tests
- Cover the features provided by a component with Acceptance Tests.
7) Decide for Each Component
- Refactor, Reengineer, Keep
- Decide for each component whether to refactor, reengineer or keep it.
8a) Refactor Component
- Redesign classes within the component and refactor step by step (see Refactoring Patterns). Add unit tests for each newly designed class.
8b) Reengineer Component
- Use ATDD and TDD (see Clean ATDD/TDD cheat sheet) to re-implement the component.
8c) Keep Component
- If you anticipate only few future changes to a component and the component had few defects in the past, consider keeping it as it is.
Refactoring Patterns
Reconcile Differences – Unify Similar Code
- Change names of code stepwise until they are identical.
Isolate Change
- First, isolate the code to be refactored from the rest. Then refactor. Finally, undo isolation.
Migrate Data
- Move from one representation to another by temporary duplication of data structures.
Temporary Parallel Implementation
- Refactor by introducing a temporary parallel implementation of an algorithm. Switch one caller after the other. Remove old isolation when no longer needed.
Demilitarized Zone for Components
- Introduce an internal component boundary and push everything unwanted outside of the internal boundary into the demilitarized zone between component interface and internal boundary. Then refactor the component interface to match the internal boundary and eliminate the demilitarized zone.
How to Learn Clean Code
Pair Programming
- Two developers solving a problem together at a single workstation. One is the driver, the other is the navigator. The driver is responsible for writing the code. The navigator is responsible for keeping the solution aligned with the architecture, the coding guidelines and looks at where to go next (e.g. which test to write next). Both challenge their ideas and approaches to change.
Commit Reviews
- A developer walks a peer developer through all code changes prior to committing (or pushing) the changes to the version control system. The peer developer checks the code against clean code guidelines and design guidelines.
Coding Dojo
- In a Coding Dojo, a group of developers come together to exercise their skills. Two developers solve a problem (kata) in pair programming. The rest observe. After 10 minutes, the group rotates to build a new pair. The observers may critique the current solution, but only when all tests are green.
Bibliography
Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin
Legend:
- DO: recommended practices to follow.
- DON'T: smells and anti-patterns to avoid.
Clean ATDD/TDD
Incorrect Behaviour at Boundaries
– Always unit test boundaries. Do not assume behaviour.
Test Naming and Structure
– Name the test assembly after the production assembly plus a ".Test" suffix.
– Structure the tests always by AAA (Arrange, Act, Assert). Never mix these three blocks.
Development Approaches
– TDD – Test Driven Development: let failing tests drive the production code.
– ATDD – Acceptance Test Driven Development: specify a feature with acceptance tests before implementing it.
– DDT – Defect Driven Testing: write a unit test that reproduces the defect before fixing it.
– POUTing – Plain Old Unit Testing: write tests after the code. You probably do not want to test drive everything; use POUT to increase sanity.
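An Arrange, Act, Assert test might look like this minimal sketch; the Account class is a hypothetical testee:

```python
class Account:
    """Hypothetical testee."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def test_deposit_increases_balance():
    # Arrange: set up the testee and its inputs
    account = Account(balance=100)

    # Act: perform exactly one action on the testee
    account.deposit(25)

    # Assert: verify the observable outcome
    assert account.balance == 125
```

Keeping the three blocks separate makes it obvious what is setup, what is the behaviour under test, and what is being verified.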
Faking (Stubs, Fakes, Spies, Mocks …)
– Use fakes to simulate all dependencies of the testee.
Faking Framework
– Use a dynamic fake framework for fakes that show different behaviour in different test scenarios (little behaviour reuse).
Manually Written Fakes
– Use manually written fakes when they can be used in several tests and they have only little changed behaviour in these scenarios (behaviour reuse).
Mixing Stubbing and Expectation Declaration
– Make sure that the test follows the arrange, act, assert structure when using fakes. Do not mix setting up stubs (so that the test can run) with declaring expectations (on what the testee should do) in the same code block.
Checking Fakes instead of Testee
– Tests that do not check the testee but values returned by fakes. Normally due to excessive fake usage.
Excessive Fake Usage
– If your test needs a lot of mocks or mock setup, then consider splitting the testee into several classes or provide an additional abstraction between your testee and its dependencies.
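A manually written fake (here, a spy) might look like this sketch; the mailer and registration classes are hypothetical:

```python
class MailerSpy:
    """Hand-written spy standing in for a real mail gateway; it
    records calls so the test can verify the testee's behaviour."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

class Registration:
    """Hypothetical testee with its dependency injected."""
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")
```

Because the spy implements the same send interface, it can be reused across many registration tests with no faking framework involved.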
Unit Test Principles
Fast
– Unit tests have to be fast in order to be executed often. Fast means much less than a second.
Isolated
– Clear where the failure happened. No dependency between tests (random order).
Repeatable
– No assumed initial state, nothing left behind, no dependency on external services that might be unavailable (databases, file system ...).
Self-Validating
– No manual test interpretation or intervention. Red or green!
Timely
– Tests are written at the right time (TDD, DDT, POUTing).
Unit Test Smells
Test Not Testing Anything
– Passing test that at first sight appears valid but does not test the testee.
Test Needing Excessive Setup
– A test that needs dozens of lines of code to set up its environment. This noise makes it difficult to see what is really tested.
Too Large Test / Assertions for Multiple Scenarios
– A valid test that is, however, too large. Reasons can be that this test checks for more than one feature or the testee does more than one thing (violation of Single Responsibility Principle).
Checking Internals
– A test that accesses internals (private/protected members) of the testee directly (Reflection). This is a refactoring killer.
Test Only Running on Developer’s Machine
– A test that depends on the development environment and fails elsewhere. Use continuous integration to catch such tests as soon as possible.
Overspecified Test
– A test that checks more than it is dedicated to. The test fails whenever something changes that it checks unnecessarily. Especially probable when fakes are involved or when checking for item order in unordered collections.
Irrelevant Information
– Test contains information that is not relevant to understand it.
Chatty Test
– A test that fills the console with text − probably used once to manually check for something.
Test Swallowing Exceptions
– A test that catches exceptions and lets the test pass.
Test Not Belonging in Host Test Fixture
– A test that tests a completely different testee than all other tests in the fixture.
Obsolete Test
– A test that checks something no longer required in the system. May even prevent clean-up of production code because it is still referenced.
Hidden Test Functionality
– Test functionality hidden in either the SetUp method, base class or helper class. The test should be clear by looking at the test method only − no initialization or asserts somewhere else.
Bloated Construction
– The construction of dependencies and arguments used in calls to the testee makes the test hard to read. Extract construction to helper methods that can be reused.
Unclear Fail Reason
– Split the test or use assertion messages.
Conditional Test Logic
– Tests should not have any conditional test logic because it’s hard to read.
Test Logic in Production Code
– Tests depend on special logic in production code.
Erratic Test
– Sometimes passes, sometimes fails due to leftovers or environment.
TDD Principles
A Test Checks One Feature
– A test checks exactly one feature of the testee. That means that it tests all things included in this feature but nothing more. This includes probably more than one call to the testee. This way, the test serves as samples and documentation of the usage of the testee.
Tiny Steps
– Make tiny little steps. Add only a little code in test before writing the required production code. Then repeat. Add only one Assert per step.
Keep Tests Simple
– Whenever a test gets complicated, check whether you can split the testee into several classes (Single Responsibility Principle).
Prefer State Verification to Behaviour Verification
– Use behaviour verification only if there is no state to verify.
Test Domain Specific Language
– Use test DSLs to simplify reading tests: helper methods, classes.
TDD Process Smells
Using Code Coverage as a Goal
– Use code coverage to find missing tests, but don’t use it as a driving tool. Otherwise, the result could be tests that increase code coverage but not certainty.
No Green Bar in the last ~10 Minutes
– Make small steps to get feedback as fast and frequent as possible.
Not Running Test Before Writing Production Code
– Only if the test fails is new code required. Additionally, if the test surprisingly does not fail, make sure the test is correct.
Not Spending Enough Time on Refactoring
– Refactoring is an investment in the future. Readability, changeability and extensibility will pay back.
Skipping Something Too Easy to Test
– Don’t assume, check it. If it is easy, then the test is even easier.
Skipping Something Too Hard to Test
– Make it simpler, otherwise bugs will hide in there and maintainability will suffer.
Acceptance Test Driven Development
– Acceptance tests check for the required functionality. Let them guide your TDD
User Feature Test
– An acceptance test is a test for a complete user feature from top to bottom that provides business value.
Automated ATDD
– Use automated Acceptance Test Driven Development for regression testing and executable specifications.
Component Acceptance Tests
– Write acceptance tests for individual components or subsystems so that these parts can be combined freely without losing test coverage.
Simulate System Boundaries
– Simulate system boundaries like the user interface, databases, file system and external services to speed up your acceptance tests and be able to check exceptional cases (e.g. a full hard disk). Use system tests to check the boundaries.
Acceptance Test Spree
– Do not write acceptance tests for every possibility. Write acceptance tests only for real scenarios. The exceptional and theoretical cases can be covered more easily with unit tests.
Red Bar Patterns
One Step Test
– Pick a test you are confident you can implement and which maximises the learning effect (e.g. impact on design).
Partial Test
– Write a test that does not fully check the required behaviour, but brings you a step closer to it. Then use Extend Test below.
Extend Test
– Extend an existing test to better match real-world scenarios.
Another Test
– If you think of new tests, then write them on the TO DO list and don’t lose focus on the current test.
Learning Test
– Write tests against external components to make sure they behave as expected.
Green Bar Patterns
Fake It (‘til You Make It)
– Return a constant to get first test running. Refactor later.
Triangulate – Drive Abstraction
– Write test with at least two sets of sample data. Abstract implementation on these.
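Both patterns in one minimal Python sketch (the `add` example is hypothetical): first a hard-coded constant fakes the single sample, then a second sample data set forces the real abstraction.

```python
# Fake It: the first test passes with a hard-coded constant.
def add_v1(a, b):
    return 4          # fakes the first sample: add(2, 2) == 4

assert add_v1(2, 2) == 4

# Triangulate: a second sample data set kills the fake and
# drives out the real abstraction.
def add_v2(a, b):
    return a + b      # the constant can no longer satisfy both asserts

assert add_v2(2, 2) == 4
assert add_v2(1, 5) == 6
```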
Obvious Implementation
– If the implementation is obvious then just implement it and see if it runs. If not, step back, just get the test running, and refactor afterwards.
One to Many – Drive Collection Operations
– First, implement operation for a single element. Then, step to several elements.
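A minimal Python sketch of this pattern, with a hypothetical discount operation: the single-element version is written and tested first, then reused to drive the collection operation.

```python
# One to Many: first implement the operation for a single element...
def discounted_price(price, rate=0.1):
    return price * (1 - rate)

assert discounted_price(100) == 90.0

# ...then step up to a collection by reusing the single-element version.
def discounted_total(prices, rate=0.1):
    return sum(discounted_price(p, rate) for p in prices)

assert discounted_total([100, 50]) == 135.0
```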
Continuous Integration
Pre-Commit Check
Run all unit and acceptance tests covering currently worked on code prior to committing to the source code repository.
Post-Commit Check
Run all unit and acceptance tests on every commit to the version control system on the continuous integration server.
Communicate Failed Integration to Whole Team
Whenever a stage on the continuous integration server fails, notify the whole team in order to get the blocking situation resolved as soon as possible.
Build Staging
Split the complete continuous integration workflow into individual stages to reduce feedback time.
Automatically Build an Installer for Test System
Automatically build an installer as often as possible to test software on a test system (for manual tests, or tests with real hardware).
Continuous Deployment
Install the system to a test environment on every commit or manual request. Deployment to production environment is automated, too.
Test Pyramid (figure)
Bibliography
Test Driven Development: By Example by Kent Beck
ATDD by Example: A Practical Guide to Acceptance Test-Driven Development by Markus Gärtner
The Art of Unit Testing by Roy Osherove
xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros
ATDD, TDD cycle
Write acceptance criteria for user story
The whole team defines acceptance criteria for user stories.
Define examples
The whole team defines examples for acceptance criteria used to show that code works.
Write acceptance test skeleton
Map the examples into an empty specification/test in your acceptance test framework (Selenium, MSpec classes and It statements ...)
Explore design
Implement a Spike to gather enough knowledge so you can design a possible solution.
Make an initial design
Roughly design how you want to implement the new functionality, especially the interface for your acceptance test (how to call and verify functionality).
Refactor
Refactor existing code to simplify introduction of new functionality. Run all tests to keep code working.
Write an acceptance test
Add arrange, act and assert parts to the acceptance test skeleton (Given, When, Then or Establish, Because, It ...)
Succeeded, and not all acceptance tests implemented yet
Run acceptance test
You have no class design idea
You have a class design idea
Make error reason obvious
The failing test should state what went wrong so you don’t have to debug the code.
Succeeded, code clean, TO DO list empty
Succeeded, code clean, TO DO list not empty
TO DO list
- Add missing test when you think of one
- Remove test when written
We write the TO DO list into the same file as the unit test with // TODO:
Pick test:
1) Prove that the code is making a hard coded assumption.
2) Prove that something is wrong.
3) Prove that something is missing.
Write a test
Add a minimal test or make a minimal change to an existing test (< 10 minutes).
Run test
Succeeded
Failed
Make error reason obvious
The failing test should state what went wrong so you don’t have to debug the code.
Failed
Succeeded
Clean up code
Apply clean code guidelines. Redesign classes as needed. (< 10 minutes).
Succeeded, code not clean
Legend: DO / DON'T
Precise Buffer Overflow Detection via Model Checking
Sagar Chaki, Scott Hissam
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, USA
chaki@sei.cmu.edu, shissam@sei.cmu.edu
Abstract
Buffer overflows are the source of a vast majority of vulnerabilities in today’s software. Existing solutions for detecting buffer overflow, either statically or dynamically, have serious drawbacks that hinder their wider adoption by practitioners. In this paper we present an automated overflow detection technique based on model checking and iterative refinement. We discuss advantages, and limitations, of our approach with respect to today’s existing solutions. We also describe how our approach may be implemented on top of a model checking technology being developed at the Software Engineering Institute (SEI).
Introduction
Software vulnerabilities [1, 2] are one of the major causes of concern in today’s information centric world. For example it is estimated [3] that “Hacker attacks cost the world economy a whopping $1.6 trillion in 2000” and that “US virus and worm attacks cost $10.7 billion in the first three quarters of 2001”. This problem is further highlighted by the increasing number of attacks that exploit such vulnerabilities. For example, “The CMU CERT Coordination Center reported 76,404 attack incidents in the first half of 2003, approaching the total of 82,094 for all of 2002 in which the incident count was nearly four times the 2000 total. If anything, the CERT statistics may understate the problem, because the organization counts all related attacks as a single incident. A worm or virus like Blaster or SoBig, a self-replicating program that can infect millions of computers, is but one event.”
Buffer overflows are widely recognized [17] to be the prime source of vulnerabilities in commodity software. For example, the CodeRed1 worm that caused an estimated global damage worth $2.1 billion in 2001 [3] exploited a buffer overflow in Windows. In addition, Wagner et al. [22] report, on the basis of CERT advisories, that “buffer overruns account for up to 50% of today’s vulnerabilities, and this ratio seems to be increasing over time.”
© 2005 Carnegie Mellon University. Unlimited distribution subject to the copyright.
Broadly speaking, a buffer overflow occurs when a piece of data $D$ is written to a buffer $B$ such that the size of $D$ is greater than the legally allocated size of $B$. In the case of a type-safe language or a language with explicit bound checking (such as Java), this leads to an exception being thrown. Unfortunately, the vast majority of commercial and legacy software is written in unsafe languages (such as C or C++). Such languages allow buffers to be overflowed with impunity.
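The condition can be sketched directly. The following Python fragment is only an illustrative model of the check described above (an unsafe C program would not perform it, which is exactly the problem); the function names are made up.

```python
# Minimal model of the overflow condition: writing data D into
# buffer B overflows iff the size of D exceeds B's allocated size.
# In C this write would silently corrupt memory; here we just flag it.

def write_overflows(buffer_size, data):
    """Return True if writing `data` into a buffer of
    `buffer_size` bytes would overflow it."""
    return len(data) > buffer_size

# A C-style string copy also needs room for the trailing '\0'.
def strcpy_overflows(buffer_size, s):
    return len(s) + 1 > buffer_size

assert not write_overflows(8, b"1234")
assert write_overflows(4, b"12345")
assert strcpy_overflows(4, "1234")   # 4 chars + '\0' need 5 bytes
```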
Buffer overflows are typically used by attackers to execute arbitrary code (such as a shell) with administrative privileges. For example, a common strategy is to overwrite a program’s activation record (commonly called a stack smashing attack) in order to redirect its control flow to any desired point. As such, buffer overflows are extremely dangerous and can lead to catastrophic system compromises and failures.
As mentioned before, the vast majority of commercial off-the-shelf (COTS) and legacy software involves unsafe programming languages and is therefore particularly vulnerable to buffer overflows. Unfortunately, our critical infrastructure is becoming increasingly dependent on legacy and COTS systems. For instance, only about 23% of the software in the Joint Strike Fighter (JSF) consists of new automatically generated code that we may reasonably assume to be free of buffer overflows. However, the remaining 77% of the JSF software is an assembly of COTS components (12%), legacy code (30%), multi-used systems (23%) and manually written (12%) programs [3], developed in large part using languages that have no intrinsic safeguard against buffer overflows. Thus, an overwhelming majority of the JSF software is prone to buffer overflow vulnerabilities. Other instances of systems relying on COTS software include the voice switching systems of the White House and the NSANet [3]. More importantly, this situation is only going to worsen in the days to come. It is therefore imperative that we develop effective tools that can detect, and aid in fixing, buffer overruns in large software systems written in unsafe languages.
**Buffer Overflow Detection: Existing Approaches**
Given its significance, it is not surprising that considerable effort has been devoted toward the development of buffer overflow detection systems. Nevertheless, a satisfactory solution to this problem remains elusive. In this section we will discuss existing automated techniques for overflow detection. Manual approaches are inherently non-scalable, therefore we focus on procedures that involve a fair amount of automation.
A number of approaches that have been proposed to detect buffer overflows are type-theoretic [18] in nature. Such techniques require that programs be written in a type-safe language and are hence not applicable to the vast body of (legacy as well as in-production) systems that involve unsafe languages such as C or C++. Techniques based on simulation or testing are inexpensive and widely prevalent. However, they usually suffer from extremely low coverage and are typically unable to provide any reasonable degrees of assurance about critical software systems.
Yet other buffer overflow detection schemes advocate a run-time or dynamic [19, 20, 21] strategy. Such approaches incur performance penalties that are unacceptable in many
---
2 Strictly speaking, our technique only applies to COTS components for which source code is also available.
situations. Even when performance is not a serious issue, it is often imperative that we be assured of the correctness of a system before it is deployed since any failure in real-life would be catastrophic. Such guarantees can only be obtained via static approaches.
More recently, a number of static approaches for buffer overflow detection have been proposed that rely on static analysis of programs. These approaches are usually based on converting the buffer overflow problem into a constraint solving problem, such as integer range checking [22] or integer linear programming [23]. Static analysis amounts, in principle, to a form of model checking [24] over the control flow graph of a program. However, a control flow graph is an extremely imprecise model. Therefore, in practice, static analysis is plagued by false positives. More specifically, every probable buffer overflow flagged by static analysis must be manually inspected to ensure that it corresponds to an actual problem and is not an artifact of the imprecise model which has no concrete realization.
In practice, three drawbacks seriously limit the effectiveness of static analysis as an approach for buffer overflow detection. First, each bug report must be manually verified. Second, a large majority of problems reported by static analysis turn out to be false alarms. Finally, there is no automated procedure for getting rid of false alarms by constructing more precise models.
Our buffer overflow detection technique is also static but is based on a paradigm called iterative refinement that overcomes each of the above shortcomings of static analysis. We will describe iterative refinement in detail shortly. But first we describe our model checking infrastructure on which we plan to develop the buffer overflow detection technology.
The PACC Initiative
The Predictable Assembly from Certifiable Components (PACC) [6] initiative at the SEI aims to predict the behavior of a component-based system prior to implementation, based on known properties of components. The vision of PACC is that software components have certified properties (for example, safety, reliability, and performance) and the behavior of systems assembled from components is predictable. To this end, PACC is developing a component specification language called Construction and Composition Language (CCL) [7], a run-time called Pin [10], and a set of reasoning frameworks [9], packaged together as a Prediction Enabled Component Technology (PECT) [8].
Currently, two reasoning frameworks are being developed by the PACC team. The performance family of reasoning frameworks [11] employs analytic theories such as rate monotonic analysis and real-time queuing theory for predicting run-time attributes related to system performance. Typical examples of such attributes are the average and worst-case latencies of a system under various distributions of the arrival rate and execution times of jobs. The ComFoRT [12] reasoning framework uses software verification technology based on model checking [4] and automated abstraction-refinement to prove claims related to system reliability and safety. A significant advantage of both these reasoning frameworks is that they are static, and do not require a system to be executed for making any kind of prediction about its run-time behavior.
In the remainder of this paper we will establish strong connections between the model checking technology being developed as part of the ComFoRT reasoning framework and buffer overflow detection. In particular, we will show how the model checking infrastructure developed as part of ComFoRT can be leveraged to develop an effective overflow detection system.
ComFoRT: Model Checking in PACC
Model checking is an automated approach for exhaustively analyzing whether systems satisfy specific behavioral claims that express safety and reliability requirements. Due to its exhaustive nature, model checking is especially attractive for analyzing concurrent systems where the number of possible interleaving between various components is quite large, and yet must be explored. A distributed safety critical software system is a typical example of such a system. The ComFoRT reasoning framework packages the effectiveness of state-of-the-art model checking in a form that enables software developers to apply the analysis technique without being experts in its use. The key ideas behind ComFoRT are:
• Safety and reliability claims about a target system are expressed using a state/event-based temporal logic. The logic, called SE-LTL [13], was developed particularly for the purpose of specifying properties of software systems. An SE-LTL claim essentially encodes desirable (or undesirable) sequences composed of a combination of constraints on data and occurrence of events. Such claims may equivalently be expressed as finite state machines.
• A CCL component specification, which also includes a relevant claim to be verified, is automatically interpreted into a concurrent program that can be input to a model checker. This program is written in a restricted form of ANSI-C along with finite state machine descriptions of certain library routines.
• The concurrent program resulting from the interpretation is input to a model checker, Copper. The output of Copper is either yes, which means that the claim holds, or no, which means that it does not. If the result is no, then Copper also returns a counterexample, which is an actual execution of the program that violates the claim. The counterexample is reverse-interpreted to the original CCL form and returned as diagnostic feedback.
Iterative Refinement
The ability of Copper to verify claims on concurrent, and in general infinite-state, C programs is based on the iterative refinement paradigm. As mentioned before, iterative refinement addresses the three critical drawbacks of conventional static analysis of programs. It achieves this via an iterative procedure that can be described (please see Figure 1 for a pictorial description), in the context of Copper, as follows:
1. Construct a finite state conservative concurrent model M of the target C program. Copper uses a technique called predicate abstraction to achieve this.
2. Model check M against the desired claim. If M satisfies the claim, and since it is a conservative model, then so does the original C program. In this case, Copper exits
with a yes. Otherwise let CE be a counterexample to the claim with respect to the model M.
3. Check if CE is also a counterexample with respect to the original C program. If so, then Copper exits with no and returns CE as the counterexample. Otherwise CE is said to be spurious since it is a behavior that does not belong to the original C program, but was only introduced in the model M by the abstraction process.
4. Construct a more precise model M using the spurious CE. The new model is guaranteed not to contain CE as an admissible behavior. Repeat from step 2 above.
In summary, iterative refinement improves upon static analysis by enabling automated verification of counterexample for spuriousness, and automated model refinement to eliminate spurious counterexamples. It is therefore extremely suitable for detecting violations of safety conditions in large-scale software systems in an automated and scalable manner. In particular, the absence of buffer overflows is a perfect example of a safety claim that is amenable to iterative refinement.
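The loop above can be sketched as follows; `abstract`, `model_check`, `is_real`, and `refine` are hypothetical stand-ins for the machinery Copper implements, and the toy instantiation below only illustrates the control flow, not a real abstraction.

```python
# Illustrative sketch of the four-step iterative-refinement (CEGAR) loop.

def verify(program, claim, abstract, model_check, is_real, refine):
    model = abstract(program)                 # step 1: conservative model
    while True:
        ce = model_check(model, claim)        # step 2: check the model
        if ce is None:
            return ("yes", None)              # claim holds for the program
        if is_real(program, ce):              # step 3: replay counterexample
            return ("no", ce)                 # genuine violation found
        model = refine(model, ce)             # step 4: eliminate spurious CE

# Toy instantiation: the "program" has behaviors {1, 3}; the abstract
# model over-approximates with {1, 2, 3}; the claim forbids even numbers.
program = {1, 3}
abstract = lambda p: {1, 2, 3}
model_check = lambda m, claim: next((b for b in sorted(m) if not claim(b)), None)
is_real = lambda p, ce: ce in p
refine = lambda m, ce: m - {ce}
claim = lambda b: b % 2 == 1                  # "all behaviors are odd"

# The spurious counterexample 2 is refined away, then the claim holds.
result = verify(program, claim, abstract, model_check, is_real, refine)
assert result == ("yes", None)
```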
Our Approach: PACC for Buffer overflow
In general, model checking is an extremely attractive choice as a foundation for buffer overflow detection technology. The main reason behind this is that buffer overflow is concerned with a program’s control flow and simple relationships between its data. More specifically, in order to prove the absence of buffer overflow, we only need to show, for every control flow point where some data \( D \) is being written into a buffer \( B \), that the allocated size of \( B \) is no less than the size of \( D \). We do not need to be concerned about the properties of the data \( D \) itself. Such situations are most conducive for software model checking to succeed\(^3\).
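As a sketch, assuming a precomputed table of write points (a real tool would derive the allocated and written sizes from the program), the safety condition above reduces to a simple scan; all names here are illustrative.

```python
# Per-write-point safety check: at every point where data D is written
# into buffer B, require alloc_size(B) >= size(D).

def find_overflow_points(write_points):
    """write_points: iterable of (location, alloc_size, data_size).
    Returns the locations where the safety condition fails."""
    return [loc for loc, alloc, data in write_points if alloc < data]

points = [
    ("main.c:10", 16, 8),    # safe: 8 bytes into a 16-byte buffer
    ("main.c:22", 8, 12),    # overflow: 12 bytes into an 8-byte buffer
]
assert find_overflow_points(points) == ["main.c:22"]
```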
In contrast to dynamic approaches [29, 30, 31, 5], model checking is static. It therefore incurs no run-time overheads and requires no mechanisms for graceful recovery once an anomaly has been detected. This feature is particularly useful for mission-critical software whose correctness must be ensured before actual deployment. Finally, the capability of model checking to generate counterexamples is invaluable for producing
\(^3\) Indeed, model checking has been applied with considerable success toward the detection of software bugs (including security bugs) [25, 26, 27] in recent times.
diagnostic feedback\(^4\). However, in order to be fruitfully applied to infinite state systems, such as software, model checking must be combined with a technique such as iterative refinement.
We therefore plan to apply iterative refinement for buffer overflow detection in C programs. More specifically, we will develop our overflow detection technology on top of the ComFoRT reasoning framework. Several features of ComFoRT make it a lucrative and advantageous choice for buffer overflow flaw detection in comparison to the tools and techniques presented above. First, ComFoRT includes powerful and automated abstraction-refinement techniques that allow it to model source code at the correct level of granularity. As mentioned before, all existing static overflow detection tools are limited by their ability to only model programs as their control flow graph. This makes them prone to an excessive number of false alarms and inhibits their wider adoption by practitioners.
In addition, ComFoRT allows the verification of concurrent systems. This will allow our approach to detect buffer overruns in multi-threaded and distributed systems. Such vulnerabilities are expected to become increasingly frequent, and threatening, in the days to come. They are also virtually impossible to detect using present-day tools and algorithms that can only analyze components in isolation.
Another important problem faced by existing buffer overflow detection systems is the inability to specify appropriate environments. This ultimately results in an increased number of false alarms. It is important to note that these false alarms are the result of imprecise modeling of environment as opposed to imprecise modeling of the program being analyzed. The ability to analyze concurrent systems will enable us to specify proper environments and eliminate this category of spurious counterexamples as well.
Certifying Buffer Overflow Freedom
For software systems that are mission-critical in nature, the ability to trust analysis results becomes imperative. This is no longer a trivial issue since the tools that analyze complex software have themselves become quite complicated. As part of both the performance and ComFoRT reasoning frameworks, we are developing validation techniques that enhance our ability to achieve increased confidence in our predictions.
In the context of the ComFoRT reasoning framework, we are investigating techniques for combining certifying model checking [15] and proof carrying code [14]. The goal is to enhance the existing ComFoRT framework so that it outputs a *proof certificate* along with *yes* if a desired claim is found to hold. The validity of the proof certificate can be checked separately to assure the correctness of a positive answer returned by ComFoRT.
The power of this technique lies in the fact that it can be automated and applied to realistic software systems. In addition, certificates are tamper-proof. A valid proof certificate guarantees that the software system is provably secure, even if it was generated by an untrusted source and transmitted over an untrusted communication channel. We
\(^4\) Interestingly, this feature of model checking has also been successfully used to generate attack graphs [28] for intrusion detection in large-scale networks.
plan to extend this technique to generate trusted certificates for software certified to be free of buffer overflows.
**Challenges and Success Measures**
It is also important to discuss some of the challenges that must be overcome in order to successfully adapt and apply model checking technology to buffer overflow detection. An important challenge is scalability. Model checking is known to be hampered by the state-explosion problem, in particular for concurrent systems, whose number of reachable states increases exponentially with the number of components. We believe that powerful abstraction and compositional reasoning techniques, some of which were developed as part of ComFoRT, will enable us to tackle this problem.
The predicate abstraction implemented as part of ComFoRT has limited support for pointers. On the other hand, pointers are an integral part of the buffer overflow problem. We must therefore add improved pointer support to ComFoRT as part of the development of our buffer overflow detection tool. While the theory behind this step is fairly well-known, its practical ramifications are yet to be completely understood. Finally, the abstraction refinement scheme implemented in ComFoRT is geared toward SE-LTL counterexamples. We must tailor this step for buffer overflows. Once again, this is theoretically straightforward but practically uncharted.
As the old saying goes, nothing succeeds like success. Thus, the ultimate success story would be the fruitful demonstration of the effectiveness of our tool on a wide selection of representative examples. Some additional achievements would be: (a) a better understanding of the applicability of model checking for buffer overflow detection, and (b) a qualitative measure of the relative advantages of iterative refinement over traditional static analysis methods in the context of buffer overflow detection.
**Summary**
In summary, the goal of this effort is to develop a buffer overflow infrastructure that:
- **Models source code more precisely, yet scalably**, than existing tools. This will reduce the number of false alarms and enable us to analyze larger programs, fostering wider acceptance by practitioners. We believe that techniques such as iterative refinement and symbolic representations will help us in this direction.
- **Detects buffer overflows that arise only due to the interaction of multiple components and specifies appropriate environments.** This is impossible using existing buffer overflow detection tools that analyze components in isolation. We believe that our use of model checking will provide us with the vital capability of finding distributed buffer overflows.
- **Certifies code to be free of buffer overflows.** This is clearly an extremely powerful capability that is also lacking in existing tools. An important precondition for certification is that analysis must be conservative, i.e., if the analysis cannot find any problems, then there really are no problems. Fortunately, the abstraction techniques we use are conservative.
Conclusion
Society is becoming ever more reliant on software to manage critical infrastructure. Buffer overflows remain the major source of vulnerabilities in such software systems, and continue to grow in importance. Despite considerable effort, a satisfactory solution to the buffer overflow detection problem remains elusive. In this paper we have presented a static buffer overflow detection scheme based on model checking and iterative refinement. In particular, we believe that the model checking technology being developed as part of the PACC initiative at the SEI can be adapted to develop such a buffer overflow detection tool. Our tool will not only be able to analyze concurrent systems but will also be able to generate certificates that guarantee that the target program is free from buffer overflows. We believe that such a tool will go a long way toward enhancing the state of the art in buffer overflow detection and certification technology.
References:
Peter Šípoš
Scripting in Audacity Audio Editor
Department of Distributed and Dependable Systems
Supervisor of the bachelor thesis: RNDr. Tomáš Pop
Study programme: Computer Science
Specialization: Programming
Prague 2013
Dedication. I wish to dedicate this thesis to my parents for their support and to my supervisor for his patience.
I declare that I carried out this bachelor thesis independently, and only with the cited sources, literature and other professional sources.
I understand that my work relates to the rights and obligations under Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that Charles University in Prague has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 paragraph 1 of the Copyright Act.
In .......... date ............ signature of the author
Title: Scripting in Audacity Audio Editor
Author: Peter Šípoš
Department: Department of Distributed and Dependable Systems
Supervisor: RNDr. Tomáš Pop, Department of Distributed and Dependable Systems
Abstract: The goal of this bachelor thesis was to create an extension for the Audacity program that allows scripts to be used while editing audio recordings.
The first part of the thesis describes the architecture of Audacity, the options for extending its functionality, and the alternatives that are available. It then describes the critical decisions that were taken before the extension was implemented, concerning the choice of scripting language, the handling of error states, and the design of the graphical interface. At the end of the thesis, the functionality of the extension is demonstrated with examples.
Finally, the application was implemented as a demonstration of the techniques described in the thesis.
Keywords: Scripting, Audacity, Audio Editing
Title: Scripting in Audacity Audio Editor
Author: Peter Šípoš
Department: Department of Distributed and Dependable Systems
Supervisor: RNDr. Tomáš Pop, Department of Distributed and Dependable Systems
Abstract: Audacity is a popular and widely used audio editor available for all major platforms, including Windows, Mac, and various Linux distributions. Audacity's functionality can be controlled through a sophisticated user interface, but the editor lacks support for the automated execution of scripts; using Audacity, e.g., to perform the same action on multiple files can therefore be tedious.
The thesis aims at extending the Audacity editor to allow the use of scripts in the audio editing workflow.
The first part of the thesis overviews Audacity's architecture and discusses how Audacity can be extended and what alternative applications are available. Then the thesis describes the most important decisions that were taken, including the choice of scripting language, the management of errors, and the design of the user interface. Finally, the extension's functionality is shown on several examples reflecting typical use cases.
Keywords: Scripting, Audacity, Audio Editing
Introduction
In the beginning of computer history, computers were dedicated to scientific calculations or office work. Later, they became capable of displaying pictures and playing audio and video files. However, these computers were not powerful enough to create audio and video material, so special equipment was used for that purpose.

As the computing power of home computers increased, creating multimedia material at home became easier. In their time, the most advanced home computers capable of audio editing were the Amiga computers from Commodore. These computers could be extended with digitization boards that made them capable of recording audio signals. Moreover, audio editing software was available for them, so they were fully equipped to become a home cutting studio. Because of their reasonable pricing, these computers became very popular.

The IBM-compatible personal computers were also becoming more powerful, so playing and recording audio with them became a reality, too. Several audio editors were created; among them, perhaps the best-known applications were Cool Edit and Sound Forge. These applications are still being developed today: over the years, Cool Edit was taken over by Adobe, which releases it under the name Audition, and Sound Forge is now the property of Sony Corporation.

Besides proprietary software, several open-source alternatives appeared. I would like to mention two of them: Audacity and SoX. While SoX is a command-line application that focuses on playing and recording audio files but also supports some basic editing tasks, such as mixing, Audacity is a complete audio editing application.

Audacity is a fairly well-known audio editor with a large user base. It is a multiplatform application that runs on the Windows, Linux, and MacOS operating systems. Its functionality is quite extensive: it supports, e.g., multitrack editing, filters, and plugins. Through its graphical interface, the commands can be accessed easily. However, its strength, the focus on the graphical user experience, is also its biggest deficiency: repeatedly performing the same operations on several audio files is inconvenient.

This thesis aims to tackle this deficiency by making the execution of "batch commands" more convenient. This can be achieved by creating a scripting interface that, through a selected scripting language, makes Audacity's inner functionality available for scripting purposes. Thanks to this interface, the user will be able to control Audacity with a script that performs the task by calling Audacity's functions.

Choosing Audacity as the target of this thesis is beneficial because of its large user base: these users will not need to get used to a new audio editor, and meanwhile they can use scripts to make their tasks easier.

Moreover, to make the workflow more convenient, the scripting interface created as part of this thesis will contain a script editor. This editor will provide basic editing functionality, including syntax highlighting.
1. Background
In this chapter I would like to describe some techniques that are used in my application and in Audacity itself.
1.1 The Architecture of Audacity
I use Audacity because it is a well-known, continuously developed, and mature audio editor. However, it is designed as an application with a graphical user interface, currently with restricted scripting capabilities. Fortunately, Audacity supports plugins. The main purpose of these plugins is to add sound-aware capabilities, such as generating DTMF tones. These plugins use documented function calls and have access to all the components that build up the editor.

Fortunately, the developers of the Audacity project are aware of this weakness. The experimental plugin mod-script-pipe can be used to access Audacity's functions from an external application. Its operation is described in more detail in Section 2.1.

Audacity uses the wxWidgets library to create its user interface, implementing all the functions necessary to draw the components and handle events. This style of design proves useful for mod-script-pipe: the plugin uses a technique called window hijacking to take control over the GUI components. This means that the plugin creates its own instances of these components, so it has access to their functions.
1.2 Interprocess Communication
This thesis does not aim at a comprehensive overview of interprocess communication, but since IPC is a crucial part of my application, I would like to describe the technique.
There are nine options for IPC on Windows operating systems [1]: the clipboard, the Component Object Model (COM), data copy, Dynamic Data Exchange (DDE), file mapping, mailslots, pipes, Remote Procedure Call (RPC), and Windows sockets.

Each process in the system can communicate with other processes through one of the IPC methods. The simplest example from the user's perspective is copying an image from a web page to the clipboard and then pasting it into a text document. However, in the background several requirements must be fulfilled. First of all, both processes must be prepared for the clipboard feature (this is not problematic nowadays), and the receiver must support the format of the data in the clipboard. For example, copying a video clip from video editing software into Notepad will not work. Another problem with the clipboard is that every single process can access it. This is acceptable when the user performs the tasks mentioned above, but it can be really problematic when some automated mechanism uses the clipboard, because user interaction can interfere with it.
As I will describe in Section 2.1, the application implemented in this thesis uses named pipes to communicate with Audacity. Named pipes, alongside anonymous pipes, are a specific type of pipe. While anonymous pipes are best suited for redirecting standard input and output between child and parent processes [1], named pipes can be used between fully distinct processes, even if they are running on separate computers [2].
1.3 Scripting Languages in General
It tends to be hard to define what scripting languages are. Usually, scripting languages are computer languages that are interpreted rather than compiled [3]. Another way to describe scripting languages is through the definition of scripts themselves: while the source code of an application written in, e.g., C++ calls functions implemented in the same language, a program (or rather a script) written in a scripting language usually calls functions implemented in another language [3].

Another option for defining scripting languages is to characterize them as domain-specific languages. They are usually prepared for a specific environment, while general-purpose programming languages, as their name suggests, can be used for a wide spectrum of tasks. For instance, bash is considered a scripting language: it is designed to control the Unix operating system by executing commands, which are usually implemented in a different programming language. Microsoft's PowerShell is very similar to bash, and not surprisingly it is also considered a scripting language.

The third option for distinguishing scripting languages from the rest of the language world is to differentiate between compiled and interpreted languages. Scripting languages are usually interpreted, which means that the source code is not translated to machine code or an intermediate code; the interpreter interprets it directly. For instance, bash is an interpreted language. However, in some cases this is not so straightforward. The above-mentioned PowerShell can, e.g., access .NET libraries, which are precompiled to an intermediate language. Moreover, the newer JavaScript implementations in web browsers compile the script before execution for performance reasons.

In the previous paragraph I mentioned JavaScript. Despite the slightly unusual behaviour of its newer interpreters, it is considered a scripting language. Code written in JavaScript typically calls functions that are implemented in a different language: for instance, an HTML5 game uses JavaScript code to perform painting, but these functions are fundamentally implemented in a different language. What is more, JavaScript's initial purpose was enriching websites, which marks it as a domain-specific language.
1.4 Related Work
In the previous Section 1.3 I described scripting languages by mentioning some of them. Bash and PowerShell are mainly used for interacting with the operating system and its components (e.g., file copying, application installation, ...). Other widespread scripting languages are Perl and Python. One implementation of Python is related to this thesis: GIMP-Python [3], which enables creating plugins for GIMP using the Python language.

However, there is another scripting interface for GIMP, called Script-Fu [5].
2. Analysis
In the following chapter I would like to discuss the decisions concerning Audacity and the scripting interface that I made in the early stages of development.
2.1 Overall Architecture
Audacity is designed to be controlled primarily with the mouse through its user interface, or with the keyboard using shortcuts. However, we need to control it with commands sent from a script. To achieve this, we need to utilize some kind of programming interface which can be called from our script.

In principle, there are two options to tackle this problem. The first option is to implement a new interface in Audacity by modifying its source code and adding the necessary functions to existing parts of the application. The main benefit of this approach is having full control over the capabilities accessible from scripts.

On the other hand, there is a major disadvantage: it is hard to push patches directly into the main Audacity source chain. This means that after a new Audacity release, the whole editor or some of its components may stop working.

The second option is to create a standalone application which communicates with Audacity using an existing interface. Such an interface implies that we can access Audacity through standardized functions. This lets us keep our code independent from Audacity's source, which can be handy in the future as both applications evolve.

I decided to use the second approach, and within it to use an interface that is an experimental module written by the Audacity team. The module, called mod-script-pipe, is a plugin for Audacity.

The module enables us to control the main application with commands sent via a system pipe, and it is primarily designed to be called from scripts.
The advantages of the second approach are the following:
- We communicate with the application via a standard channel
- The plugin is going to be developed further in the future
- We do not need to modify the existing Audacity source

The main disadvantages of mod-script-pipe are the following:
- The plugin is still only available in beta status
- It is not included in the standard Audacity executables; it has to be compiled from source code
- Existing functions can still change
- Its documentation is in an early stage
Considering all the advantages and disadvantages of these approaches (modifying Audacity itself, or building a standalone application that uses either the existing plugin or a newly implemented one), I decided to implement an interface that connects to the plugin without touching Audacity's source code.
2.2 Choosing the Implementation Programming Language
Audacity is written in C++ using the wxWidgets library, which makes it available for Windows and Unix-like systems alike. However, my decision to create a standalone application gave me the freedom to choose a different programming language.
I decided to build the application on the .NET environment using the C# language. The main reasons for my decision were the following:
- My application will not execute computationally demanding tasks, so the built-in optimization of .NET code is sufficient
- More convenient use, because the .NET environment handles some areas itself, e.g., disposing of unused objects with the garbage collector
- The chosen JavaScript interpreter is also written in managed code
2.3 Choosing Scripting Language
Choosing the language the user uses to interact with my application is very important. In the future the application should be able to use its own plugins for different scripting languages, but in the context of this thesis I decided to use JavaScript. JavaScript is well known among users who use scripting in their everyday life, so they can get used to it very quickly in this case, too. Moreover, JavaScript supports the features necessary to interact with Audacity's scripting API, e.g., functions and objects.

Given my decision to use C#, I had to focus on interpreters that can be accessed from the .NET environment. I considered two JavaScript interpreters for my application: Node.js [6] and JavaScript.NET [7]. Both solutions have their advantages and disadvantages. Node.js is based on Google Chrome's JavaScript engine, V8, which is fast and supports many functions (e.g., disk I/O, networking, encryption, ...). JavaScript.NET is a much simpler implementation of the language interpreter: it focuses purely on the language itself, and we have to implement the classes we need to interact with. On the other hand, I found that the main advantage of this interpreter is that it can be attached to the main project as a DLL library, so no additional user interaction is needed, while Node.js would have to be installed on the target computer.

For the reasons mentioned above, I decided to implement my application using the JavaScript.NET interpreter.
2.4 Error Handling
User-created scripts can contain errors, or errors can occur while a script is being executed; error handling is therefore a crucial part of script interpreters.

Runtime errors can be handled in different ways. One option is to implement functions which always return a value, even when an error occurred; in this case the returned value contains information about the failure. Another method is to let the function execution be interrupted when an error happens; in this case an exception is thrown to describe the error state.

Both approaches have their advantages and disadvantages. Handling exceptions consumes more resources than the usual execution flow; however, exceptions should not be very frequent.

The idea of functions always returning a value may sound good, but it has several drawbacks. The .NET runtime signals errors with exceptions. This means that, in this particular case, all exceptions would have to be handled inside our API, which brings several problems:
- There would be redundant code within the source, caused by similar errors happening at different actions (errors in pipe communication are one example)
- Signaling error statuses would need our own system, which would basically be the same as, or very similar to, the system implemented in .NET itself

For these reasons, I decided to use exceptions for error handling rather than creating special return values for my API functions.
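As a sketch of the chosen approach, the snippet below shows how a failing operation surfaces as an exception that the caller handles in one place, instead of every function inventing its own error return value. The `PipeError` type and the `openProject` function are illustrative, not the real API.

```javascript
// Illustrative error type for failures on the pipe to Audacity.
class PipeError extends Error {
  constructor(message) {
    super(message);
    this.name = 'PipeError';
  }
}

// Hypothetical API call: throws instead of returning a sentinel value.
function openProject(path, pipeConnected) {
  if (!pipeConnected) {
    throw new PipeError('could not reach Audacity pipe');
  }
  return `opened ${path}`;
}

// The caller handles every failure mode in a single catch block.
let status;
try {
  status = openProject('song.aup', false);
} catch (err) {
  status = `error: ${err.message}`;
}
console.log(status);
```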
2.5 Audacity API
The mentioned Audacity extension uses its own commands to control the main program's interface. These commands can be considered the Application Programming Interface (API) of the main program.

The commands form two groups:

**General commands** They control general functionality, e.g., creating a new track or playing the audio file

**Menu commands** They control functions which are accessible from the menus, e.g., applying a filter

Menu commands control functions that are usually invoked from the menu structure of Audacity's graphical interface; applying a filter to an audio file is one example. General commands can be described as everything else, including the menu commands. In fact, menu commands form a subset of all commands, but they are special because the majority of executed commands are menu commands.
These commands are sent through one of the pipes created by the plugin, and their results are returned through the second pipe. However, the commands have to be specially formed. The plugin's interpreter can withstand some malformation, but mostly only in whitespace (see the example file for the plugin), so our custom interpreter really should respect the conventions.
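A small formatter can keep these conventions in one place. The sketch below builds command lines in a `Name: Key=Value` shape; the concrete command and parameter names are illustrative assumptions, not the plugin's documented vocabulary.

```javascript
// Build a command line so the caller never handles whitespace rules.
function formatCommand(name, params = {}) {
  const pairs = Object.entries(params).map(([k, v]) => `${k}=${v}`);
  return pairs.length ? `${name}: ${pairs.join(' ')}` : `${name}:`;
}

console.log(formatCommand('MenuCommand', { CommandName: 'Play' }));
// MenuCommand: CommandName=Play
console.log(formatCommand('Screenshot'));
// Screenshot:
```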
2.6 Script Environment
It’s important for users to have ability to customize the environment for their needs. This feature is needed because of their different needs based on different properties of their scripts. For example, some users have mainly simple, short scripts for that the default settings are sufficient. Other users create more complex scripts, so they may need to adjust the environment. An example of this situation could be a script which is not necessarily connected to Audacity depending on some function results. In this case there is no need to automatically connect to the Audacity, therefore automatic connection and cleanup is not necessary.
On the other hand, having only global settings may not be ideal, because users can obtain scripts from external sources, which may naturally be written for different environment settings. I considered the option of adding switches to scripts’ source code, which override the system defaults. The idea came from compiled programming languages, such as Pascal and C, where these switches adjust the compiler [9]. However, I found, that implementing switches, that are available from the scripts, can be confusing, because of mixing several approaches of scripting languages and macros.
Finally, I decided to add a graphical option to control the automatic initialization and cleanup process.
2.7 Storage Settings
We want to keep our settings for the script environment. The simplest way to store this data is a plain text file. The amount of data is quite small, so I decided to use the plain text format. I also considered using XML to save the data, but serialized data may be too complicated for editing by hand, which can be useful in some cases.

The main problem was deciding where to save the file. In this work I create the program and test it on the Windows operating system, which implies that I have to account for the Windows multi-user environment.

Since Windows Vista, the Program Files directory is not writable without special permissions. Moreover, we would have to manage users within the AudacityEditor. These factors mean that using the application directory, or simply one file in a user-specified directory, is inconvenient.
As a solution I decided to use the user's appdata folder to store the settings file. To access this folder, the Application class [10] from the .NET Framework is used; its UserAppDataPath property holds the path.

As a result, we use Windows' user management and do not have to care about permissions, because the file is also accessible in user mode.
2.8 File Access
The scripting interface communicates with Audacity using pipes, and Audacity loads input files on its own. However, in some cases we need to access files from the script. These file operations involve accessing the file system to retrieve the content of a directory. Once the list of files in the specific directory is ready, we can go through it and pass each needed filename to Audacity.
However, passing file names to Audacity can be problematic. As I mentioned in Section 2.1, my scripting interface uses named pipes to communicate with Audacity, and in the current version of mod-script-pipe there is a restriction that filenames passed through the pipe must not contain spaces, because the space character is used as a delimiter.

As a solution I considered a workaround that would involve copying an input file with an invalid pathname to temporary storage, after which the temporary pathname would be passed to Audacity. However, I found this solution problematic: when a script is executed multiple times, the files would have to be copied before every run and deleted afterwards. This would be unusual and time-consuming in most cases.

As a secondary solution I implemented an exception that is thrown when an audio file with an illegal file name is accessed. Thanks to this, the user knows about the problem and can act accordingly.
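The secondary solution can be sketched as a guard that rejects file names the pipe cannot carry, so the script author learns about the problem instead of getting a silently broken command. The error type name is illustrative.

```javascript
// Illustrative error type for names the pipe cannot transport.
class IllegalFileNameError extends Error {}

// Reject names containing a space, the plugin's delimiter character.
function checkPipeSafe(fileName) {
  if (fileName.includes(' ')) {
    throw new IllegalFileNameError(
      `"${fileName}" contains a space and cannot be passed through the pipe`);
  }
  return fileName;
}

console.log(checkPipeSafe('recording_01.wav')); // accepted
try {
  checkPipeSafe('my recording.wav');
} catch (e) {
  console.log('rejected:', e.message);
}
```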
I would like to mention that there is no support for file access in the current JavaScript.NET interpreter, but we can implement a wrapper for I/O functions. The wrapper can be an internal part of the AudaEditor; using this approach, the wrapper is always accessible from the script file. Another option is to create a standalone .NET library implementing the needed functions, plus a JavaScript wrapper to attach it to the actual project. This second approach is simply a wrapper around a wrapper, and because file input/output operations tend to be frequent, I decided to use the first option.
2.9 Graphical Interface
Audacity is implemented in C++ using the wxWidgets library for its graphical user interface, which makes Audacity a cross-platform application. There is an existing wrapper, called WX.Net [12], that makes it possible to use the library in the .NET environment. However, it seems that its development may stop in the future, so I decided to abandon this solution and implement the scripting interface purely with the graphical elements available in the .NET libraries. Fortunately, this does not necessarily mean focusing on one platform, because thanks to the Mono project the application can also run on other operating systems.
2.9.1 Editor
The main area of the scripting interface is dedicated to the editor. Besides script editing, the editor should also provide at least a basic level of safety for our source code. To fulfill this need, the editor should warn the user if the source code is unsaved and a dangerous situation occurs in this context, such as creating a new file or simply closing the application window.

I decided to implement a basic syntax-highlighting mechanism for the editor, which distinguishes two types of keywords. One group consists of the reserved words of JavaScript (e.g., if, else, for, var, function, ...), and the other group consists of the keywords that describe the objects implemented as part of this thesis (auda, filemanager, console).
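The two keyword groups can be sketched as a small classifier; the highlighter then only has to pick a colour per category. The keyword sets below are abbreviated, and the token splitting is deliberately naive (whitespace only).

```javascript
// Reserved words of the JavaScript language (abbreviated list).
const JS_KEYWORDS = new Set(
  ['if', 'else', 'for', 'while', 'var', 'function', 'return']);

// Objects provided by the scripting interface itself.
const API_KEYWORDS = new Set(['auda', 'filemanager', 'console']);

// Decide which highlighting category a token belongs to.
function classify(token) {
  if (JS_KEYWORDS.has(token)) return 'js-keyword';
  if (API_KEYWORDS.has(token)) return 'api-keyword';
  return 'plain';
}

console.log('var i'.split(/\s+/).map(classify));
// [ 'js-keyword', 'plain' ]
```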
2.9.2 User Interface–Script Interaction
When running a script, we want feedback about whether there was a problem or everything went well. For signaling problems during editing I chose message boxes, but during script execution they can be disturbing if errors appear frequently, with the flashing message boxes becoming a nuisance; therefore I decided to use text boxes to show status messages and error information. Controlling the script execution is also connected with UI–script interaction, which is described in more detail in Section 3.4.
2.10 Projects And Files
Scripts are usually a short portion of code, which means that creating projects may not be necessary. On the other hand, grouping the required resources for a script can ease development by reusing code implemented once. In this context, a project means a set of files containing the main script, JavaScript libraries (JavaScript files containing function definitions, requested function calls, and settings), dynamic libraries with their JavaScript wrappers, and the project settings.

The current JavaScript interpreter does not support execution of more than one script, but this problem can be solved by concatenating the code and preprocessing the file.
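The preprocessing step can be sketched as follows: library files are concatenated in front of the main script, so their definitions exist by the time the main script runs. The file contents are illustrative, and `eval` stands in here for the JavaScript.NET interpreter.

```javascript
// Concatenate library sources and the main script into one source
// text; order matters, since definitions must precede their use.
function buildSource(libraries, mainScript) {
  return [...libraries, mainScript].join('\n');
}

const libs = ['function twice(x) { return 2 * x; }'];
const source = buildSource(libs, 'twice(21);');

// eval returns the value of the last expression in the source.
const result = eval(source);
console.log(result); // 42
```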
3. Implementation
In this chapter I would like to describe several parts of my implementation of the scripting interface, using the tools mentioned in the previous chapters.
3.1 The Overall Architecture

The mod-script-pipe plugin communicates with the outside world through pipes. My application connects to these pipes (one for incoming and one for outgoing messages) using a proxy component, which transforms the .NET function calls into matching mod-script-pipe commands and sends them through the pipe.

The interpreter is the component which translates the user's script into API calls. In the future this component could be user defined, but for now JavaScript.NET is used.

The editor components simply provide tools for script editing.
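The proxy's role can be sketched with the two pipes simulated by in-memory queues, so the snippet runs without Audacity; the acknowledgement text is an assumption, not the plugin's real reply format.

```javascript
// Stand-in for Audacity behind the two pipes of mod-script-pipe.
function makeFakeAudacity() {
  return {
    toPipe: [],    // command lines written by the proxy
    fromPipe: [],  // result lines written back by "Audacity"
    send(line) {
      this.toPipe.push(line);
      this.fromPipe.push(line + ' finished: OK'); // fake acknowledgement
    },
    receive() {
      return this.fromPipe.shift();
    },
  };
}

// The proxy turns a function call into a command line and waits for
// the matching response line on the second pipe.
function callCommand(pipes, command) {
  pipes.send(command);
  return pipes.receive();
}

const pipes = makeFakeAudacity();
const reply = callCommand(pipes, 'Play:');
console.log(reply); // Play: finished: OK
```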
3.2 Application Properties And States
Application properties are user-set preferences influencing the behaviour of scripts. One example is the property #autoinit, which causes an automatic call of the init() function from the editor's Audacity API. To store these properties I decided to implement a class that holds all the data. My decision was based upon similar implementations of system global variables in the Java API.

Application properties are first set to default values. After the first run, if a user-specific settings file is available, the values are read from that file.

While application properties are connected with script interpreting, application states are connected with the user interface. The application states contain data about the application and about the script. These states are similar to Windows Phone AppStates, which contain information about whether the app is running, tombstoned, etc. However, this editor does not need such a feature for its own run; instead, I find it ideal for monitoring the scripts' behaviour. The application states can indicate whether the script is running, is stopped, or whether an error occurred. Moreover, I also find it suitable to store data about file transactions, e.g., file loading errors or project creation errors. (If opening a file failed, we can update the GUI to disable controls which would otherwise access the file's content.)
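The defaults-then-override behaviour of the properties can be sketched as a plain merge; the property names follow the text above but are otherwise illustrative.

```javascript
// Built-in defaults, used until a user settings file is found.
const DEFAULTS = { autoinit: true, autocleanup: true };

// Values read from the settings file override matching defaults;
// unknown keys in the file are ignored, missing keys keep defaults.
function applyUserSettings(defaults, userValues) {
  const props = { ...defaults };
  for (const key of Object.keys(userValues)) {
    if (key in props) props[key] = userValues[key];
  }
  return props;
}

console.log(applyUserSettings(DEFAULTS, { autoinit: false }));
// { autoinit: false, autocleanup: true }
```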
3.3 Updating The User Interface
If at least one of the application states changes, the application must update its user interface to reflect the new values. Most frequently this happens when a script is being executed.
3.3.1 Signalizing The Script Status
Since a script can run for a longer interval, it is useful to have an indicator of its status. The problem is that Audacity does not send a response until the command is finished. Moreover, we cannot get information from the interpreter about the number of executed functions or the total number of functions.

First, I considered using an indeterminate progress bar to indicate that the script is being executed. Its advantage is simple implementation, but it does not carry much information.

Second, I thought about creating my own control, which indicates the execution status with a colour (red, green, and grey). The advantage of this solution is that it carries slightly more information, and the red colour can warn the user to check the error log.
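The state-to-colour mapping of that control can be sketched in a few lines; the state names are assumptions, and grey doubles as the fallback for unknown states.

```javascript
// Colours matching the indicator described above.
const STATE_COLOURS = { idle: 'grey', running: 'green', error: 'red' };

function indicatorColour(state) {
  return STATE_COLOURS[state] || 'grey';
}

console.log(indicatorColour('running')); // green
console.log(indicatorColour('error'));   // red
```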
3.4 Termination of Script Execution
The JavaScript.NET interpreter runs in a separate thread, while the GUI uses its own thread. The interpreter continuously translates the script and calls the background functions. I decided to stop the interpreter by adding a shared object that contains a boolean variable. The proxy, which connects the interpreter to Audacity, checks before every function call whether the script has been interrupted by the user, and stops if it has.
This approach has one drawback: because Audacity runs in a different process,
it continues performing its last task. That task, however, can be interrupted by
user interaction.
4. User Guide
In this chapter I would like to provide an overview of how my application works and which tools are necessary to use it.
4.1 Installation
The Scripting Interface requires a functional Audacity application with mod-script-pipe support enabled. It was tested with .NET Framework 3.5, but older versions are probably adequate as well.
The program is started by clicking the AudacityScriptingInterface icon. If one of the settings is changed, it creates a text file in the
The compiled version of Audacity used for testing is located on the attached DVD. For a clean compilation, tutorials are available on the following sites: Compiling Audacity for Beginners [13], Developing on Windows [14], and Audacity Scripting [8].
4.2 User Interface
As can be seen in Figure 4.1, the majority of the user interface is dedicated to the editor. On the right side, the control panel and the consoles accompany the editor. The consoles are dedicated to several printing functions, which are called from the script. Moreover, the tabs show the number of unseen messages; these counters are reset when the user checks the messages. The error console is usually used by the script interpreter to announce errors, but it is also available to users. The script interaction states are also displayed on the script control panel, which is in the upper right corner.
The script control panel contains two buttons, which run and stop the script execution. Moreover, there is a square that signalizes the state of the application by changing its colour. Dark grey means the inactive state, when no script execution is running; green means a running script with no errors; yellow appears when a warning message was sent to the console. Finally, red means an error, and information about the error appears in the error console. Figure 4.2 shows the situation when the script execution was interrupted. It had already printed a line to the simple output, but the error console shows the real problem. After clicking the tab, we can see that the scripting interface was not able to connect to Audacity. This is shown in Figure 4.3.
4.3 Examples
4.3.1 Example 1 - Filtering
In this example we merge two noisy audio files. A denoise filter is applied after the merge, and the result is exported to a third file. We assume that the input files are stereo tracks.
Tracks A and B are opened (the audio files C:/music/m.mp3 and C:/music/m-2.mp3 were used in this example). Track A corresponds to the audio.mp3 file and track B to audio2.mp3.
Listing 4.1: Step 1 and 2
```javascript
auda.import('C:/bc/sample/audio.mp3');
auda.import('C:/bc/sample/audio2.mp3');
```
Unnecessary parts are cut from the tracks: position 0:00:00-0:01:00 (the first minute) is cut from track A and position 0:01:00-0:03:00 from track B.
In Audacity, track A occupies the 0th and 1st mono tracks, whose indices are passed as the last two parameters of the select command. The selection range values are given in seconds.
Listing 4.2: Step 3
```javascript
auda.select('Range', 0.0, 60.0, 0, 1);
auda.cmdDelete();
```
Track B is located on the 2nd and 3rd mono tracks.
Listing 4.3: Step 4
```javascript
auda.select('Range', 60.0, 180.0, 2, 3);
auda.cmdDelete();
```
The denoise filter (called NoiseRemoval in Audacity) is applied to all tracks (all 4 mono tracks).
Listing 4.4: Step 5
```javascript
auda.select('All');
auda.effNoiseRemoval();
```
There is no explicit track join function in mod-script-pipe, because tracks are automatically merged into the needed format on saving. The result is saved to C:/bc/sample/audio3.wav.
Listing 4.5: Step 6 and 7
```javascript
auda.export('C:/bc/sample/audio3.wav');
```
4.3.2 Example 2 - Speed Up
This example script makes changes to all mp3 files in the specified directory. It cuts out the first 4 seconds of every track and speeds up each of them by 100 per cent.
Listing 4.6: Speed up recordings in a specified directory
```javascript
var dir = 'C:/music';
var files = filemanager.GetFiles(dir, '*.mp3');
auda.init();
for (var i = 0; i < files.length; i++)
{
    var fullpath = files[i];
    var filename = filemanager.GetFileName(fullpath);
    var exportpath = dir + '/' + filename + '-2.wav';
    auda.import(fullpath);
    auda.select('Range', 0.0, 4.0, 0, 1);
    auda.cmdDelete();
    auda.select('All');
    auda.effChangeSpeed(100.0);
    auda.export(exportpath);
    auda.select('All');
    auda.removeTracks();
}
auda.cleanup();
```
5. Conclusion
The main goal of the thesis was to extend Audacity editor with the capability of automated scripting.
We have shown that scripting can be supported via an experimental extension of Audacity called mod-script-pipe.
In particular, we have implemented a scripting console in a separate process, executing scripts written in the well-known JavaScript language. We also created a graphical user interface that makes creating and editing scripts more user-friendly.
We have shown through several examples that even though scripting is not the most important part of Audacity's functionality, it is very useful in several cases.
On the other hand, the current implementation can be extended in several respects; for example, future work includes support for other scripting languages and improvements in the interaction between the scripting interface and Audacity.
A. API functions
In this appendix I would like to catalogue the commands needed to use the scripting API. These commands can be accessed through pre-defined objects, as shown in Section 4.3.
A.1 Functions of *auda* object
A.1.1 *init* function
A.1.2 *cleanup* function
A.2 Functions of *filemanager* object
A.2.1 *GetFiles* function
A.3 Functions of *console* object
A.3.1 *Clean* function
A.3.2 *PrintLine* function
A.3.3 *PrintErrorLine* function
A.3.4 *PrintWarningLine* function
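As an illustration of how these objects combine, the sketch below reproduces the typical call pattern from Section 4.3. The `auda`, `filemanager`, and `console` objects are stubbed here so the snippet is self-contained; in the editor itself they are pre-defined by the host application, and the stub behaviour is an assumption inferred from the examples above.

```javascript
// Stub implementations standing in for the editor's pre-defined objects.
var log = [];
var console = {
  Clean: function () { log.length = 0; },
  PrintLine: function (s) { log.push('INFO: ' + s); },
  PrintWarningLine: function (s) { log.push('WARN: ' + s); },
  PrintErrorLine: function (s) { log.push('ERROR: ' + s); }
};
var filemanager = {
  GetFiles: function (dir, mask) { return [dir + '/a.mp3', dir + '/b.mp3']; },
  GetFileName: function (path) { return path.substring(path.lastIndexOf('/') + 1); }
};

// Typical call pattern (cf. Listing 4.6):
var files = filemanager.GetFiles('C:/music', '*.mp3');
console.PrintLine('found ' + files.length + ' files');
console.PrintLine('first file: ' + filemanager.GetFileName(files[0]));
```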
Bibliography
http://www.mactech.com/articles/mactech/Vol.15/15.09/ScriptingLanguages/index.html
http://www.gimp.org/docs/python/index.html
[5] Script-Fu Documentation,
[6] Node.js API documentation, http://nodejs.org/api/
[7] JavaScript.NET Homepage, http://javascriptdotnet.codeplex.com/
[8] Audacity Scripting, http://manual.audacityteam.org/man/Scripting
[9] Freepascal compiler settings, http://www.freepascal.org/docs-html/user/usersu68.html
http://wxnet.sourceforge.net/wxbuild/
[13] Compiling Audacity for Beginners, http://wiki.audacityteam.org/wiki/CompilingAudacityForBeginners
[14] Developing on Windows
Pre-print version of the paper "AIOCJ: A Choreographic Framework for Safe Adaptive Distributed Applications"
The original version can be retrieved at "https://doi.org/10.1007/978-3-319-11245-9_9"
AIOCJ: A Choreographic Framework for Safe Adaptive Distributed Applications
Mila Dalla Preda¹, Saverio Giallorenzo², Ivan Lanese², Jacopo Mauro², and Maurizio Gabbrielli²
¹ Department of Computer Science - Univ. of Verona
² Department of Computer Science and Engineering - Univ. of Bologna / INRIA
Abstract. We present AIOCJ, a framework for programming distributed adaptive applications. Applications are programmed using AIOC, a choreographic language suited for expressing patterns of interaction from a global point of view. AIOC allows the programmer to specify which parts of the application can be adapted. Adaptation takes place at runtime by means of rules, which can change during the execution to tackle possibly unforeseen adaptation needs. AIOCJ relies on a solid theory that ensures applications to be deadlock-free by construction also after adaptation. We describe the architecture of AIOCJ, the design of the AIOC language, and an empirical validation of the framework.
1 Introduction
Adaptation is a main feature of current distributed applications, which should live for a long time in a continuously changing environment. Anticipating all the possible adaptation needs when designing an application is very difficult, thus approaches able to cope with unforeseen adaptation needs are the most interesting ones. Also, for distributed applications like the ones that we consider, it is important to ensure deadlock-freedom (according to [1], about one third of concurrency bugs in real applications are deadlocks). While many techniques ensuring deadlock freedom exist in the literature, e.g., [2-4], to the best of our knowledge, none of them deals with adaptive applications. Indeed, most of the approaches to adaptation offer no guarantee on the behaviour of the application after adaptation [5-7], or they assume to know all the possible adaptations in advance [8], thus failing to cope with unforeseen adaptation needs.
Here we present AIOCJ, a prototype implementation of a framework for programming adaptive distributed applications that guarantees deadlock-freedom by construction (the theoretical foundations ensuring this property are discussed in [9]). AIOCJ is composed of two parts: (i) a domain-specific language, called Adaptive Interaction-Oriented Choreographies (AIOC) and (ii) an adaptation middleware that supports adaptation of AIOC programs.
The AIOC language describes applications from a global point of view following the choreography paradigm. This paradigm has been applied in different contexts, see, e.g., [10-13], but we are not aware of other tools based on it and targeting adaptive applications. A choreography defines the interactions among the processes of a distributed application. AIOC's main innovation consists of two constructs supporting
adaptation: *scopes* and *adaptation rules*. A scope delimits code that may be adapted in the future. An adaptation rule provides new code to replace the one in a given scope. Interestingly, in AIOCJ, adaptation rules can be defined and inserted in the framework while the application is running, to cope with adaptation needs which were not foreseen when the application was designed or even started.
The code below shows a toy AIOC program (left) and an adaptation rule applicable to it (right). On the left, Lines 2-4 define a scope in which the local variable `msg` of process `user` is set to "Hello World". The keyword `prop` defines properties of the scope (prefixed by `N`). In this case, the `name` property is set to "hello_world". At Line 5 `user` sends the content of `msg` to a second process (`display`), that stores it in its local variable `msg`. On the right, Lines 2-3 define the *applicability condition* of the rule, i.e., the `name` property of the scope should be set to "hello_world", and the environmental property `E.lang` should be equal to "it". Line 4 shows the code that will replace the one of the scope, i.e., variable `msg` of `user` will be set to "Ciao Mondo" (Italian for “Hello World”).
```
1 aioc {
2 scope @user{
3 msg@user = "Hello World"
4 } prop { N.name = "hello_world" };
5 send: user( msg ) -> display( msg ) }
1 rule {
2 on { N.name == "hello_world"
3 and E.lang == "it" }
4 do { msg@user = "Ciao Mondo" }
5 }
```
An AIOC program describes a full distributed application. AIOCJ generates, for each distributed process, a service written in Jolie [14, 15], a Service-Oriented orchestration language.
The adaptation middleware consists of a set of adaptation rules stored in multiple, possibly distributed, *adaptation servers*, and of an *adaptation manager* that mediates the interactions between the adaptive application and the various adaptation servers.
Structure of the paper: Section 2 presents an overview of the AIOCJ framework, while Section 3 describes its implementation. Section 4 shows a preliminary validation of the framework, with tests on the performances of AIOCJ. In Section 5 we discuss related work and future directions of research. A short demo of the use of the framework is available in the companion technical report [16].
2 Overview: The AIOCJ Framework
This section first defines the architectural model that supports adaptation of AIOCJ applications and then it introduces the syntax of the AIOC language via an example (for a formal presentation of AIOC syntax and semantics see [9]).
The AIOCJ Middleware. We consider applications composed of processes deployed as services on different localities, including local state and computational resources. Each process has a specific duty in the choreography and follows a given protocol. Processes interact via synchronous message passing over channels, also called *operations*. Adaptation is performed by an adaptation middleware including an adaptation manager and some, possibly distributed, adaptation servers. The latter are services that act as repositories of adaptation rules and may be (manually) added or removed at runtime. Running adaptation servers register themselves on the adaptation manager. The running application may interact with the adaptation manager to look for applicable adaptation rules. The effect of an adaptation rule is to replace a scope with new code that answers a given
adaptation need. The adaptation manager checks the rules according to the registration order of the adaptation servers, returning the first applicable rule, if any.
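The selection policy just described can be modelled as a first-match search over the servers in registration order. The following is a JavaScript sketch of the policy only (AIOCJ itself is implemented in Jolie, and the data shapes used here are assumptions); a rule's applicability condition is modelled as a predicate over the scope's properties and the environment, mirroring the `on { ... }` clause of adaptation rules.

```javascript
// Each server exposes a list of rules; servers are tried in
// registration order and the first applicable rule wins.
function findApplicableRule(servers, scopeProps, env) {
  for (var i = 0; i < servers.length; i++) {        // registration order
    var rules = servers[i].rules;
    for (var j = 0; j < rules.length; j++) {
      if (rules[j].applies(scopeProps, env)) {
        return rules[j];                            // first match wins
      }
    }
  }
  return null;  // no applicable rule: the scope keeps its current code
}
```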
**The AIOC Language.** The language relies on a set of roles that identify the processes in the choreography. Let us introduce the syntax of the language using an example where Bob invites Alice to see a film.
```plaintext
#include isFreeDay from "calendar.org:80" with http
#include getTicket from "cinema.org:8000" with soap
preamble {
starter: bob
location@bob = "socket://localhost:8000"
location@alice = "socket://alice.com:8000"
location@cinema = "socket://cinema.org:8001"
} aioc {
end@bob = false;
while ( !end )@bob{
scope@bob {
free_day@bob = getInput( "Insert your free day" );
proposal: bob( free_day ) -> alice( bob_free_day );
is_free@alice = isFreeDay( bob_free_day );
} prop { N.scope_name = "matching day" };
if( is_free )@alice {
scope@bob {
proposal: bob( "cinema" ) -> alice( event );
agreement@alice = getInput( "Bob proposes " + event + ", do you agree?[y/n]");
if( agreement == "y" )@alice{
end@bob = true;
book: bob( bob_free_day ) -> cinema( book_day );
ticket@cinema = getTicket( book_day );
{ notify: cinema( ticket ) -> bob( ticket )
| notify: cinema( ticket ) -> alice( ticket ) }
} prop { N.scope_name = "event selection" };
}
}
if( !end )@bob {
r@bob = getInput( "Alice refused. Try another date?[y/n]" );
if( r != "y" )@bob{ end@bob = true }
}
}
```
Listing 1.1. Appointment program
The code starts with some deployment information (Lines 1-7), discussed later on. The behaviour starts at Line 9. The program is made of a cycle where Bob first checks when Alice is available and then invites her to the cinema. Before starting the cycle, Bob initialises the variable `end`, used in the guard of the cycle, to the boolean value `false` (Line 9). Note the annotation `@bob`, meaning that `end` is local to Bob. The first instructions of the while are enclosed in a scope (Lines 11-15), meaning that they may be adapted in the future. The first operation within the scope is the call to the primitive function `getInput` that asks Bob for a day when he is free and stores this date in the local variable `free_day`. At Line 13 the content of `free_day` is sent to Alice via operation `proposal`. Alice stores it in her local variable `bob_free_day`. Then, at Line 14, Alice calls the external function `isFreeDay` that checks whether she is available on `bob_free_day`. If she is available (Line 16), then Bob sends her the invitation to go to the cinema via operation `proposal`. Alice, reading from the input, accepts or refuses the invitation (Line 19). If Alice accepts, then Bob first sets the variable `end` to `true` to end the cycle. Then, he sends the booking request to the cinema via operation `book`. The cinema generates the tickets using the external function `getTicket` and sends them to Alice and Bob via operation `notify`. The two notifications are done in parallel using the parallel operator `|` (until now we composed statements
using the sequential operator `;`). Lines 18-26 are enclosed in a second scope with property `N.scope_name = "event selection"`. If the agreement is not reached, Bob decides, reading from the input, whether he wants to stop inviting Alice. If so, the program exits.
We remark the different meanings of the annotations `@bob` and `@alice`. When prefixed by a variable, they identify the owner of the variable. Prefixed by the boolean guard of conditionals and cycles, they identify the role that evaluates the guard. Prefixed by the keyword `scope`, they identify the process coordinating the adaptation of that scope. A scope, besides the code, may also include some properties describing the current implementation. These can be specified using the keyword `prop` and are prefixed by `N`. For instance, each scope of the example includes the property `scope_name`, which can be used to distinguish its functionality.
AIOCJ can interact with external services, seen as functions. This allows both to interact with real services and to have easy access to libraries from other languages. To do that, one must specify the address and protocol used to interact with them. For instance, the external function `isFreeDay` used in Line 14 is associated to the service deployed at the domain "calendar.org", reachable through port 80, and that uses http as serialisation protocol (Line 1). External functions are declared with the keyword `include`. To preserve deadlock freedom, external services must be non-blocking. After function declaration, in a `preamble` section, it is possible to declare the locations where processes are deployed. The keyword `starter` is mandatory and defines which process must be started first. The starter makes sure all other processes are ready before the execution of the choreography begins.
Now suppose that Bob, during summer, prefers to invite Alice to a picnic rather than to the cinema, provided that the weather forecasts are good. This can be obtained by adding the following adaptation rule to one of the adaptation servers. This may even be done while the application is running, e.g., while Bob is sending an invitation. In this case, if Bob's first try is unsuccessful, in the second try he will propose a picnic.
```
rule {
  include getWeather from "socket://localhost:8002"
  on { N.scope_name == "event selection" and E.month > 5 and E.month < 10 }
  do { forecasts@bob = getWeather( free_day );
    if( forecasts == "Clear" )@bob{
      eventProposal: bob( "picnic" ) -> alice( event )
    } else {
      eventProposal: bob( "cinema" ) -> alice( event )
    };
    agreement@alice = getInput( "Bob proposes " + event + ", do you agree?[y/n]" );
    if( agreement == "y" )@alice {
      end@bob = true |
      if( event == "cinema" )@alice {
        //cinema tickets purchase procedure
      }
    }
  }
}
```
\textbf{Listing 1.2.} Event selection adaptation rule
A rule specifies its applicability condition and the new code to execute. In general, the applicability condition may depend only on properties of the scope, environment variables, and variables belonging to the coordinator of the scope. In this case, the condition, introduced by the keyword `on` (Line 3), makes the rule applicable to scopes having the property `scope_name` equal to the string `"event selection"` and only during summer. This last check relies on an environment variable `month` that contains the current month. Environment variables are prefixed by `E`.
When the rule applies, the new code to execute is defined using the keyword `do` (Line 4). In this case, the forecasts can be retrieved calling an external function `getWeather` (Line 4) that queries a weather forecasts service. This function is declared in Line 2. If the weather is clear, Bob proposes to Alice a picnic, the cinema otherwise. Booking (as in Listing 1.1 Lines 23-26) is needed only if Alice accepts the cinema proposal.
As detailed in [9], to obtain a deadlock-free application, we require the code of choreographies and rules to satisfy a well-formedness syntactic condition called *connectedness*. Intuitively, connectedness ensures that sequences of actions are executed in the correct order and avoids interference between parallel interactions. Requiring this condition does not hamper programmability, since it naturally holds in most of the cases, and it can always be enforced automatically via small patches to the choreography which preserve the behaviour of the program, as discussed in [17]. Also, checking connectedness is efficient, i.e., polynomial w.r.t. the size of the code [9].
## 3 Implementation
Our prototype implementation of AIOCJ is composed of two elements: the AIOCJ Integrated Development Environment (IDE), named AIOCJ-ecl, and the adaptation middleware that enables AIOC programs to adapt, called AIOCJ-mid.
AIOCJ-ecl is a plug-in for Eclipse [18] based on Xtext [19]. Xtext provides features such as syntax highlighting, syntax checking, and code completion, which help developers in writing choreographies and adaptation rules. Also, starting from a grammar, Xtext generates the parser for programs written in the AIOC language. Result of the parsing is an abstract syntax tree (AST) we use to implement (i) the checker for connectedness for choreographies and rules and (ii) the generation of Jolie code for each role. The connectedness check has polynomial computational complexity [9] thus making it efficient enough to be performed on-the-fly while editing the code.
The target language of code generation is Jolie [14]. Jolie supports architectural primitives such as dynamic embedding, aggregation, and redirection that we exploit to implement the adaptation mechanisms. Moreover, Jolie supports a wide range of communication technologies (TCP/IP sockets, local memory, Bluetooth) and of data formats (e.g., HTTP, SOAP, JSON). AIOCJ inherits this ability. The compilation generates a Jolie service for each role. The execution of scopes is delegated to sub-services accessed using Jolie redirection facility. Adaptation is enacted by disabling the current sub-service and replacing it with a new one, obtained from the adaptation server. To grant to all the sub-services access to variables, the state is stored by a dedicated sub-service local to the role. Auxiliary messages are exchanged to ensure that both the adaptation and the choices taken by the `if` and `while` constructs are done in a coordinated way. In particular, the scope execution not only requires interaction with the adaptation manager, but also communications among the different roles, ensuring that they all agree on whether adaptation is needed or not, and, in case, on which rule to apply. Indeed, the decision is taken by the role coordinating the adaptation and then communicated to other roles. Note that the different roles cannot autonomously take the decision, since if they take it at different times, changes in the environment or in the sets of available rules may lead to inconsistent decisions.
Synchronous message exchange is implemented on top of an asynchronous communication middleware by a sub-service that works as a message handler. The message handler of the starter role also ensures that, before the actual communication in the choreography starts, all the roles are ready.
AIOCJ-mid is implemented in Jolie and it includes:
- many, possibly distributed, adaptation servers where rules are published. Adaptation servers can be deployed and switched on and off at runtime;
- an adaptation manager that acts as a registry for adaptation servers and clients;
- an environment service that stores and makes available environment information. Environment information can change at any moment.
When an AIOCJ program reaches a scope, it queries the adaptation manager for a rule matching that scope. The adaptation manager queries each adaptation server sequentially, based on their order of registration. Each server checks the applicability condition of each of its rules. The first rule whose applicability condition holds is applied. In particular, the code of the rule is sent to the role coordinating the adaptation (via the adaptation manager) which distributes it to the involved roles. In each role, the new code replaces the old one. The study of more refined policies for rule selection, e.g., based on priorities, is a topic for future work.
4 Validation
In this section, we give a preliminary empirical validation of our implementation. The main aim is to test how our mechanisms for adaptation impact on performances.
In the literature, to the best of our knowledge, there is no approach to adaptation based on choreography programming. Thus, it is difficult to directly compare our results with other existing approaches. Moreover, we are not aware of any established benchmark to evaluate adaptive applications. For this reason, we tested AIOCJ performances by applying it to two typical programming patterns: pipes and fork-joins. Since we are interested in studying the cost of adaptation, our scenarios contain minimal computation and are particularly affected by the overhead of the adaptation process. Clearly, the percentage of the overhead due to adaptation will be far lower in real scenarios, which are usually more computationally intensive. In the first scenario, we program a pipe executing \( n \) tasks (in a pipe, the output of task \( t_i \) is given as input to task \( t_{i+1} \), for \( i \in \{1, \ldots, n-1\} \)). To keep computation to a minimum, each task simply computes the increment function. In the fork-join scenario, \( n \) tasks are computed in parallel. Each task processes one character of a message of length \( n \), shifting it by one position. The message is stored in an external service\(^1\).
To enable adaptation, each task is enclosed in a scope. We test both scenarios with an increasing number of tasks \( n \in \{10, 20, \ldots, 100\} \) to study how performances scale as the number of adaptation scopes increases. We evaluate performances in different contexts, thus allowing us to understand the impact of different adaptation features, such as scopes, adaptation servers, and adaptation rules.
\(^1\) The code of both scenarios is in the companion technical report [16].
**Fig. 1.** Times of execution of the pipe (left) and the fork-join (right) scenarios
**Context 1:** no scopes, no adaptation servers, no rules;
**Context 2:** each task is enclosed in a scope, no adaptation servers, no rules;
**Context 3:** each task is enclosed in a scope, one adaptation server, no rules;
**Context 4:** as Context 3, but now the adaptation server contains 50 rules. Each rule is applicable to a unique scope $i$, and no rule is applicable to scopes with $i > 50$. The rules are stored in random order.
**Context 5:** as Context 4, but with 100 rules, one for each scope.
Each rule in Contexts 4 and 5 is applicable to one specific scope only (through a unique property of the scope), hence when testing for 50 rules, only the first 50 scopes adapt.
We repeated every test 5 times. We performed our tests on a machine equipped with a 2.6GHz quad-core Intel Core i7 processor and 16GB RAM. The machine runs Mavericks 10.9.3, Java 1.7.55, and Jolie r.2728. Figure 1 shows the tests for the pipe (left) and the fork-join (right). Both charts display on the x-axis the number of tasks/scopes and on the y-axis the execution time in milliseconds.
As expected, in both scenarios there is a significant gap between Contexts 1 and 2. In other words, the introduction of scopes has a strong effect on performance: the ratio is 1:13 for the pipe scenario and 1:5.5 for the fork-join scenario. This is due to the auxiliary communications needed to correctly execute a scope. The observed overhead is higher in the pipe scenario, since different scopes check for adaptation in sequence, while this is done in parallel in the fork-join scenario.
Adding an adaptation server (from Context 2 to Context 3) has little impact on performance: a 19% slowdown for the pipe, and 17% for the fork-join. These figures are reasonable, considering that Context 3 adds only one communication with respect to Context 2.
On the contrary, there is a notable difference when adding rules to the adaptation server (Context 4 is 1.4 times slower than Context 3 for the pipe scenario, 2.9 times for the fork-join scenario). In Contexts 4 and 5, performances are very close up to 50 scopes (in the pipe scenario they almost overlap), although Context 5 has twice the rules of Context 4. This shows that the time needed to test rules for applicability is negligible. Hence, the highest toll on performance comes from actual adaptation, since it requires transferring and embedding the new code. This is particularly evident in the fork-join scenario, where multiple adaptations are executed in parallel and the adaptation server becomes a bottleneck. This problem can be mitigated using multiple distributed adaptation servers.
The fact that the most expensive operations are scope execution and actual adaptation is also highlighted by the results below. The table shows the cost of different primitives, including scopes in different contexts. Times refer to 5 executions of the sample code in the companion technical report [16].
<table>
<thead>
<tr>
<th>Test</th>
<th>Time (ms)</th>
<th>Test</th>
<th>Time (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>assignment</td>
<td>2.2</td>
<td>scope, 1 adaptation server, 1 matching rule</td>
<td>280.6</td>
</tr>
<tr>
<td>interaction</td>
<td>4.2</td>
<td>scope, 1 adaptation server, 50 rules, none matching</td>
<td>254.2</td>
</tr>
<tr>
<td>if statement</td>
<td>16.6</td>
<td>scope, 1 adaptation server, 50 rules, 1 matching</td>
<td>338.6</td>
</tr>
<tr>
<td>scope, no adaptation server</td>
<td>129.4</td>
<td>scope, 1 adaptation server, 100 rules, none matching</td>
<td>310.2</td>
</tr>
<tr>
<td>scope, 1 adaptation server, no rule</td>
<td>203.8</td>
<td>scope, 1 adaptation server, 100 rules, 1 matching</td>
<td>385</td>
</tr>
</tbody>
</table>
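Subtracting adjacent rows gives a rough breakdown of where the time goes (values copied from the table above; the decomposition is only indicative, since the rows come from separate runs):

```python
# Average times (ms) copied from the table above.
scope_plain  = 129.4  # scope, no adaptation server
scope_server = 203.8  # scope, 1 adaptation server, no rule
scope_match  = 280.6  # scope, 1 adaptation server, 1 matching rule

# Extra cost of contacting the adaptation server (one added communication).
server_overhead = scope_server - scope_plain
# Extra cost of an actual adaptation (transferring and embedding new code).
adapt_overhead = scope_match - scope_server

print(round(server_overhead, 1), round(adapt_overhead, 1))  # 74.4 76.8
```

Both increments are of similar magnitude, consistent with the observation that scope execution and actual adaptation dominate.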
As future work we will exploit these results to improve the performance of our framework, concentrating on the bottlenecks highlighted above. For instance, scope execution (as well as conditionals and cycles) currently requires many auxiliary communications to ensure that all the processes agree on the chosen path. In many cases, some of these communications are not needed, since a process will eventually discover the chosen path from the protocol communications. Static analysis can detect such redundant communications and remove them. Another improvement is letting the adaptation server send the new code directly to the involved roles, skipping the current forwarding chain.
5 Related Work and Conclusion
This paper presented a framework for programming rule-based adaptation of distributed applications. Its distinctive trait is that, being based on a choreographic approach, it guarantees deadlock-freedom by construction for the running distributed application, even in the presence of adaptation rules which were unknown when the application was started, and for any environment condition.
Adaptation is a hot topic, and indeed there is a plethora of approaches in the literature, see, e.g., the surveys [20, 21]. However, approaches based on formal methods have only emerged recently and few of them have been implemented in a working tool. In particular, the use of choreographies to capture and define adaptive applications is a novel idea. For a discussion of works on adaptation with formal bases, but which have not been implemented, we refer to [9]. Here, we just recall [22], which exploits a choreographic approach for self-adaptive monitoring of distributed applications.
Among the implemented approaches, the most related to ours is JoRBA [5]. JoRBA features scopes and adaptation rules similar to ours. However, JoRBA applications are not distributed and JoRBA does not guarantee any property of the adapted application.
In [23] choreographies are used to propagate protocol changes to the other peers, while [24] presents a test to check whether a set of peers obtained from a choreography
can be reconfigured to match a second one. Unlike ours, these works only provide change recommendations for adding and removing message sequences.
Various tools [25–27] exploit automatic planning techniques in order to elaborate, at runtime, the best sequence of activities to achieve a given goal. These techniques are more declarative than ours, but, to the best of our knowledge, they are not guaranteed to always find a plan to adapt the application.
Among the non-adaptive languages, Chor [2] is the closest to ours. Indeed, like ours, Chor is a choreographic language that compiles to Jolie. Actually, AIOCJ shares part of the Chor code base. However, due to the different semantics of the sequential operator and the lack of the parallel composition in Chor, a faithful encoding of the scenarios in Section 4 is not possible, especially for the fork-join scenario. On an almost equivalent implementation of the pipe scenario, Chor proves to be more efficient than AIOCJ.
In the future, we would like to test the expressive power of our language by trying to encode patterns of adaptation from existing approaches. An obvious benefit of such an encoding is that it would capture patterns of adaptation used in real-world scenarios while also guaranteeing deadlock freedom, which is not provided by other approaches. This task is cumbersome, due to the huge number and heterogeneity of those approaches. Nevertheless, we have already started it. In particular, on the website [28], we show how to encode examples coming from distributed [29] and dynamic [30] Aspect-Oriented Programming (AOP) and from Context-Oriented Programming (COP) [31]. In general, we can deal with cross-cutting concerns like logging and authentication, typical of AOP, by viewing point-cuts as empty scopes and advices as adaptation rules. Layers, typical of COP, can instead be defined by adaptation rules which fire according to contextual conditions captured by the environment. Possible extensions of our framework include the use of asynchronous communications in the AIOC language and the introduction of mechanisms to deal with exceptions and failures. Finally, we would like to pursue a systematic analysis of workflow change patterns like the ones presented in [32,33], showing how these patterns are captured by AIOCJ.
References
A Additional Implementation Details
Warming-up the KL Term: Following previous work, we warm up the KL term at the beginning of training\(^{43}\). Formally, we optimize the following objective:
\[
\mathbb{E}_{q(z|x)} \left[ \log p(x|z) \right] - \beta \text{KL}(q(z|x) || p(z)),
\]
where \(\beta\) is annealed from 0 to 1 at the first 30% of training.
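A minimal sketch of this warm-up schedule (the function name and the step-based parametrization are ours):

```python
def kl_beta(step, total_steps, warmup_frac=0.3):
    """KL coefficient beta: annealed linearly from 0 to 1 over the first
    `warmup_frac` of training, then held at 1."""
    return min(1.0, step / (warmup_frac * total_steps))

# beta reaches 1 at 30% of training and stays there
print([kl_beta(s, 1000) for s in (0, 150, 300, 900)])  # [0.0, 0.5, 1.0, 1.0]
```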
Balancing the KL Terms: In hierarchical VAEs, the KL term is defined by:
\[
\text{KL}(q(z|x)||p(z)) = \sum_{l=1}^{L} \mathbb{E}_{q(z_{<l}|x)} \left[ \text{KL}(q(z_{l}|z_{<l}) || p(z_{l}|z_{<l})) \right],
\]
where each \(\text{KL}(q(z_{l}|z_{<l}) || p(z_{l}|z_{<l}))\) can be thought of as the amount of information encoded in the \(l^{th}\) group. In deep hierarchical VAEs, during training, some groups of latent variables can easily become deactivated by matching the approximate posterior with the prior (i.e., posterior collapse). One simple solution is to use KL balancing coefficients\(^{20,66}\) to ensure that an equal amount of information is encoded in each group using:
\[
\text{KL}(q(z|x)||p(z)) = \sum_{l=1}^{L} \gamma_{l} \mathbb{E}_{q(z_{<l}|x)} \left[ \text{KL}(q(z_{l}|z_{<l}) || p(z_{l}|z_{<l})) \right].
\]
The balancing coefficient \(\gamma_{l}\) is set to a small value when the KL term is small for that group to encourage the model to use the latent variables in that group, and it is set to a large value when the KL term is large. The KL balancing coefficients are only applied during the KL warm-up period, and they are set to 1 afterwards to ensure that we optimize the variational bound. DVAE++\(^{20}\) sets \(\gamma_{l}\) proportional to \(\mathbb{E}_{x \sim \mathcal{M}} \left[ \mathbb{E}_{q(z_{<l}|x)} \left[ \text{KL}(q(z_{l}|z_{<l}) || p(z_{l}|z_{<l})) \right] \right] \) in each parameter update using the batch \(\mathcal{M}\). However, since we have latent variable groups in different scales (i.e., spatial dimensions), we observe that setting \(\gamma_{l}\) proportional to also the size of each group performs better, i.e., \(\gamma_{l} \propto s_{l} \mathbb{E}_{x \sim \mathcal{M}} \left[ \mathbb{E}_{q(z_{<l}|x)} \left[ \text{KL}(q(z_{l}|z_{<l}) || p(z_{l}|z_{<l})) \right] \right] \).
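A sketch of how such coefficients could be computed per batch. The paper states only the proportionality \(\gamma_l \propto s_l \, \mathbb{E}[\text{KL}_l]\); the normalization (coefficients summing to the number of groups) is our choice for illustration:

```python
def kl_balancing_coeffs(kl_per_group, group_sizes):
    """Balancing coefficients gamma_l proportional to s_l * E[KL_l].
    Normalized here so the coefficients sum to the number of groups;
    the paper states only the proportionality."""
    raw = [s * kl for s, kl in zip(group_sizes, kl_per_group)]
    scale = len(raw) / sum(raw)
    return [scale * r for r in raw]

# A group that already encodes more information (larger KL) receives a
# larger coefficient, i.e., a stronger penalty, pushing information
# toward the less active groups.
gammas = kl_balancing_coeffs([0.1, 0.4, 0.5], group_sizes=[1, 1, 1])
print([round(g, 2) for g in gammas])  # [0.3, 1.2, 1.5]
```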
Annealing \(\lambda\): The coefficient of the smoothness loss \(\lambda\) is set to a fixed value in \(\{10^{-2}, 10^{-1}\}\) for almost all the experiments. We used \(10^{-1}\) only when training was unstable at \(10^{-2}\). However, on CelebA HQ and FFHQ, we observe that training is initially unstable unless \(\lambda \in \{1, 10\}\), which applies a very strong smoothness. For these datasets, we anneal \(\lambda\) with exponential decay from 10 to the small value shown in Table 5, over the same number of iterations used to anneal the KL coefficient. Note that the smoothness loss is applied to both encoder and decoder. We hypothesize that a sharp decoder may require a sharp encoder, causing more instability in training.
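The exponential-decay schedule can be sketched as follows (parameter names are ours):

```python
def smoothness_lambda(step, anneal_steps, lam_start=10.0, lam_end=0.01):
    """Exponential decay of the smoothness coefficient lambda from
    lam_start down to lam_end over anneal_steps, then held constant."""
    if step >= anneal_steps:
        return lam_end
    return lam_start * (lam_end / lam_start) ** (step / anneal_steps)

print(smoothness_lambda(0, 100), smoothness_lambda(100, 100))  # 10.0 0.01
```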
Weight Normalization (WN): WN cannot be used with BN as BN removes any scaling of weights introduced by WN. However, previous works have seen improvements in using WN for VAEs. In NVAE, we apply WN to any convolutional layer that is not followed by BN, e.g., convolutional layers that produce the parameters of Normal distributions in encoder or decoder.
Inverse Autoregressive Flows (IAFs): We apply simple volume-preserving normalizing flows of the form \(z' = z + b(z)\) to the samples generated by the encoder at each level, where \(b(z)\) is produced by an autoregressive network. In each flow operation, the autoregressive network is created using a cell similar to Fig. 4(a) with the masking mechanism introduced in PixelCNN\(^{41}\). In the autoregressive cell, BN is replaced with WN, and SE is omitted, as these operations break the autoregressive dependency. We initially examined non-volume-preserving affine transformations of the form \(z' = a(z) \odot z + b(z)\), but we did not observe any improvements. Similar results are reported by Kingma et al.\(^{4}\) (see Table 3).
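A toy illustration of why \(z' = z + b(z)\) preserves volume. Here a strictly-lower-triangular linear map stands in for the masked autoregressive network (the real \(b\) is a PixelCNN-style masked network):

```python
def volume_preserving_flow(z, W):
    """z' = z + b(z) with b autoregressive: b_i depends only on z_j for
    j < i. Here b is a toy strictly-lower-triangular linear map; the
    Jacobian I + W is unit lower triangular, so its determinant is 1
    and the transformation preserves volume."""
    return [z_i + sum(W[i][j] * z[j] for j in range(i))
            for i, z_i in enumerate(z)]

z = [1.0, 2.0, 3.0]
W = [[0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0],
     [0.1, 0.2, 0.0]]
z_out = volume_preserving_flow(z, W)
print(z_out)  # [1.0, 2.5, 3.5]
```

Because the log-determinant of the Jacobian is zero, such flows add no extra term to the variational bound.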
Optimization: For all the experiments, we use the AdaMax\(^{81}\) optimizer for training with the initial learning rate of 0.01 and with cosine learning rate decay. For FFHQ experiments, we reduce the learning rate to 0.008 to further stabilize the training.
Image Decoder \(p(x|z)\): For all the datasets but MNIST, we use the mixture of discretized Logistic distribution\(^{70}\). In MNIST, we use a Bernoulli distribution. Note that in all the cases, our decoder is unconditional across the spatial locations in the image.
Evaluation: For estimating log-likelihood on the test datasets in evaluation, we use importance weighted sampling using the encoder\(^{11}\). We use 1000 importance weighted samples for evaluation.
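The importance-weighted estimate is a log-sum-exp over per-sample log weights; a minimal stdlib sketch (function name is ours):

```python
import math

def iw_log_likelihood(log_weights):
    """Importance-weighted estimate log((1/K) * sum_k w_k), computed
    stably via log-sum-exp; log_weights[k] = log p(x, z_k) - log q(z_k|x)."""
    m = max(log_weights)
    return (m + math.log(sum(math.exp(lw - m) for lw in log_weights))
              - math.log(len(log_weights)))

# With identical weights the estimate reduces to the 1-sample bound.
print(round(iw_log_likelihood([-2.0, -2.0, -2.0]), 6))  # -2.0
```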
Table 6: A summary of hyperparameters used in training NVAE with additional information. \( D^2 \) indicates a latent variable with the spatial dimensions of \( D \times D \). As an example, the MNIST model consists of 15 groups of latent variables in total, covering two different scales. In the first scale, we have five groups of \( 4 \times 4 \times 20 \)-dimensional latent variables (in the form of height×width×channel). In the second scale, we have 10 groups of \( 8 \times 8 \times 20 \)-dimensional variables.
<table>
<thead>
<tr>
<th>Hyperparameter</th>
<th>MNIST 28×28</th>
<th>CIFAR-10 32×32</th>
<th>ImageNet 32×32</th>
<th>CelebA 64×64</th>
<th>CelebA HQ 256×256</th>
<th>FFHQ 256×256</th>
</tr>
</thead>
<tbody>
<tr>
<td># epochs</td>
<td>400</td>
<td>400</td>
<td>45</td>
<td>90</td>
<td>300</td>
<td>200</td>
</tr>
<tr>
<td>batch size per GPU</td>
<td>200</td>
<td>32</td>
<td>24</td>
<td>16</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td># normalizing flows</td>
<td>0</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td># latent variable scales</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>3</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td># groups in each scale</td>
<td>5, 10</td>
<td>30</td>
<td>28</td>
<td>5, 10, 20</td>
<td>4, 4, 4, 8, 16</td>
<td>4, 4, 4, 8, 16</td>
</tr>
<tr>
<td>spatial dims of ( z ) in each scale</td>
<td>( 4^2, 8^2 )</td>
<td>( 16^2 )</td>
<td>( 16^2 )</td>
<td>( 8^2, 16^2, 32^2 )</td>
<td>( 8^2, 16^2, 32^2, 64^2, 128^2 )</td>
<td>( 8^2, 16^2, 32^2, 64^2, 128^2 )</td>
</tr>
<tr>
<td># channel in ( z )</td>
<td>20</td>
<td>20</td>
<td>20</td>
<td>20</td>
<td>20</td>
<td>20</td>
</tr>
<tr>
<td># initial channels in enc.</td>
<td>32</td>
<td>128</td>
<td>192</td>
<td>64</td>
<td>30</td>
<td>30</td>
</tr>
<tr>
<td># residual cells per group</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>( \lambda )</td>
<td>0.01</td>
<td>0.1</td>
<td>0.01</td>
<td>0.1</td>
<td>0.01</td>
<td>0.1</td>
</tr>
<tr>
<td>GPU type</td>
<td>16-GB V100</td>
<td>16-GB V100</td>
<td>32-GB V100</td>
<td>16-GB V100</td>
<td>32-GB V100</td>
<td>32-GB V100</td>
</tr>
<tr>
<td># GPUs</td>
<td>2</td>
<td>8</td>
<td>24</td>
<td>8</td>
<td>24</td>
<td>24</td>
</tr>
<tr>
<td>total train time (h)</td>
<td>21</td>
<td>55</td>
<td>70</td>
<td>92</td>
<td>94</td>
<td>160</td>
</tr>
</tbody>
</table>
* A smaller model with 24 initial channels instead of 32, could be trained on only 8 GPUs in the same time (with the batch size of 6). The smaller models obtain only 0.01 bpd higher negative log-likelihood on these datasets.
**Channel Sizes:** We only set the initial number of channels in the bottom-up encoder. When we downsample the features spatially, we double the number of channels in the encoder. The number of channels is set in the reverse order for the top-down model.
**Expansion Ratio \( E \):** The depthwise residual cell in Fig. 3a requires setting an expansion ratio \( E \). We use \( E = 6 \) similar to MobileNetV2 [46]. In a few cells, we set \( E = 3 \) to reduce the memory. Please see our code for additional details.
**Datasets:** We examine NVAE on the dynamically binarized MNIST [72], CIFAR-10 [73], ImageNet 32×32 [74], CelebA 64×64 [75, 76], CelebA HQ [28], and FFHQ 256×256 [77]. For all the datasets but FFHQ, we follow Glow [62] for the train and test splits. In FFHQ, we use 63K images for training, and 7K for test. Images in FFHQ and CelebA HQ are downsampled to 256×256 pixels, and are quantized in 5 bits per pixel/channel to have a fair comparison with prior work [62].
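Assuming the usual convention of keeping the 5 most significant bits per pixel/channel, the quantization step amounts to:

```python
def quantize_5bit(pixel):
    """Map an 8-bit pixel value (0..255) to 5 bits (0..31) by dropping
    the 3 least significant bits, following the Glow-style preprocessing
    assumed here."""
    return pixel >> 3

print([quantize_5bit(v) for v in (0, 7, 8, 255)])  # [0, 0, 1, 31]
```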
**Hyperparameters:** Given a large number of datasets and the heavy compute requirements, we do not exhaustively optimize the hyperparameters. In our early experiments, we observed that the larger the model is, the better it performs. We often see improvements with wider networks, a larger number of hierarchical groups, and more residual cells per group. However, they also come with smaller training batch size and slower training. We set the number of hierarchical groups to around 30, and we used two residual cells per group. We set the remaining hyperparameters such that the model could be trained in no more than about a week. Table 6 summarizes the hyperparameters used in our experiments.
**B Additional Experiments and Visualizations**
In this section, we provide additional insights into NVAE.
**B.1 Is NVAE Memorizing the Training Set?**
In VAEs, since we can compute the log-likelihood on a held-out set, we can ensure that the model is not memorizing the training set. In fact, in our experiments, as we increase the model capacity (depth and width), we never observe any overfitting behavior, especially on the datasets with large images. In most cases, we stop making the model large because of compute and training time considerations. However, since the images generated by NVAE are realistic, this may raise the question of whether NVAE memorizes the training set.
Figure 6: Top retrieved images from the training set are visualized for samples generated by NVAE in each row. The generated instances do not exist in the training set (best seen when zoomed in).
In Fig. 6, we visualize a few samples generated by NVAE and the most similar images from the training data. For measuring the similarity, we downsample the images by $4 \times$, and we measure $L_2$ distance using the central crop of the images. Since the images are aligned, this lets us compare images using the most distinct facial features (eyes, nose, and mouth). As we can see, the sampled images are not present in the training set.
B.2 Changing the Temperature of the Prior in NVAE
It is common to lower the temperature of the prior when sampling from VAEs on challenging datasets. In Fig. 7, we examine different temperatures in the prior with different settings for the batch norm layers.
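Lowering the temperature simply scales the standard deviation of each conditional prior before sampling; a sketch (function name is ours):

```python
import random

def sample_with_temperature(mu, sigma, t, rng=random):
    """Draw z ~ N(mu, (t * sigma)^2): temperature t < 1 shrinks the
    standard deviation of the prior, trading diversity for fidelity."""
    return [m + t * s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

# t = 0 collapses every sample onto the mean.
print(sample_with_temperature([0.5, -1.0], [1.0, 2.0], t=0.0))  # [0.5, -1.0]
```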
B.3 Additional Generated Samples
In Fig. 8 and Fig. 9, we visualize additional generated samples by NVAE, trained on CelebA HQ. In these figures, we use higher temperatures ($t \in \{0.6, 0.7, 0.8, 0.9\}$), but we manually select the samples.
B.4 More on the Impact of Residual Normal Distributions
Fig. 10 visualizes the total number of active channels in all latent variables during training. Here, we compare the residual Normal distributions against the model that predicts the absolute parameters of the Normal distributions in the approximate posterior. This figure corresponds to the experiment that we reported in Table 4. As we can see, in the initial stage of training, the model without residual distributions turns off more latent variables.
Figure 7: Randomly sampled images from NVAE with different temperatures in the prior for the CelebA HQ dataset (best seen when zoomed in). In the batch normalization layers during sampling, we examine two settings: i) the default mode that uses the running averages from training (on the left), and ii) readjusted mode in which the running averages are re-tuned by sampling from the model 500 times with the given temperature (on the right). Readjusted BN statistics improve the diversity and quality of the images, especially for small temperatures.
Figure 8: Additional 256×256-pixel samples generated by NVAE, trained on CelebA HQ [28]. In this figure, we use higher temperatures ($t \in \{0.6, 0.7, 0.8, 0.9\}$), but we manually select the samples.
Figure 9: Additional $256 \times 256$-pixel samples generated by NVAE, trained on CelebA HQ [28]. In this figure, we use higher temperatures ($t \in \{0.6, 0.7, 0.8, 0.9\}$), but we manually select the samples.
Figure 10: The total number of active channels in $z$ is reported for two models with and without residual distributions. The model with residual distribution keeps more latent variables active in the KL warm-up phase (up to 8K iterations), and it achieves a better KL value at the end of the training (see Table 4).
B.5 Stabilizing the Training with Spectral Regularization
In our experiments, we came across many cases in which training was unstable due to the KL term, and it was stabilized by spectral regularization. Initially, instead of spectral regularization, we examined common approaches such as gradient clipping or limiting the parameters of the Normal distributions to a small range, but none could stabilize the training without negatively affecting the performance. Fig. 11 shows an experiment on the FFHQ dataset. The training is stabilized by increasing the spectral regularization coefficient ($\lambda$) from 0.1 to 1.0.
Figure 11: An example experiment on the FFHQ dataset. All the hyper-parameters are identical between the two runs. However, training is unstable due to the KL term in the objective. We stabilize the training by increasing the spectral regularization coefficient $\lambda$.
B.6 Long-Range Correlations
NVAE’s hierarchical structure is composed of many latent variable groups operating at different scales. For example, on CelebA HQ $256 \times 256$, the generative model consists of five scales. It starts from a spatially arranged latent variable group of the size $8 \times 8$ at the top, and it samples from the hierarchy group-by-group while gradually doubling the spatial dimensions up to $128 \times 128$.
A natural question to ask is what information is captured at different scales. In Fig. 12 we visualize how the generator’s output changes as we fix the samples at different scales. As we can see, the
global long-range correlations are captured mostly at the top of the hierarchy, and the local variations are recorded at the lower groups.
Figure 12: Where does our hierarchical model capture long-range correlations? NVAE on CelebA HQ consists of latent variable groups that are operating at five scales (starting from 8 × 8 up to 128 × 128). In each row, we fix the samples at a number of top scales and we sample from the rest of the hierarchy. As we can see, the long-range global structure is mostly recorded at the top of the hierarchy in the 8 × 8 dimensional groups. The second scale does apply some global modifications such as changing eyes, hair color, skin tone, and the shape of the face. The bottom groups capture mostly low-level variations. However, the lowest scale can still make some subtle long-range modifications. For example, the hair color is slightly modified when we are only sampling from the lowest scale in the last row. This is potentially enabled because of the large receptive field in our depthwise separable residual cell.
Principles of Eventual Consistency
Sebastian Burckhardt
Microsoft Research
sburckha@microsoft.com
Foundations and Trends® in Programming Languages
Published, sold and distributed by:
now Publishers Inc.
PO Box 1024
Hanover, MA 02339
United States
Tel. +1-781-985-4510
www.nowpublishers.com
sales@nowpublishers.com
Outside North America:
now Publishers Inc.
PO Box 179
2600 AD Delft
The Netherlands
Tel. +31-6-51115274
The preferred citation for this publication is
This Foundations and Trends® issue was typeset in LATEX using a class file designed by Neal Parikh. Printed on acid-free paper.
ISBN: 978-1-60198-859-1
© 2014 S. Burckhardt
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers.
Full text available at: http://dx.doi.org/10.1561/2500000011
Principles of Eventual Consistency
Sebastian Burckhardt
Microsoft Research
sburckha@microsoft.com
## Contents

- 5 Consistency
  - 5.1 Basic Eventual Consistency
  - 5.2 Causal Consistency
  - 5.3 Strong Models
  - 5.4 Hierarchy of Models
- 6 Implementations
  - 6.1 Overview
  - 6.2 Pseudocode Semantics
  - 6.3 Counters
  - 6.4 Stores
  - 6.5 Protocol Templates
- 7 Concrete Executions
  - 7.1 Transitions
  - 7.2 Trajectories
  - 7.3 Concrete Executions
  - 7.4 Observable History
  - 7.5 Infinite Executions
- 8 Protocols
  - 8.1 Role Automata
  - 8.2 Transport Guarantees
  - 8.3 Protocols
  - 8.4 Pseudocode Compilation
- 9 Implementability
  - 9.1 CAP
  - 9.2 Progress
- 10 Correctness
  - 10.1 Proof Structure
  - 10.2 Epidemic Protocols
  - 10.3 Broadcast Protocols
  - 10.4 Global-Sequence Protocols
- 11 Related Work
  - 11.1 Related Work
Abstract
In globally distributed systems, shared state is never perfect. When communication is neither fast nor reliable, we cannot achieve strong consistency, low latency, and availability at the same time. Unfortunately, abandoning strong consistency has wide ramifications. Eventual consistency, though attractive from a performance viewpoint, is challenging to understand and reason about, both for system architects and programmers. To provide robust abstractions, we need not just systems, but also principles: we need the ability to articulate what a consistency protocol is supposed to guarantee, and the ability to prove or refute such claims.
In this tutorial, we carefully examine both the what and the how of consistency in distributed systems. First, we deconstruct consistency into individual guarantees relating the data type, the conflict resolution, and the ordering, and then reassemble them into a hierarchy of consistency models that starts with linearizability and gradually descends into sequential, causal, eventual, and quiescent consistency. Second, we present a collection of consistency protocols that illustrate common techniques, and include templates for implementations of arbitrary replicated data types that are fully available under partitions. Third, we demonstrate that our formalizations serve their purpose of enabling proofs and refutations, by proving both positive results (the correctness of the protocols) and a negative result (a version of the CAP theorem for sequential consistency).
DOI: 10.1561/2500000011.
## 1 Introduction

As our use of computers relies more and more on a complex web of clients, networks, and services, the challenges of programming a distributed system become relevant to an ever expanding number of programmers. Providing good latency and scalability while tolerating network and node failures is often very difficult to achieve, even for expert architects. To reduce the complexity, we need programming abstractions that help us to layer and deconstruct our solutions. Such abstractions can be integrated into a language or provided by some library, system API, or even the hardware.
A widely used abstraction to simplify distributed algorithms is shared state, a paradigm which has seen much success in the construction of parallel architectures and databases. Unfortunately, we know that in distributed systems, shared state cannot be perfect: in general, it is impossible to achieve both strong consistency and low latency. To state it a bit more provocatively:
All implementations of mutable shared state in a geographically distributed system are either slow (require coordination when updating data) or weird (provide weak consistency only).
This unfortunate fact has far-reaching consequences in practice, as it forces programmers to make an unpleasant choice. Strong consistency means that reads and updates behave as if there were a single copy of the data only, even if it is internally replicated or cached. While strong consistency is easy to understand, it creates problems with availability and latency. And unfortunately, availability and latency are often crucial for business — for example, on websites offering goods for sale, any outage may cause an immediate, irrecoverable loss of sales [G. DeCandia et al., 2007]. Where business considerations trump programming complexity, consistency is relaxed and we settle for some form of
**Eventual Consistency.** The idea is simple: (1) replicate the data across participants, (2) on each participant, perform updates tentatively locally, and (3) propagate local updates to other participants asynchronously, when connections are available.
Although the idea is simple, its consequences are not. For example, one must consider how to deal with conflicting updates. Participants must handle conflicting updates consistently, so that they agree on the outcome and (eventually) converge. Exactly what that should mean, and how to understand and compare various guarantees, data types, and system implementations is what we study in this tutorial.
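The recipe above (replicate, update locally, propagate asynchronously), together with one simple conflict-resolution policy (last writer wins, with ties broken by replica id), can be sketched as a toy single-process Python model. All names here are hypothetical and for illustration only:

```python
class Replica:
    """One replica of the shared value, identified by an integer rid."""
    def __init__(self, rid):
        self.rid = rid
        self.value = None
        self.stamp = (0, rid)   # (counter, replica id), compared lexicographically
        self.pending = []       # local updates not yet propagated

    def write(self, value):
        # Step 2: perform the update tentatively, locally.
        self.stamp = (self.stamp[0] + 1, self.rid)
        self.value = value
        self.pending.append((value, self.stamp))

    def receive(self, value, stamp):
        # Conflict resolution: last writer wins, ties broken by replica id.
        if stamp > self.stamp:
            self.value, self.stamp = value, stamp

def propagate(replicas):
    # Step 3: deliver everyone's pending updates to everyone, asynchronously
    # (here simulated as one batch exchange).
    updates = [u for r in replicas for u in r.pending]
    for r in replicas:
        r.pending.clear()
        for value, stamp in updates:
            r.receive(value, stamp)

a, b = Replica(0), Replica(1)   # Step 1: the data is replicated
a.write("x")                    # two conflicting, concurrent writes
b.write("y")
propagate([a, b])
assert a.value == b.value == "y"   # both replicas converge: (1, 1) > (1, 0)
```

Note that both replicas agree on the outcome even though neither waited for the other, which is exactly the convergence property discussed above.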
Although eventual consistency is compelling from a performance and availability perspective, it is difficult to understand the precise guarantees of such systems. This is unfortunate: if we cannot clearly articulate a specification, or if the specification is not strong enough to let us write provably correct programs, eventual consistency cannot deliver on its promise: to serve as a robust abstraction for the programming of highly-available distributed applications.
The goal of this tutorial is to provide the reader with tools for reasoning about consistency models and the protocols that implement them. Our emphasis is on using basic mathematical techniques (sets, relations, and first order logic) to describe a wide variety of consistency guarantees, and to define protocols with a precision that enables us to prove both positive results (proving correctness of protocols) and negative results (proving impossibility results).
1.1 General Motivation
Geographical distribution has become inseparable from computing. Almost all computers in use today require a network connection to deliver their intended functionality. Programming a distributed system has thus become commonplace, and understanding both the challenges and the available solutions is relevant for a large number of programmers. The discipline of distributed computing is on the verge of a “relevance revolution” not unlike the one faced by concurrent and parallel computing a decade ago. Like the “multicore revolution”, which forced concurrent and parallel programming into the mainstream, the “mobile+cloud revolution” means that distributed programming in general, and the programming of devices, web applications, and cloud services in particular, is well on its way to becoming an everyday necessity for developers. We can expect them to discover and re-discover the many challenges of such systems, such as slow communication, scalability bottlenecks, and node and network failures.
1.1.1 Challenges
The performance of a distributed system is often highly dependent on the latency of network connections. For technical and physical reasons (such as the speed of light), there exists a big disparity between the speed of local computation and of wide-area communication, usually by orders of magnitude. This disparity forces programmers to reduce communication to keep their programs performant and responsive.
Another important challenge is to achieve scalability of services. Scalability bottlenecks arise when too much load is placed on a resource. For example, using a single server node to handle all web requests does not scale. Thus, services need to be distributed across multiple nodes to scale. The limited resource can also be the network. In fact, it is quite typical that the network gets saturated by communication traffic before the nodes reach full utilization. Then, programmers need to reduce communication to scale the service further.
And of course, there are failures. Servers, clients, and network connections may all fail temporarily or permanently. Failures can be a consequence of imperfect hardware, software, or human operation. The more components there are in a system, the more likely it is to fail from time to time; thus failures are unavoidable in large-scale systems.
Often, it makes sense to consider failures not as some rare event, but as a predictable part of normal operation. For example, a connection between a mobile client and a server may fail because the user is driving through a tunnel or boarding an airplane. Also, a user of a web application may close the browser without warning, which (from a server perspective) can be considered a “failure” of the client.
At best, failures remain completely hidden from the user, or are experienced as a minor performance loss and sluggish responses only. But often, they render the application unusable, sometimes without indication about what went wrong and when we may expect normal operation to resume. At worst, failures can cause permanent data corruption and loss.
1.1.2 Role of Programming Languages
What role do programming languages have to play in this story? A great benefit of a well-purposed programming language is that it can provide convenient, robust, and efficient abstractions. For example, the abstraction provided by a garbage-collected heap is convenient, since it frees the programmer from the burden of explicit memory management. It is also robust, since it cannot be broken inadvertently if used incorrectly. Last but not least (and only after much research on the topic), garbage collection is efficient enough to be practical for many application requirements. Although conceptually simple, garbage collection illustrates what we may expect from a successful combination of programming languages and systems research: a separation of concerns. The client programmer gets to work on a simpler abstracted machine, while the runtime system is engineered by experts to efficiently simulate the abstract machine on a real machine.
But what abstractions will indeed prove to be convenient, robust, and efficient in the context of distributed systems? Ideally, we would like to completely hide the distributed nature of the system (slow connections, failures, scalability limits) from the programmer. If we could
efficiently simulate a non-distributed system on a distributed system, the programmer would never even need to know that the system is distributed. Unfortunately, this dream is impossible to achieve in general. This becomes readily apparent when we consider the problem of consistency of shared state. In a non-distributed system, access to shared data is fast and atomic. However, the same is not true for a distributed system.
1.1.3 Distributed Shared Data
Ideally, simulating shared data in a distributed system should look just like in a non-distributed system: it should appear as if there is only a single copy of the data being read and written.
The Problem. There is no doubt that strong consistency (also known as single-copy consistency, or linearizability) is the best consistency model from the perspective of application programmers. Unfortunately, it comes at a cost: maintaining the illusion of a single copy requires communication whenever we read or update data. This communication requirement is problematic when connections are slow or unavailable. Therefore, any system that guarantees strong consistency is susceptible to the following problems:
- **Availability.** If the network should become partitioned, i.e. if it is no longer possible for all nodes to communicate, then some clients may become unusable because they can no longer update or read the data.
- **Performance.** If each update requires a round-trip to some central authority, or to some quorum of servers or peers, and if communication is slow (for example, because of geographical distance between the client and the server, or between the replicas in a service), then the performance and responsiveness of the client application suffers.
These limitations of strong consistency are well known, and complicate the design of many distributed applications, such as cloud services.
The CAP theorem, originally conjectured by Brewer [2000] and later proved by Gilbert and Lynch [2002], is a particularly popular formulation of this fundamental problem (as discussed in the IEEE Computer retrospective edition 2012). It states that strong Consistency and Availability cannot be simultaneously achieved on a Partitioned network, while it is possible to achieve any combination of two of the above properties.
**Seat Reservation Example.** We can illustrate this idea informally using an example where two users wish to make an airplane reservation when there is only one seat left. Consider the case where the two users reside in different network partitions, and are thus incapable of communicating in any way (even indirectly through some server). It is intuitively clear that in such a situation, any system is forced to delay at least one user’s request, or perhaps both of them (thus sacrificing availability), or risk reserving the same seat twice (thus sacrificing consistency). Achieving both availability and consistency is only possible if the network always allows communication (thus sacrificing partition tolerance).
This simple seat reservation example is a reasonable illustration of the hard limits on what can be achieved. However, it may also create an overly pessimistic and narrow view of what it means to work with shared state in a distributed system. Airlines routinely overbook seats, and reservations can be undone (at some cost). The real world is not always strongly consistent, for many more reasons than just technological limitations.
### 1.2 Applications
Practitioners and researchers have proposed the use of eventual consistency to build more reliable or more responsive systems in many different areas.
- **Cloud Storage and Georeplication.** Eventual consistency can help us to build highly-available services for cloud storage, and to keep data that is replicated across data centers in sync. Examples include research prototypes [Li et al., 2012, Lloyd et al.]
and many commercially used storage systems such as Voldemort, Firebase, Amazon Dynamo [G. DeCandia et al., 2007], Riak [Klophaus 2010], and Cassandra [Lakshman and Malik 2009].
- **Mobile Clients.** Eventual consistency helps us to write applications that provide meaningful functionality while disconnected from the network, and remain highly responsive even if connections to the server are slow [Terry et al. 1995, Burckhardt et al. 2012b, 2014b].
- **Epidemic or Gossip Protocols.** Eventual consistency can help us to build low-overhead robust monitoring systems for cloud services, or for loosely connected large peer-to-peer networks [Van Renesse et al. 2003, Jelasity et al. 2005, Princehouse et al. 2014].
- **Collaborative editing.** When multiple people simultaneously edit the same document, they face consistency challenges. A common solution is to use operational transformations (OT) [Imine et al. 2006, Sun and Ellis 1998, Nichols et al. 1995].
- **Revision Control.** Forking and merging of branches in revision control systems is another example where we can apply general principles regarding concurrent updates, visibility, and conflict resolution [Burckhardt and Leijen 2011, Burckhardt et al. 2012a].
The examples above span a rather wide range of systems. The participating nodes may have little computational power and storage space (such as mobile phones) or plenty of computation power (such as servers in data centers) and lots of storage (such as storage back-ends in data centers). Similarly, the network connections may be slow, unreliable, low-bandwidth and expensive (e.g. cellular connections) or fast and high-bandwidth (e.g. intra-datacenter networks), or something in between (e.g. inter-datacenter networks). These differences are very important when considering how best to make the trade-off between reliability and availability. However, at an abstract level, all of these systems share the same principles of eventual consistency: shared data is updated at different replicas, updates are transmitted asynchronously, and conflicts are resolved consistently.
1.3 Warmup
To keep things concrete, we start with a pair of examples. We study two different implementations of a very simple shared data type, a register. The first one stores a single copy on some reliable server, and requires communication on each read or write operation. The second one propagates updates lazily, and both read and write operations complete immediately without requiring communication.
For illustration purposes, we keep the shared data very simple: just a value that can be read and written by multiple processes. This data type is called a register in the distributed systems literature. One can imagine a register to be used to control some configuration setting, for example.
1.3.1 Single-Copy Protocol
The first implementation of the register stores a single copy of the register on some central server — it does not use any replication. When clients wish to read or write the register, they must contact the server to perform the operation on their behalf. This general design is very common; for example, web applications typically rely on a single database backend that performs operations on behalf of clients running in web browsers.
We show the protocol definition in Fig. 1.1. A protocol definition specifies the name of the protocol, the messages, and the roles. The SingleCopyRegister protocol defines four messages and two roles, Server and Client.
Roles represent the various participants of the protocol, and are typically (but not necessarily) geographically separated. Roles react to operation calls by some user or client program, and they communicate with each other by sending and receiving messages. Technically, each role is a state machine which defines a current state and atomic
Figure 1.1: A single-copy implementation of a register. Read and write operations contact the server and wait for the response.
```
protocol SingleCopyRegister {

  message ReadReq(cid: nat) : reliable
  message ReadAck(cid: nat, val: Value) : reliable
  message WriteReq(cid: nat, val: Value) : reliable
  message WriteAck(cid: nat) : reliable

  role Server {
    var current: Value;

    receive(req: ReadReq) {
      send ReadAck(req.cid, current);
    }

    receive(req: WriteReq) {
      current := req.val;
      send WriteAck(req.cid);
    }
  }

  role Client(cid: nat) {

    operation read() {
      send ReadReq(cid);
      // does not return to client program yet
    }

    operation write(val: Value) {
      send WriteReq(cid, val);
      // does not return to client program yet
    }

    receive ReadAck(cid, val) {
      return val; // return to client program
    }

    receive WriteAck(cid) {
      return ok; // return to client program
    }
  }
}
```
transitions that are executed in reaction to operation calls by client programs, to incoming messages, or to some periodic scheduling. In our notation, roles look a bit like objects: the role state looks like fields of an object, and each atomic transition looks like a method of the object.
A role definition starts with the name of the role, followed by an argument list that clarifies the number of instances, and how they are distinguished. Here, there is a single server role and an infinite number of clients, each identified by a client identifier $cid$, which is a nonnegative integer (type $nat$).
**Messages.** There are four message format specifications. Each one describes a message type and the contents of the message (names and types), and specifies the expected level of reliability. For example, the declaration `message WriteReq(cid: nat, val: Value) : reliable` means that each `WriteReq` message carries a client identifier `cid` (the client writing the register) and a value `val` (the value being written), and that this message is always delivered to all recipients, and never forged nor duplicated, but possibly reordered with other messages.
**Server.** In the `Server` role, the state of the server consists of a single variable `current`, which holds the current value of the register. The only server actions are to receive a read or a write request. When receiving a read request or a write request, the corresponding operation is performed, and the result value (in the case of read) or an acknowledgment (in the case of write) is sent back using a `send` instruction.
**Client.** The `Client` role contains definitions for read and write operations, but has no variables (i.e., it is *stateless*). The operations are meant to be called by the local user or client program; the latter may call any sequence of read and write operations, but it may not call an operation until the previous one has returned.
When the `read` operation is called, the corresponding atomic transition sends a `ReadReq` message, but it does *not* complete the operation — there is no implicit return at the end of a transition (the operation cannot return because it does not know the value of the register yet). Only when the response arrives from the server does the corresponding transition execute an explicit return statement that completes the read operation and returns the result to the client program. Thus, the read operation is non-atomic, i.e., it executes not as a single transition, but as two transitions. The write operation is non-atomic as well; it blocks until an acknowledgment from the server has been received.
**Message Destination.** Note that the send instruction does not explicitly specify the destination — instead, it is the receive instruction that specifies what messages to receive. Receive operations specify a pattern that defines which messages can be received. For example, the client's receive actions for `ReadAck` and `WriteAck` match an incoming message only if its `cid` field matches the client's own id — therefore, the `cid` field acts as a destination identifier and ensures that the response message is received only by the client that sent the original request to the server.
**Atomic Actions.** Our semantics models roles as state machines with atomic actions. Intuitively, this means that only one block of code is executing at a time, thus there is no fine-grained concurrency and we need no locks. Of course, there is still ample opportunity for subtle errors caused by the coarse-grained concurrency, i.e., by unexpected orderings of the atomic actions.
**Reliability.** Crashes by one client cannot impact other clients. However, the protocol is not robust against server crashes: a crashed server makes progress impossible for all clients. This assumption of a single reliable server is of course the cornerstone of the single-copy protocol design. It is, however, not a limitation of the epidemic protocol defined in the next section.
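As a rough analogue of the single-copy protocol, here is a toy Python model in which a server thread holds the single copy and every client operation blocks on a full round-trip. Queues stand in for the reliable message transport; all names are ours, not part of the protocol language above:

```python
import threading
from queue import Queue

class Server:
    """Holds the single copy of the register; serves requests in order."""
    def __init__(self):
        self.current = None
        self.inbox = Queue()

    def run(self):
        while True:
            kind, payload, reply = self.inbox.get()
            if kind == "stop":
                return
            if kind == "read":
                reply.put(self.current)   # plays the role of ReadAck
            elif kind == "write":
                self.current = payload
                reply.put("ok")           # plays the role of WriteAck

class Client:
    """Each operation sends a request and blocks until the acknowledgment."""
    def __init__(self, server):
        self.server = server

    def read(self):
        reply = Queue()
        self.server.inbox.put(("read", None, reply))
        return reply.get()                # blocks: one full round-trip

    def write(self, val):
        reply = Queue()
        self.server.inbox.put(("write", val, reply))
        return reply.get()

server = Server()
threading.Thread(target=server.run, daemon=True).start()
c1, c2 = Client(server), Client(server)
c1.write(42)          # blocks until the server acknowledges
v = c2.read()         # observes the single copy
server.inbox.put(("stop", None, None))
```

Because every operation waits for the server, a client that cannot reach the server simply blocks, which mirrors the availability problem of strong consistency discussed earlier.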
```
protocol EpidemicRegister {

  struct Timestamp(number: nat; pid: nat);

  function lessthan(Timestamp(n1,pid1), Timestamp(n2,pid2)) {
    return (n1 < n2) ∨ (n1 == n2 ∧ pid1 < pid2);
  }

  message Latest(val: Value, t: Timestamp) : dontforge, eventualindirect

  role Peer(pid: { 0 .. N }) {

    var current: Value := undef;
    var written: Timestamp := Timestamp(0,pid);

    operation read() {
      return current;
    }

    operation write(val: Value) {
      current := val;
      written := Timestamp(written.number + 1, pid);
      return ok;
    }

    periodically {
      send Latest(current, written);
    }

    receive Latest(val, ts) {
      if (written.lessthan(ts)) {
        current := val;
        written := ts;
      }
    }
  }
}
```
*Figure 1.2:* An implementation of the register where all operations return immediately, without waiting for messages.
1.3.2 Epidemic Protocol
The single-copy implementation is easy to understand. However, the read and write operations are likely to be quite slow in practice because they require a round-trip to the server. The epidemic register (Fig. 1.2) eliminates this problem by removing the server communication from the operations: each role stores a local copy of the register, and propagates updates asynchronously. No central server is needed: all roles are equal (we call them peers). We call this a symmetric protocol, as opposed to the asymmetric client-server protocol discussed in the previous section.
**Timestamps.** When propagating updates, we use timestamps to ensure that later updates overwrite earlier ones and not the other way around. Each node stores not just the currently known latest value of the register (current), but also a timestamp (written) that indicates the time of the write operation that originally wrote that value. When receiving a timestamped update, we ignore it if its timestamp is older than the timestamp of the current value.
**Logical clocks.** Rather than a physical clock, we use logical clocks to create timestamps; logical clocks are a well-known, clever technique for ordering events in a distributed system [Lamport, 1978]. Logical timestamps are pairs of numbers, totally ordered by the lexicographic order defined by the `lessthan` function. In each write operation, the node creates a new timestamp that is larger than the current one (and thus also larger than all timestamps previously received in update messages).
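In Python, tuples already compare lexicographically, so logical timestamps of this kind take only a few lines. This is a sketch; `next_stamp` is our name, not part of the protocol:

```python
def next_stamp(written, pid):
    """Create a timestamp strictly larger than any stamp seen so far."""
    number, _ = written
    return (number + 1, pid)

t0 = (0, 1)                # initial stamp at peer 1
t1 = next_stamp(t0, 1)     # peer 1 writes: (1, 1)
t2 = next_stamp(t1, 2)     # peer 2 writes after receiving t1: (2, 2)
assert t0 < t1 < t2        # totally ordered, no physical clock involved
assert (1, 1) < (1, 2)     # concurrent writes are tie-broken by pid
```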
**Update Propagation.** Every once in a while, each role executes its `periodically` block, which broadcasts the currently stored value and its timestamp in a `Latest` message. This ensures that all roles eventually become aware of all updates, and are thus eventually consistent.
---
1 These patterns are similar to patterns in languages like OCaml, but must be static, i.e. the pattern may not depend on the current state of the role, but must use only constants.
2 Lexicographic order means that tuples are compared based on the first component, and then the second component if the first one is the same, and so on. It is a generalization of alphabetic order if we consider words to be tuples of letters, thus the name.
**Weaker Delivery Guarantees.** The delivery guarantees required by this protocol (in the declaration of the `Latest` message) are `dontforge` (meaning no messages may be invented) and `eventualindirect` (meaning that there must always be some delivery path, possibly indirect via other replicas). These are weaker conditions than the `reliable` guarantee used by the single-copy protocol (which required that all messages be delivered to all receivers exactly once). Here, the system is allowed to duplicate and even lose messages, as long as there is always eventually some (possibly indirect) delivery path from each sender to each receiver.
This type of propagation is sometimes called epidemic, since nodes can indirectly “infect” other nodes with information. An epidemic protocol keeps functioning even if some connections are down, as long as the topology is “eventually strongly connected”. Another name for this type of protocol is state-based, because each message contains information that is identical to the local state.
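A minimal Python model of the epidemic register (all names are ours) shows how a write reaches a peer that never hears from the writer directly, via an indirect gossip path:

```python
class Peer:
    """State-based (epidemic) register: one message carries the whole state."""
    def __init__(self, pid):
        self.pid = pid
        self.current = None
        self.written = (0, pid)   # logical timestamp (counter, pid)

    def write(self, val):
        self.current = val
        self.written = (self.written[0] + 1, self.pid)

    def receive_latest(self, val, ts):
        # Ignore the update unless it is newer than what we already have.
        if self.written < ts:
            self.current, self.written = val, ts

def gossip(src, dst):
    """One Latest message from src to dst; duplicates and losses are harmless."""
    dst.receive_latest(src.current, src.written)

a, b, c = Peer(0), Peer(1), Peer(2)
a.write("v1")
gossip(a, b)   # direct delivery: a infects b
gossip(b, c)   # indirect delivery: c learns of a's write via b
assert b.current == c.current == "v1"
```

Repeating a `gossip` call or dropping one changes nothing as long as some path from writer to reader eventually exists, which is exactly the `eventualindirect` guarantee.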
Consistency and Correctness
The interesting questions are: is the epidemic protocol correct? What does correct even mean? What is the observable difference between the two protocols, from a client perspective?
Given our discussion of eventual consistency earlier, we may reasonably expect an answer along the lines of “the epidemic protocol is eventually consistent, while the single-copy protocol is strongly consistent”. However, the story is a bit more interesting than that.
- The single-copy register is linearizable, which is the strongest form of consistency.
- The epidemic register is sequentially consistent, which is a slightly weaker, yet still surprisingly strong consistency guarantee. We prove this in §10.2.2.
At first glance, this appears to contradict the CAP theorem, since the epidemic register is available under partitions (all operations complete immediately), so strong consistency should not be possible. It turns out that the original CAP theorem is about linearizability, not sequential consistency; and under sequential consistency, CAP applies only to reasonably expressive data types, which do not include a simple register. We prove a properly qualified version of the CAP theorem in §9.1.2.
Since the single-copy register is linearizable, and the epidemic register is sequentially consistent, they are observationally equivalent to any client that does not have a side channel for communication (for more about this, see §5.3.1).
1.4 Overview
The goal of this tutorial is to provide the reader with tools for reasoning about the consistency of protocols. Our emphasis is on using basic mathematical techniques (sets, relations, and first-order logic) to describe a wide variety of consistency guarantees, and to define protocols with a level of precision that enables us to prove both positive results (correctness of protocols) and negative results (impossibility of implementations).
We start with basic technical foundations in chapter 2, including a review of important concepts related to partial and total orders. We also introduce event graphs, which are mathematical objects representing information about events in executions, and which are the technical backbone of all our definitions.
In chapters 3–5, we lay out the specification methodology, and assemble consistency guarantees spanning data type semantics, ordering guarantees, and convergence guarantees:
- In chapter 3 we introduce our approach to specifying consistency guarantees, which is based on histories and abstract executions.
- In chapter 4 we first specify the semantics of sequential data types, and then generalize to replicated data types that specify the semantics in a replicated setting, in particular how to resolve conflicts. The key insight is to think of the current state not as a value, but as a graph of prior operations.
- In chapter 5 we define basic eventual consistency, collect various consistency guarantees, and present a hierarchy of the most common consistency models.
In chapter 6, we walk through a selection of protocol implementations and optimizations, to gain a better understanding of the nature of the trade-off between the consistency model and the speed/availability of operations. We show implementations for simple data types, and protocol templates that can be used to implement any replicated data type.
In chapters 7 and 8, we establish formal models for executions in asynchronous distributed systems (including crashes and transport failures), and for protocol definitions (accommodating arbitrary asynchronous protocols). These models are needed as a preparation for the next two chapters, which conclude the technical development:
- In chapter 9, we prove a version of the CAP theorem that shows that for all but the simplest data types, sequential consistency cannot be implemented in a way such that all operations are available under partitions.
- In chapter 10, we revisit the implementations presented earlier, and prove that they provide the claimed consistency guarantees.
Easing IoT Development for Novice Programmers Through Code Recipes
Publisher: ACM
DOI: 10.1145/3183377.3183385
Fulvio Corno
Politecnico di Torino
Turin, Italy
fulvio.corno@polito.it
Luigi De Russis
Politecnico di Torino
Turin, Italy
luigi.derussis@polito.it
Juan Pablo Sáenz
Politecnico di Torino
Turin, Italy
juan.saenz@polito.it
ABSTRACT
The co-existence of various kinds of devices, protocols, architectures, and programming languages makes Internet of Things (IoT) systems complex to develop, even for experienced programmers. These Software Engineering challenges are, perforce, even harder for novice programmers to address. Previous research focused on identifying the most challenging issues that novice programmers experience when developing IoT systems. The results suggested that the integration of heterogeneous software components is one of the most painful issues, mainly due to the lack of documentation understandable by inexperienced developers, from both conceptual and technical perspectives. In fact, novice programmers devote significant effort to looking for documentation and code samples, aiming to understand them conceptually or, in the worst case, at least to make them work. Driven by the research question "How can the lessons learned by IoT novice programmers be captured, so that they become an asset for other novice developers?", in this paper we introduce Code Recipes. They are summarized and well-defined documentation modules, independent of programming languages and run-time environments, through which non-expert programmers can smoothly become familiar with source code written by other developers who faced similar issues. Through a use case, we show that Code Recipes are a feasible mechanism to support novice IoT programmers in building their IoT systems.
CCS CONCEPTS
- Social and professional topics → Computational science and engineering education; Software engineering education;
- Computer systems organization → Embedded and cyber-physical systems;
KEYWORDS
Novice programmers, Internet of Things, Documentation, Code Fragments
1 INTRODUCTION
The development of IoT systems is challenging. On one hand, it relies on various areas such as distributed systems, mobile computing, web information systems, and cloud computing, among others. On the other hand, it differs from mainstream mobile-app and client-side web application development in the sense that IoT developers must consider aspects such as multi-device programming; the reactive, always-on nature of the system; heterogeneity and diversity; and the distributed, highly dynamic, and potentially migratory nature of software [8].
Naturally, these challenging issues are even more painful for novice programmers, since they are not expected to have deep knowledge or experience in all those areas [5]. Our previous research aimed at identifying the pain points that novice programmers experience when developing IoT systems [2]. An exploratory study was conducted among Electronic and Computer Engineering undergraduate students of a university course in which, following a project-based learning approach, groups of three to four students were assigned to develop an IoT system. In accordance with the course learning goals, these IoT systems had to include mobile applications, web applications, cloud computing services, wearable devices, single-board computers, and IoT sensing devices [1].
The results from this exploratory study suggested that the integration of heterogeneous software components is one of the most painful issues. It commonly implies dealing with several protocols, formats, and authentication mechanisms, that are usually unknown to the students. Moreover, the lack of clear and complete documentation, or merely, the absence of documentation that can be understood by a novice developer, make this integration issue even more difficult to overcome.
Looking for solutions to support novice IoT developers in overcoming these integration issues, we noticed that despite the specificity of each project, implementations of the integration between software components were similar across most of them, especially when third-party services were involved. However, although the source code of the projects from the past years’ courses is on GitHub, it was not being reused among groups in later versions of the course. Therefore, the lessons learned by a group when implementing its project is not useful for the next year’s groups.
Taking into account the results of the exploratory study and the lack of code reuse between the course groups, we envisioned that the solutions found by the students, which were finally included in the working prototypes built at the end of the course, could become a valuable asset for the novices who are about to start implementing their projects. The source code of these prototypes reveals architectural decisions and strategies adopted by other groups to achieve the integration of diverse software components. This code should, therefore, provide some guidance to other programmers who are in the process of overcoming the same learning curve issues. Moreover, if documented, this code would be a solution to the reported lack of documentation understandable by inexperienced developers [9]. In fact, being able to observe how someone else coded, what others paid attention to, and how they solved problems all support learning better ways to code and access to superior knowledge [3].
The present work is driven by the research question: "How can the lessons learned by IoT novice programmers be captured, so that they become an asset for other novice developers?". The current proposal aims at easing the learning curve for IoT novice developers, not by automating code reuse and hiding the code from the developers, but by enabling non-expert programmers to easily become familiar with source code written by other developers who faced similar issues.
2 USE CASE
As mentioned earlier, the results from our previous research [2] suggested that among the most challenging issues novices face when developing IoT systems, the integration with other software components was perceived by many students as the most painful issue. In particular, the integration with third-party APIs that require OAuth 2.0 authentication was a time-consuming and difficult task. The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. Broadly speaking, this authentication protocol consists of a flow, with a set of roles (resource owner, resource server, client, and authorization server) interacting across various steps (authorization request, access token request, and protected resource request), and exchanging several resources (authorization grant, access token, refresh token, redirect URI).
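The request side of the flow described above can be sketched in a few lines. The sketch below builds the initial authorization request of the authorization code grant; the endpoint, client identifier, and scope values are hypothetical placeholders, not Fitbit's actual configuration.

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint, client_id, redirect_uri, scope):
    """First step of the flow: the client redirects the resource owner
    to the authorization server with an authorization request."""
    params = {
        "response_type": "code",       # ask for an authorization code (grant)
        "client_id": client_id,        # issued when registering the client
        "redirect_uri": redirect_uri,  # where the code is delivered
        "scope": scope,                # which protected resources are requested
    }
    return auth_endpoint + "?" + urlencode(params)

# Hypothetical values for illustration only.
url = build_authorization_url(
    "https://provider.example/oauth2/authorize",
    "MY_CLIENT_ID",
    "https://app.example/callback",
    "activity heartrate",
)
```

After the resource owner approves, the client exchanges the returned code (together with its client secret) for an access token at the token endpoint, and then presents that token on each protected resource request.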
In the development of IoT systems, OAuth authentication protocol becomes fundamental since most of the third party service APIs use it. The integration with the Fitbit activity tracker is a concrete example of the OAuth protocol usage. In order to gather the data captured by this wearable device, the third party application (i.e., the one developed by the novices) must obtain users authorization through the OAuth protocol.
However, due to the roles, steps, and resources that the protocol comprises, the adoption of the OAuth authentication is not trivial. The appropriate implementation of this protocol requires a clear understanding of the various steps, both from the conceptual and the technical perspective. Novice programmers struggle considerably with the adoption of OAuth, mainly due to the lack of documentation that might be understandable by non-expert programmers.
Fitbit, for instance, has a documentation website that provides guidance about the Web API for accessing data from Fitbit activity trackers. Although the developer’s site has an API explorer built in Swagger, and an API debug tool, it does not provide a fully implemented functional source code sample. Moreover, despite the clarity, readability and good overall structure of the documentation, it is targeted at experienced programmers, as with most of the developer’s documentation.
In this scenario, novice programmers are required to search for code samples, aiming to understand them conceptually or, in the worst case, at least to make them work. Typically, this involves consulting the Google OAuth Client Library documentation, the Fitbit developers website, several posts published on Stack Overflow, and various code samples available on GitHub. Hence, from the experience of novice programmers adopting the OAuth protocol, we observe that: (i) a significant amount of effort is devoted to looking for documentation and samples; (ii) the source code alone does not reveal the whole learning process behind it; (iii) code fragments must be surrounded by summarized, structured, and well-defined documentation modules, so that they become an asset for other IoT novice programmers.
3 CODE RECIPES
Code Recipes aim to capture the most important information and documentation about one or more code fragments, to ease the development of an IoT system for novice developers. Code Recipes are specified through a set of metadata and consist of multiple code fragments along with documentation and links that ease the understanding of such code, in order to implement a given integration between subsystems of an IoT system. The joint presence of metadata and links allows novice developers to explore alternative solutions and, at will, deepen their knowledge about a specific IoT subsystem, thus contributing to their learning process.
Our approach lies in the fact that code examples, when used effectively, can be a powerful learning resource [4]. However, while examples are a valuable resource for programmers, the rich context surrounding them is often crucial for adaptation and integration [6]. This proposal enables the integration of several software components through code fragments that might belong to different programming languages and might be deployed across various runtime environments, as it is common in IoT systems. The decoupling between the recipes and the technological stack is fundamental given the heterogeneity of the software components that are involved in an IoT system. Code Recipes, therefore, are defined as summarized and well-defined documentation modules, independent from programming languages or run-time environments.
By defining Code Recipes as documentation modules structured around code fragments, they can be incorporated in various kind of tools that might handle them in the learning process, e.g., a wiki-style web application or an Integrated Development Environment (IDE) extension.
Code Recipes, therefore, expose four features:
- Although the Recipes are structured around source code fragments, they are much more than just code. They encompass information that, besides providing technical solutions, includes comments and documentation sources that account for the learning process that other novice IoT developers followed and the decisions they made to reach a solution.
- Recipes are not constrained to a specific architecture, programming language, or run-time environment. This means, first, that this proposal is aware of the heterogeneous nature of IoT environments, and second, that it is suitable for use in multiple scenarios with IoT novice developers.
- Recipes are not isolated from each other, they are cross-linked on the basis of three criteria: alternative versions, other language versions, and related recipes. This feature enables the sharing of diverse learning experiences with their commonalities and their divergences.
- Technically speaking, a structured representation (e.g., in JSON or XML) of the Code Recipes enables the implementation of various kinds of tools that might handle them: for instance, a web application (as shown in Fig. 1), a web browser extension, or an IDE plugin.
 1 {
 2   "id": "1506954892",
 3   "author": [{
 4     "id": "1506954892",
 5     "name": "Juan Saenz"
 6   }],
 7   "date": "21.9.2017",
 8   "name": "Integration between Fitbit and Java",
 9   "description": "Recipe to consume the Fitbit API using OAuth 2.0",
10   "tags": ["fitbit", "java", "oauth 2.0", "api"],
11   "running_environment": "Server application built in Java",
12   "endpoints": ["Fitbit API"],
13   "ingredients": [{
14     "name": "Fitbit account",
15     "description": "Fitbit account set up for read/write API access",
16     "url": "https://dev.fitbit.com/"
17   }],
18   "dependencies": [{
19     "name": "Maven",
20     "description": "Maven plugin for Eclipse installed",
21     "url": "http://www.eclipse.org/eclipse/"
22   }],
23   "code_fragments": [{
24     "id": "1506954892",
25     "name": "FitbitSample",
26     "programming_language": "Java",
27     "description": "This is the main class",
28     "source_code_url": "/1506954892/FitbitSample.java",
29     "code_fragments_url": ["https://github.com/google-oauth-client"],
30     "parameters": [{
31       "name": "SCOPES",
32       "description": "OAuth 2.0 permissions for resources",
33       "data_type": "String",
34       "sample_value": "activity, heartrate, location, nutrition"
35     }],
36     "documentation_urls": ["https://developers.fitbit.com/"]
37   }],
38   "documentation_urls": ["https://developers.fitbit.com/"],
39   "related_recipes": ["1507302404"],
40   "other_language_versions": ["1496761597"],
41   "alternative_versions": ["1506957773", "1507562564"]
42 }
Listing 1: Code Recipe Sample
Listing 1 describes a possible structure of a Code Recipe in JSON format. First, each recipe is described through an id (timestamp), its author, publication date, name, description, and tags (lines 2 to 10). Then, the run-time environment and the subsystems that the recipe integrates are specified in the running_environment and endpoints fields (lines 11 and 12). Ingredients (line 13) correspond to the requirements of the recipe. They can be technical requirements, such as the deployment of a specific kind of web server, or data requirements, such as creating a developer account and issuing API client credentials. Dependencies (line 18) refer to requirements associated with the source code, which are essentially libraries and packages that must be installed.
Most importantly, Code Recipes include one or more code fragments that can be implemented in different programming languages and IDEs (lines 23 to 37). Each fragment has a set of parameters (line 30), which are values specific to each implementation of the recipe. Besides the source code, recipes include the documentation that their authors consulted, both for each fragment and for the whole recipe; it is specified in the documentation_urls fields (lines 36 and 38). Finally, Code Recipes can be linked to each other in three ways (lines 39 to 41): alternative versions, which point to other recipes targeted at implementing the same integration; other language versions, which point to implementations of the same recipe in other programming languages; and related recipes, which correspond to other recipes that can be used as intermediate steps to implement the concerned recipe.
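Because a recipe is plain JSON, building tooling on top of it is straightforward. The sketch below (a hypothetical helper, not part of the paper's implementation) loads a trimmed recipe that follows the structure of Listing 1 and extracts what a novice must prepare before using it.

```python
import json

# A trimmed Code Recipe following the structure of Listing 1
# (field names as in the paper; values are illustrative).
recipe_json = """
{
  "id": "1506954892",
  "name": "Integration between Fitbit and Java",
  "tags": ["fitbit", "java", "oauth 2.0", "api"],
  "dependencies": [
    {"name": "Maven", "description": "Maven plugin for Eclipse installed"}
  ],
  "code_fragments": [
    {"name": "FitbitSample",
     "programming_language": "Java",
     "parameters": [{"name": "SCOPES", "data_type": "String"}]}
  ],
  "related_recipes": ["1507302404"],
  "alternative_versions": ["1506957773", "1507562564"]
}
"""

def checklist(recipe):
    """Return the dependencies to install, the parameters to fill in,
    and the ids of linked recipes worth reading next."""
    deps = [d["name"] for d in recipe.get("dependencies", [])]
    params = [p["name"]
              for frag in recipe.get("code_fragments", [])
              for p in frag.get("parameters", [])]
    links = (recipe.get("related_recipes", [])
             + recipe.get("other_language_versions", [])
             + recipe.get("alternative_versions", []))
    return deps, params, links

deps, params, links = checklist(json.loads(recipe_json))
```

The same traversal could drive a wiki-style web application or an IDE extension, the two kinds of tools mentioned above.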
4 THE FITBIT OAUTH CODE RECIPE
With the use case described in Section 2 in mind, a Code Recipe was developed to illustrate how our proposal might help novices overcome integration issues collaboratively. To develop this recipe, we took on the task of implementing a simple Java application to gather data from a Fitbit bracelet.
As mentioned before, no sample projects are provided on the Fitbit developers' website. Therefore, the first endeavor was to find a sample project in which OAuth authentication was implemented in Java. After googling "OAuth 2.0 Java Sample Code", the second result led us to the documentation of the Google OAuth Client Library for Java (in a Code Recipe, this website would be included in the documentation_urls field). This website provided setup instructions for Maven, the list of required libraries (in a Code Recipe, these would correspond to the dependencies field), the release notes of these libraries, and one code sample of the integration between a Java application and the Dailymotion API using OAuth 2.0.
Once the sample code had been downloaded and imported into the IDE, the next step was to install and configure Maven, including the Project Object Model (POM) in which the dependencies of the project were defined. Later, when the Java project compiled, the next task was to identify which pieces of the code had to be modified to achieve the integration with the Fitbit API (in a Code Recipe, these pieces are specified in the parameters field). Among the data that had to be inserted into the code as parameters were the API key, the API secret, the callback URL, and the scope. All of this data was obtained after completing the registration as a Fitbit developer (in a Code Recipe, this registration counts as an ingredient).
Afterwards, there was the source code itself. It consisted of three Java classes, two of which had to be parameterized. The explanation of the meaning of every parameter was available in the Fitbit developers website, along with their possible values (in the Code Recipes, these parameters can be documented through a description, their data_type, and a set of sample_values). Since this was the first Recipe that was developed, there were no other Recipes to link.
Throughout the implementation process, several documentation sources were consulted: the Google OAuth Client Library documentation, the Fitbit developers website, several posts published on Stack Overflow, and various code samples available on GitHub. Notwithstanding the fact that the Code Recipe was developed by an experienced programmer, its implementation was not trivial, and many of the issues reported by the novices in our previous research surfaced again.
5 RELATED WORKS
Warner et al. [10] created CodePilot, a prototype IDE for novices. The tool enabled multiple users to connect to a web-based programming session and work together. According to the authors, CodePilot is the first attempt to integrate real-time collaborative coding, testing, bug reporting, and version control management into a unified system. This approach aims at lowering the entry barrier for novices by unifying the collaborative development workflow into a single IDE.
Oney et al. [6] developed and evaluated a mechanism they called Codelets. It consists of a block of example code and a user interactive helper widget that assists the developer in understanding and integrating the example. Through this interactive helper, web developers always have explanations attached to their code and can recall it if necessary. This approach allows maintaining a connection between example code and related documentation throughout the example’s life-cycle.
Sidiroglou-Douskos et al. [7] presented a system named CodeCarbonCopy for transferring code from a donor application into a recipient application. This tool implemented an automatic data representation and naming translation between recipient and donor and a static analysis that automatically identifies and removes code that is irrelevant to the recipient.
Unlike CodePilot [10] and Codelets [6], Code Recipes are designed not to be tied to a specific programming language, IDE, or deployment environment. In our approach, IoT developers are intended to gain expertise understanding and adapting source code into their own implementations, despite the architectural decisions of the concerned system.
6 CONCLUSION
In view of the complexity that the development of IoT systems poses, particularly concerning the integration of heterogeneous software components, and taking into account the lack of documentation reported by novice programmers in our previous research, this paper presented Code Recipes. Code Recipes are summarized and well-defined documentation modules, independent of programming languages and run-time environments, structured around the code fragments required to implement some portions of an IoT system. Through this approach we aim to support novice IoT programmers, enabling them to easily become familiar with source code written by other developers who faced similar issues. Future work will concern the development of a Code Recipes catalog, a web-based tool through which students can use them, and a subsequent evaluation in the context of the course.
REFERENCES
[1] Fulvio Corno, Luigi De Russis, and Dario Bonino. 2016. Educating Internet of Things Professionals: The Ambient Intelligence Course. IT Professional 18, 6 (Nov 2016), 50–57. https://doi.org/10.1109/MITP.2016.100
Chapter 8
Distributed Sorting
“Indeed, I believe that virtually every important aspect of programming arises somewhere in the context of sorting [and searching]!”
– Donald E. Knuth, The Art of Computer Programming
In this chapter we study a classic problem in computer science—sorting—from a distributed computing perspective. In contrast to an orthodox single-processor sorting algorithm, no node has access to all the data; instead, the to-be-sorted values are distributed. Distributed sorting then boils down to:
Definition 8.1 (Sorting). We choose a graph with \( n \) nodes \( v_1, \ldots, v_n \). Initially each node stores a value. After applying a sorting algorithm, node \( v_k \) stores the \( k^{th} \) smallest value.
Remarks:
• What if we route all values to the same central node \( v \), let \( v \) sort the values locally, and then route them to the correct destinations?! According to the message passing model studied in the first few chapters this is perfectly legal. With a star topology sorting finishes in \( O(1) \) time!
Definition 8.2 (Node Contention). In each step of a synchronous algorithm, each node can only send and receive \( O(1) \) messages containing \( O(1) \) values, no matter how many neighbors the node has.
Remarks:
• Using Definition 8.2 sorting on a star graph takes linear time.
8.1 Array & Mesh
To get a better intuitive understanding of distributed sorting, we start with two simple topologies, the array and the mesh. Let us begin with the array:
Algorithm 8.3 Odd/Even Sort
1: Given an array of \( n \) nodes \((v_1, \ldots, v_n)\), each storing a value (not sorted).
2: \textbf{repeat}
3: \textbf{compare and exchange} the values at nodes \( i \) and \( i + 1 \), \( i \) odd
4: \textbf{compare and exchange} the values at nodes \( i \) and \( i + 1 \), \( i \) even
5: \textbf{until} done
Remarks:
- The compare and exchange primitive in Algorithm 8.3 is defined as follows: Let the value stored at node \( i \) be \( v_i \). After the compare and exchange node \( i \) stores value \( \min(v_i, v_{i+1}) \) and node \( i + 1 \) stores value \( \max(v_i, v_{i+1}) \).
- How fast is the algorithm, and how can we prove correctness/efficiency?
- The most interesting proof uses the so-called 0-1 Sorting Lemma. It allows us to restrict our attention to an input of 0’s and 1’s only, and works for any “oblivious comparison-exchange” algorithm. (Oblivious means: Whether you exchange two values must only depend on the relative order of the two values, and not on anything else.)
Lemma 8.4 (0-1 Sorting Lemma). If an oblivious comparison-exchange algorithm sorts all inputs of 0’s and 1’s, then it sorts arbitrary inputs.
Proof. We prove the opposite direction (does not sort arbitrary inputs \( \Rightarrow \) does not sort \( 0 \)'s and \( 1 \)'s). Assume that there is an input \( x = x_1, \ldots, x_n \) that is not sorted correctly by the sorting algorithm. Then there is a smallest value \( k \) such that the value at node \( v_k \) after running the algorithm is strictly larger than the \( k \)th smallest value \( x(k) \). Define an input \( x^*_i = 0 \Leftrightarrow x_i \leq x(k), x^*_i = 1 \) else. Whenever the algorithm compares a pair of 1's or 0's, it is not important whether it exchanges the values or not, so we may simply assume that it does the same as on the input \( x \). On the other hand, whenever the algorithm exchanges some values \( x^*_i = 0 \) and \( x^*_j = 1 \), this means that \( x_i \leq x(k) < x_j \). Therefore, in this case the respective compare-exchange operation will do the same on both inputs. We conclude that the algorithm will order \( x^* \) the same way as \( x \), i.e., the output with only \( 0 \)'s and \( 1 \)'s will also not be correct.
Theorem 8.5. Algorithm 8.3 sorts correctly in \( n \) steps.
Proof. Thanks to Lemma 8.4 we only need to consider an array with \( 0 \)'s and \( 1 \)'s. Let \( j_1 \) be the node containing the "rightmost" 1, i.e., the highest-index node from \( \{v_1, \ldots, v_n\} \) with value 1. If the index of \( j_1 \) is odd (even), this 1 will "move to the right" in the first (second) step. In any case it will keep moving right in every following step until it reaches the rightmost node \( v_n \). Let \( j_k \) be the node with the \( k^{th} \) rightmost 1. We show by induction that \( j_k \) is not "blocked" anymore (i.e., it moves in every step until it reaches its destination) after step \( k \). We have already anchored the induction at \( k = 1 \). Since the 1 at node \( j_{k-1} \) moves after step \( k - 1 \), the 1 at node \( j_k \) has a 0 as its right neighbor in every step after step \( k \). (For matters of presentation we omitted a couple of simple details.) \(\square\)
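The rounds of Algorithm 8.3 are easy to simulate sequentially. The following sketch (function name and 0-based indexing are ours, not from the chapter) runs the \( n \) alternating odd/even compare-and-exchange steps and can be used to check the bound of Theorem 8.5 experimentally:

```python
def odd_even_sort(values):
    """Simulate Algorithm 8.3: n alternating odd/even compare-and-exchange
    steps on an array of n nodes. With 0-based indices, the chapter's 'odd'
    phase compares pairs starting at index 0, the 'even' phase at index 1."""
    a = list(values)
    n = len(a)
    for step in range(n):
        start = step % 2  # alternate between the two pairings
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                # compare and exchange: min stays left, max moves right
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

By Theorem 8.5 the \( n \) rounds always suffice; for example, `odd_even_sort([5, 1, 4, 2, 3])` yields `[1, 2, 3, 4, 5]`.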
Remarks:
• Linear time is not very exciting, maybe we can do better by using a different topology? Let's try a mesh (a.k.a. grid) topology first.
Algorithm 8.6 Shearsort
1: We are given a mesh with \( m \) rows and \( m \) columns, \( m \) even, \( n = m^2 \).
2: The sorting algorithm operates in phases, and uses the odd/even sort algorithm on rows or columns.
3: repeat
4: In the odd phases \( 1, 3, \ldots \) we sort all the rows, in the even phases \( 2, 4, \ldots \) we sort all the columns, such that:
5: Columns are sorted such that the small values move up.
6: Odd rows (\( 1, 3, \ldots, m - 1 \)) are sorted such that small values move left.
7: Even rows (\( 2, 4, \ldots, m \)) are sorted such that small values move right.
8: until done
Theorem 8.7. Algorithm 8.6 sorts \( n \) values in \( \sqrt{n} (\log n + 1) \) time in snake-like order.
Proof. Since the algorithm is oblivious, we can use Lemma 8.4. We show that after a row and a column phase, half of the previously unsorted rows will be sorted. More formally, let us call a row with only 0's (or only 1's) clean, a row with 0's and 1's is dirty. At any stage, the rows of the mesh can be divided into three regions. In the north we have a region of all-0 rows, in the south all-1 rows, in the middle a region of dirty rows (possibly interspersed with clean rows). Initially all rows can be dirty. Since neither row nor column sort will touch the already clean rows in the northern and southern regions, we can concentrate on the middle region containing the dirty rows.
First we run an odd phase. Then, in the even phase, deviating from the algorithm description, let's run a peculiar column sorter: We group two consecutive rows in the middle region into pairs. Since odd and even rows are sorted in opposite directions, two consecutive rows look as follows:
\[
\begin{align*}
00000 & \ldots 11111 \\
11111 & \ldots 00000
\end{align*}
\]
Such a pair can be in one of three states. Either we have more 0's than 1's (in both rows together), or more 1's than 0's, or an equal number of 0's and 1's. Column-sorting each pair will give us at least one clean row (and two clean rows if the number of 0's equals the number of 1's). Then we move the cleaned rows north/south and the middle region containing the dirty rows will be (roughly) halved in size.
What does this peculiar column sorter have to do with our algorithm? Well, a close look reveals that any column sorter sorts the columns in exactly the same way (we are very grateful to have Lemma 8.4!). Hence, we actually described what our algorithm does in the even phase.
All in all we need \( 2 \log m = \log n \) phases to remain with (at most) 1 dirty row in the middle which will be sorted (not cleaned) with the last row-sort. \( \square \)
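Shearsort is equally easy to simulate. The sketch below (our own helper, assuming the mesh is stored as a list of rows) performs the \( \log n + 1 \) alternating row/column phases counted in Theorem 8.7:

```python
import math

def shearsort(grid):
    """Simulate Algorithm 8.6 on an m x m mesh given as a list of rows.
    Rows with even 0-based index (odd rows in the chapter's 1-based
    numbering) are sorted left-to-right, the others right-to-left;
    columns are sorted with small values moving up."""
    m = len(grid)
    phases = int(math.log2(m * m)) + 1  # log n + 1 phases (Theorem 8.7)
    for p in range(phases):
        if p % 2 == 0:  # odd phases of the chapter: sort all rows
            for r in range(m):
                grid[r].sort(reverse=(r % 2 == 1))
        else:           # even phases: sort all columns, small values up
            for c in range(m):
                col = sorted(grid[r][c] for r in range(m))
                for r in range(m):
                    grid[r][c] = col[r]
    return grid
```

Reading the result in snake-like order (even-index rows left-to-right, odd-index rows right-to-left) gives the fully sorted sequence.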
8.2 Sorting Networks
In this section we construct a graph topology which is carefully manufactured for sorting. This is a deviation from previous chapters where we always had to work with the topology that was given to us. In many application areas (e.g. peer-to-peer networks, communication switches, systolic hardware) it is indeed possible (in fact, crucial!) that an engineer can build the topology best suited for her application.
Definition 8.8 (Sorting Networks). A comparator is a device with two inputs $x, y$ and two outputs $x', y'$ such that $x' = \min(x, y)$ and $y' = \max(x, y)$. We construct so-called comparison networks that consist of wires that connect comparators (the output port of a comparator is sent to an input port of another comparator). Some wires are not connected to comparator outputs (we call them input wires), and some are not connected to comparator inputs (we call them output wires). A sorting network with width $n$ has $n$ input wires and $n$ output wires. A sorting network routes $n$ values given on the input wires through the wires and comparators of the network such that the values are sorted on the output wires.
Remarks:
- Often we will draw all the wires on $n$ horizontal lines, where $n$ is the width of the network. Comparators are then vertically connecting two of these lines. An example sorting network is depicted in Figure 8.9.
Definition 8.10 (Depth). The depth of an input wire is 0. The depth of a comparator is the maximum depth of its input wires plus one. The depth of an output wire of a comparator is the depth of the comparator. The depth of a comparison network is the maximum depth (of an output wire).
Remarks:
- The odd/even sorter explained in Algorithm 8.3 can also be described as a sorting network. An odd/even sorting network with width $n$ has depth $n$.
- Note that a sorting network is an oblivious comparison-exchange network. Consequently we can apply Lemma 8.4 throughout this section.
Figure 8.9: A sorting network with width 4, 6 comparators, and 16 wires.
**Definition 8.11** (Bitonic Sequence). A bitonic sequence is a sequence of numbers that first monotonically increases, and then monotonically decreases, or vice versa.
**Remarks:**
- \(<1,4,6,8,3,2>\) or \(<5,3,2,1,4,8>\) are bitonic sequences.
- \(<9,6,2,3,5,4>\) or \(<7,4,2,5,9,8>\) are not bitonic.
- Since we restrict ourselves to 0’s and 1’s (Lemma 8.4), bitonic sequences have the form \(0^i 1^j 0^k\) or \(1^i 0^j 1^k\) for \(i, j, k \geq 0\).
**Algorithm 8.12** Half Cleaner
1: A half cleaner is a comparison network of depth 1, where we compare wire \(i\) with wire \(i + n/2\) for \(i = 1, \ldots, n/2\) (we assume \(n\) to be even).
**Lemma 8.13.** Feeding a bitonic sequence into a half cleaner (Algorithm 8.12), the half cleaner cleans (makes all-0 or all-1) either the upper or the lower half of the \(n\) wires. The other half is bitonic.
**Proof.** Assume that the input is of the form \(0^i 1^j 0^k\) for \(i, j, k \geq 0\). If the midpoint falls into the 0’s, the input is already clean/bitonic and will stay so. If the midpoint falls into the 1’s the half cleaner acts as Shearsort with two adjacent rows, exactly as in the proof of Theorem 8.7. The case \(1^i 0^j 1^k\) is symmetric.
Algorithm 8.14 Bitonic Sequence Sorter
1: A bitonic sequence sorter of width \( n \) (where width is the number of input wires and we assume \( n \) to be a power of 2) consists of a half cleaner of width \( n \), and then two bitonic sequence sorters of width \( n/2 \) each.
2: A bitonic sequence sorter of width 1 is empty.
**Lemma 8.15.** A bitonic sequence sorter (Algorithm 8.14) of width \(n\) sorts bitonic sequences. It has depth \(\log n\).
**Proof.** The proof follows directly from Algorithm 8.14 and Lemma 8.13. \(\square\)
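The half cleaner and the recursive bitonic sequence sorter translate directly into code. A minimal sketch (function names are ours; the width is assumed to be a power of two) mirroring Algorithms 8.12 and 8.14:

```python
def half_clean(seq):
    """Algorithm 8.12: compare wire i with wire i + w/2 for i < w/2."""
    w = len(seq)
    a = list(seq)
    for i in range(w // 2):
        if a[i] > a[i + w // 2]:
            a[i], a[i + w // 2] = a[i + w // 2], a[i]
    return a

def bitonic_sequence_sort(seq):
    """Algorithm 8.14: a half cleaner followed by two recursive bitonic
    sequence sorters of half width; sorts any bitonic input (Lemma 8.15)."""
    if len(seq) <= 1:
        return list(seq)
    a = half_clean(seq)
    h = len(a) // 2
    return bitonic_sequence_sort(a[:h]) + bitonic_sequence_sort(a[h:])
```

For instance, the bitonic input `[9, 6, 2, 1, 3, 5, 7, 8]` is sorted to `[1, 2, 3, 5, 6, 7, 8, 9]`; after the first half-cleaner pass, each half is bitonic and every value in the upper half is at most every value in the lower half, exactly as Lemma 8.13 states.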
Remarks:
- Clearly we want to sort arbitrary and not only bitonic sequences! To do this we need one more concept, merging networks.
Algorithm 8.16 Merging Network
1: A merging network of width \( n \) is a merger of width \( n \) followed by two bitonic sequence sorters of width \( n/2 \). A merger is a depth-one network where we compare wire \( i \) with wire \( n - i + 1 \), for \( i = 1, \ldots, n/2 \).
Remarks:
- Note that a merging network is a bitonic sequence sorter where we replace the (first) half-cleaner by a merger.
Lemma 8.17. A merging network of width \( n \) (Algorithm 8.16) merges two sorted input sequences of length \( n/2 \) each into one sorted sequence of length \( n \).
Proof. We have two sorted input sequences. Essentially, a merger does to two sorted sequences what a half cleaner does to a bitonic sequence, since the lower part of the input is reversed. In other words, we can use the same argument as in Theorem 8.7 and Lemma 8.13: Again, after the merger step either the upper or the lower half is clean, the other is bitonic. The bitonic sequence sorters complete sorting.
Remarks:
- How do you sort \( n \) values when you are able to merge two sorted sequences of size \( n/2 \)? Piece of cake, just apply the merger recursively.
Algorithm 8.18 Batcher’s “Bitonic” Sorting Network
1: A batcher sorting network of width \( n \) consists of two batcher sorting networks of width \( n/2 \) followed by a merging network of width \( n \). (See Figure 8.19.)
2: A batcher sorting network of width 1 is empty.
Theorem 8.20. A sorting network (Algorithm 8.18) sorts an arbitrary sequence of \( n \) values. It has depth \( O(\log^2 n) \).
Proof. Correctness is immediate: at recursive stage \( k \) (\( k = 1, 2, 3, \ldots, \log n \)) we merge \( 2^k \) sorted sequences into \( 2^{k-1} \) sorted sequences. The depth \( d(n) \) of the sorting network of width \( n \) is the depth of a sorting network of width \( n/2 \)
plus the depth $m(n)$ of a merging network of width $n$. The depth of a sorting network of width 1 is 0 since the network is empty. Since a merging network of width $n$ has the same depth as a bitonic sequence sorter of width $n$, we know by Lemma 8.15 that $m(n) = \log n$. This gives a recursive formula for $d(n)$ which solves to $d(n) = \frac{1}{2} \log^2 n + \frac{1}{2} \log n$. $\square$
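Batcher's construction can be written down as an explicit list of comparators \((i, j)\), meaning "compare wire \(i\) with wire \(j\); the minimum stays on \(i\)". The sketch below (helper names are ours) builds the network of Algorithm 8.18 recursively; by Lemma 8.4 it suffices to verify it on all 0/1 inputs:

```python
def bitonic_sorter_comps(lo, w):
    """Comparators of a bitonic sequence sorter on wires lo..lo+w-1."""
    if w <= 1:
        return []
    comps = [(lo + i, lo + i + w // 2) for i in range(w // 2)]  # half cleaner
    return (comps + bitonic_sorter_comps(lo, w // 2)
            + bitonic_sorter_comps(lo + w // 2, w // 2))

def merging_network_comps(lo, w):
    """Algorithm 8.16: merger (wire i vs wire w-1-i), then two bitonic
    sequence sorters of width w/2."""
    comps = [(lo + i, lo + w - 1 - i) for i in range(w // 2)]
    return (comps + bitonic_sorter_comps(lo, w // 2)
            + bitonic_sorter_comps(lo + w // 2, w // 2))

def batcher_comps(lo, w):
    """Algorithm 8.18: two half-width Batcher networks, then a merger."""
    if w <= 1:
        return []
    return (batcher_comps(lo, w // 2) + batcher_comps(lo + w // 2, w // 2)
            + merging_network_comps(lo, w))

def run(comps, values):
    """Feed values through the comparator list in depth order."""
    a = list(values)
    for i, j in comps:
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a
```

For width 8 the network has 24 comparators; exhaustively checking the \(2^8\) zero/one inputs certifies, via the 0-1 Sorting Lemma, that it sorts arbitrary inputs.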
Remarks:
- Simulating Batcher’s sorting network on an ordinary sequential computer takes time $O(n \log^2 n)$. As said, there are sequential sorting algorithms that sort in asymptotically optimal time $O(n \log n)$. So a natural question is whether there is a sorting network with depth $O(\log n)$. Such a network would have some remarkable advantages over sequential asymptotically optimal sorting algorithms such as heapsort. Apart from being highly parallel, it would be completely oblivious, and as such perfectly suited for a fast hardware solution. In 1983, Ajtai, Komlos, and Szemeredi presented a celebrated $O(\log n)$ depth sorting network. (Unlike Batcher’s sorting network the constant hidden in the big-$O$ of the “AKS” sorting network is too large to be practical, however.)
- It can be shown that Batcher’s sorting network and similarly others can be simulated by a Butterfly network and other hypercubic networks, see next chapter.
- What if a sorting network is asynchronous?!? Clearly, using a synchronizer we can still sort, but it is also possible to use it for something else. Check out the next section!
8.3 Counting Networks
In this section we address distributed counting, a distributed service which can for instance be used for load balancing.
**Definition 8.21** (Distributed Counting). A distributed counter is a variable that is common to all processors in a system and that supports an atomic test-and-increment operation. The operation delivers the system’s counter value to the requesting processor and increments it.
**Remarks:**
- A naive distributed counter stores the system’s counter value with a distinguished central node. When other nodes initiate the test-and-increment operation, they send a request message to the central node and in turn receive a reply message with the current counter value. However, with a large number of nodes operating on the distributed counter, the central processor will become a bottleneck. There will be a congestion of request messages at the central processor, in other words, the system will not scale.
- Is a scalable implementation (without any kind of bottleneck) of such a distributed counter possible, or is distributed counting a problem which is inherently centralized?!!
- Distributed counting could for instance be used to implement a load balancing infrastructure, i.e., by sending the job with counter value $i$ (modulo $n$) to server $i$ (out of $n$ possible servers).
**Definition 8.22** (Balancer). A balancer is an asynchronous device which forwards messages that arrive on the left side to the wires on the right, the first to the upper, the second to the lower, the third to the upper, and so on.
**Remarks:**
- In electronics, a balancer is called a flip-flop.
**Algorithm 8.23** Bitonic Counting Network.
1: Take Batcher’s bitonic sorting network of width $w$ and replace all the comparators with balancers.
2: When a node wants to count, it sends a message to an arbitrary input wire.
3: The message is then routed through the network, following the rules of the asynchronous balancers.
4: Each output wire is completed with a “mini-counter.”
5: The mini-counter of wire $k$ replies the value “$k + i \cdot w$” to the initiator of the $i^{th}$ message it receives (starting with $i = 0$).
**Definition 8.24** (Step Property). A sequence $y_0, y_1, \ldots, y_{w-1}$ is said to have the step property, if $0 \leq y_i - y_j \leq 1$, for any $i < j$.
Remarks:
- If the output wires have the step property, then with \( r \) requests, exactly the values \( 1, \ldots, r \) will be assigned by the mini-counters. All we need to show is that the counting network has the step property. For that we need some additional facts...
**Facts 8.25.** For a balancer, we denote the number of consumed messages on the \( i^{th} \) input wire by \( x_i \), \( i = 0, 1 \). Similarly, we denote the number of sent messages on the \( i^{th} \) output wire by \( y_i \), \( i = 0, 1 \). A balancer has these properties:
1. A balancer does not generate output-messages; that is, \( x_0 + x_1 \ge y_0 + y_1 \) in any state.
2. Every incoming message is eventually forwarded. In other words, if we are in a quiescent state (no message in transit), then \( x_0 + x_1 = y_0 + y_1 \).
3. The number of messages sent to the upper output wire is at most one higher than the number of messages sent to the lower output wire: in any state \( y_0 = \lceil (y_0 + y_1)/2 \rceil \) (thus \( y_1 = \lfloor (y_0 + y_1)/2 \rfloor \)).
**Facts 8.26.** If a sequence \( y_0, y_1, \ldots, y_{w-1} \) has the step property,
1. then all its subsequences have the step property.
2. then its even and odd subsequences satisfy
\[
\sum_{i=0}^{w/2-1} y_{2i} = \left\lceil \frac{1}{2} \sum_{i=0}^{w-1} y_i \right\rceil \quad \text{and} \quad \sum_{i=0}^{w/2-1} y_{2i+1} = \left\lfloor \frac{1}{2} \sum_{i=0}^{w-1} y_i \right\rfloor.
\]
**Facts 8.27.** If two sequences \( x_0, x_1, \ldots, x_{w-1} \) and \( y_0, y_1, \ldots, y_{w-1} \) have the step property,
1. and \( \sum_{i=0}^{w-1} x_i = \sum_{i=0}^{w-1} y_i \), then \( x_i = y_i \) for \( i = 0, \ldots, w - 1 \).
2. and \( \sum_{i=0}^{w-1} x_i = \sum_{i=0}^{w-1} y_i + 1 \), then there exists a unique \( j \) (\( j = 0, 1, \ldots, w - 1 \)) such that \( x_j = y_j + 1 \), and \( x_i = y_i \) for \( i = 0, \ldots, w - 1 \), \( i \neq j \).
Remarks:
- An alternative representation of Batcher’s network has been introduced in [AHS94]. It is isomorphic to Batcher’s network, and relies on a Merger Network \( M[w] \) which is defined inductively: \( M[w] \) consists of two \( M[w/2] \) networks (an upper and a lower one) whose output is fed to \( w/2 \) balancers/comparators. The upper network merges the even subsequence \( x_0, x_2, \ldots, x_{w-2} \), while the lower network merges the odd subsequence \( x_1, x_3, \ldots, x_{w-1} \). Call the outputs of these two \( M[w/2] \)’s \( z \) and \( z’ \) respectively. The final stage of the network combines \( z \) and \( z’ \) by sending each pair of wires \( z_i \) and \( z’_i \) into a balancer whose outputs yield \( y_{2i} \) and \( y_{2i+1} \).
- It is enough to prove that a merger network \( M[w] \) preserves the step property.
Lemma 8.28. Let \(M[w]\) be a merger network of width \(w\). In a quiescent state (i.e., there is no message in transit), if the inputs \(x_0, x_1, \ldots, x_{w/2-1}\) resp. \(x_{w/2}, x_{w/2+1}, \ldots, x_{w-1}\) have the step property, then the output \(y_0, y_1, \ldots, y_{w-1}\) has the step property.
Proof. By induction on the width \(w\).
For \(w = 2\): \(M[2]\) is a balancer and a balancer’s output has the step property (Fact 8.25.3).
For \(w > 2\): Let \(z\) resp. \(z'\) be the output of the upper respectively lower \(M[w/2]\) subnetwork. Since \(x_0, x_1, \ldots, x_{w/2-1}\) and \(x_{w/2}, x_{w/2+1}, \ldots, x_{w-1}\) both have the step property by assumption, their even and odd subsequences also have the step property (Fact 8.26.1). By induction hypothesis, the outputs of both \(M[w/2]\) subnetworks have the step property. Let \(Z := \sum_{i=0}^{w/2-1} z_i\) and \(Z' := \sum_{i=0}^{w/2-1} z'_i\). From Fact 8.26.2 we conclude that \(Z = \left\lceil \frac{1}{2}\sum_{i=0}^{w/2-1} x_i \right\rceil + \left\lfloor \frac{1}{2}\sum_{i=w/2}^{w-1} x_i \right\rfloor\) and \(Z' = \left\lfloor \frac{1}{2}\sum_{i=0}^{w/2-1} x_i \right\rfloor + \left\lceil \frac{1}{2}\sum_{i=w/2}^{w-1} x_i \right\rceil\).
Since \(\lceil a \rceil + \lfloor b \rfloor\) and \(\lfloor a \rfloor + \lceil b \rceil\) differ by at most 1, we know that \(Z\) and \(Z'\) differ by at most 1. If \(Z = Z'\), Fact 8.27.1 implies that \(z_i = z'_i\) for \(i = 0, \ldots, w/2 - 1\). Therefore, the output of \(M[w]\) is \(y_i = z_{\lfloor i/2 \rfloor}\) for \(i = 0, \ldots, w - 1\). Since \(z_0, \ldots, z_{w/2-1}\) has the step property, so does the output of \(M[w]\) and the lemma follows.
If \(Z\) and \(Z'\) differ by 1, Fact 8.27.2 implies that \(z_i = z'_i\) for \(i = 0, \ldots, w/2 - 1\), except that there is a unique \(j\) such that \(z_j\) and \(z'_j\) differ by 1. Let \(l := \min(z_j, z'_j)\). Then the output \(y_i\) (with \(i < 2j\)) is \(l + 1\), and the output \(y_i\) (with \(i > 2j + 1\)) is \(l\). The outputs \(y_{2j}\) and \(y_{2j+1}\) are balanced by the final balancer, resulting in \(y_{2j} = l + 1\) and \(y_{2j+1} = l\). Therefore \(M[w]\) preserves the step property. \(\square\)
A bitonic counting network is constructed to fulfill Lemma 8.28, i.e., the final output comes from a Merger whose upper and lower inputs are recursively merged. Therefore, the following theorem follows immediately.
Theorem 8.29 (Correctness). In a quiescent state, the \(w\) output wires of a bitonic counting network of width \(w\) have the step property.
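A sequential simulation makes Theorem 8.29 concrete. The sketch below (all names ours) reuses the comparator layout of Batcher's network, replaces each comparator by a balancer with a toggle bit, and routes messages one at a time, so the network is quiescent after every message:

```python
import random

def balancer_layout(lo, w):
    """Balancer positions = comparator positions of Batcher's network,
    listed in depth order along every wire."""
    if w <= 1:
        return []

    def bitonic(lo, w):
        if w <= 1:
            return []
        head = [(lo + i, lo + i + w // 2) for i in range(w // 2)]
        return head + bitonic(lo, w // 2) + bitonic(lo + w // 2, w // 2)

    merger = [(lo + i, lo + w - 1 - i) for i in range(w // 2)]
    merging = merger + bitonic(lo, w // 2) + bitonic(lo + w // 2, w // 2)
    return (balancer_layout(lo, w // 2)
            + balancer_layout(lo + w // 2, w // 2) + merging)

def count(w, requests, seed=0):
    """Route `requests` messages from random input wires (Algorithm 8.23)."""
    rng = random.Random(seed)
    balancers = balancer_layout(0, w)
    toggle = [0] * len(balancers)  # 0: next message leaves on the upper wire
    received = [0] * w             # messages received per mini-counter
    values = []
    for _ in range(requests):
        wire = rng.randrange(w)    # enter on an arbitrary input wire
        for b, (i, j) in enumerate(balancers):
            if wire in (i, j):
                wire = (i, j)[toggle[b]]
                toggle[b] ^= 1
        values.append(wire + received[wire] * w)  # mini-counter: k + i*w
        received[wire] += 1
    return received, values
```

In this sequential (hence always quiescent) execution the per-wire totals satisfy the step property after every message, and with 0-based wire numbering the mini-counters hand out exactly the values \(0, \ldots, r-1\) (the chapter's 1-based numbering gives \(1, \ldots, r\)).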
Remarks:
- Is every sorting network also a counting network? No. But surprisingly, the other direction is true!
Theorem 8.30 (Counting vs. Sorting). If a network is a counting network then it is also a sorting network, but not vice versa.
Proof. There are sorting networks that are not counting networks (e.g. odd/even sort, or insertion sort). For the other direction, let \(C\) be a counting network and \(I(C)\) be the isomorphic network, where every balancer is replaced by a comparator. Let \(I(C)\) have an arbitrary input of 0’s and 1’s; that is, some of the input wires have a 0, all others have a 1. There is a message at \(C\)’s \(i^{th}\) input wire if and only if \(I(C)\)’s \(i^{th}\) input wire is 0. Since \(C\) is a counting network, all messages are routed to the upper output wires. \(I(C)\) is isomorphic to \(C\), therefore a comparator in \(I(C)\) will receive a 0 on its upper (lower) wire if and only if the corresponding balancer receives a message on its upper (lower)
wire. Using an inductive argument, the 0’s and 1’s will be routed through \( I(C) \) such that all 0’s exit the network on the upper wires whereas all 1’s exit the network on the lower wires. Applying Lemma 8.4 shows that \( I(C) \) is a sorting network.
**Remarks:**
- We claimed that the counting network is correct. However, it is only correct in a quiescent state.
**Definition 8.31 (Linearizable).** A system is linearizable if the order of the values assigned reflects the real-time order in which they were requested. More formally, if there is a pair of operations \( o_1, o_2 \), where operation \( o_1 \) terminates before operation \( o_2 \) starts, and the logical order is “\( o_2 \) before \( o_1 \)”, then a distributed system is not linearizable.
**Lemma 8.32 (Linearizability).** The bitonic counting network is not linearizable.
**Proof.** Consider the bitonic counting network with width 4 in Figure 8.33: Assume that two \( inc \) operations were initiated and the corresponding messages entered the network on wires 0 and 2 (both in light gray color). After having passed the second resp. the first balancer, these traversing messages "fall asleep"; in other words, both messages take an unusually long time before they are received by the next balancer. Since we are in an asynchronous setting, this may be the case.

In the meantime, another \( inc \) operation (medium gray) is initiated and enters the network on the bottom wire. The message leaves the network on wire 2, and the \( inc \) operation is completed.
Strictly afterwards, another \( inc \) operation (dark gray) is initiated and enters the network on wire 1. After having passed all balancers, the message will leave the network on wire 0. Finally (and not depicted in Figure 8.33), the two light gray messages reach the next balancer and will eventually leave the network on wires 1 resp. 3. Because the dark gray and the medium gray operations conflict with Definition 8.31, the bitonic counting network is not linearizable.
Remarks:
- Note that the example in Figure 8.33 behaves correctly in the quiescent state: Finally, exactly the values 0, 1, 2, 3 are allotted.
- It has been shown that linearizability comes at a high price (the depth grows linearly with the width).
Chapter Notes
The technique used for the famous lower bound of comparison-based sequential sorting first appeared in [FJ59]. Comprehensive introductions to the vast field of sorting can certainly be found in [Knu73]. Knuth also presents the 0/1 principle in the context of sorting networks, supposedly as a special case of a theorem for decision trees of W. G. Bouricius, and includes a historic overview of sorting network research.
Using a rather complicated proof not based on the 0/1 principle, [Hab72] first presented and analyzed Odd/Even sort on arrays. Shearsort for grids first appeared in [SSS86] as a sorting algorithm both easy to implement and to prove correct. Later it was generalized to meshes with higher dimension in [SS89]. A bubble sort based algorithm is presented in [SI86]; it takes time $O(\sqrt{n} \log n)$, but is fast in practice. Nevertheless, already [TK77] presented an asymptotically optimal algorithm for grid network which runs in $3n + O(n^{2/3} \log n)$ rounds for an $n \times n$ grid. A simpler algorithm was later found by [SS86] using $3n + O(n^{3/4})$ rounds.
Batcher presents his famous $O(\log^2 n)$ depth sorting network in [Bat68]. It took until [AKS83] to find a sorting network with asymptotically optimal depth $O(\log n)$. Unfortunately, the constants hidden in the big-O-notation render it rather impractical.
The notion of counting networks was introduced in [AHS91], and shortly afterward the notion of linearizability was studied by [HSW91]. Follow-up work in [AHS94] presents bitonic counting networks and studies contention in the counting network. An overview of research on counting networks can be found in [BH98].
CogniCrypt: Supporting Developers in using Cryptography
Stefan Krüger*, Sarah Nadi†, Michael Reif‡, Karim Ali‡, Mira Mezini‡, Eric Bodden*, Florian Göpfert†, Felix Güntner†, Christian Weinert†, Daniel Demmler‡, Ram Kamath‡
*Paderborn University, {firstname.lastname}@uni-paderborn.de
†University of Alberta, {nadi, karim.ali}@ualberta.ca
‡Technische Universität Darmstadt, {reif, mezini, guentner, weinert, demmler}@cs.tu-darmstadt.de, fgoepfert@cdc.informatik.tu-darmstadt.de, aramachandrakamath@gmail.com
Abstract—Previous research suggests that developers often struggle using low-level cryptographic APIs and, as a result, produce insecure code. When asked, developers desire, among other things, more tool support to help them use such APIs. In this paper, we present CogniCrypt, a tool that supports developers with the use of cryptographic APIs. CogniCrypt assists the developer in two ways. First, for a number of common cryptographic tasks, CogniCrypt generates code that implements the respective task in a secure manner. Currently, CogniCrypt supports tasks such as data encryption, communication over secure channels, and long-term archiving. Second, CogniCrypt continuously runs static analyses in the background to ensure a secure integration of the generated code into the developer’s workspace. This video demo showcases the main features of CogniCrypt: [youtube.com/watch?v=JUq5mRHfAWY](https://youtube.com/watch?v=JUq5mRHfAWY).
Keywords—Cryptography, Code Generation, Variability Modeling, Code Analysis
I. INTRODUCTION
Cryptography is the primary means of protecting sensitive data on digital devices from eavesdropping or forgery. For this protection to be effective, the used cryptographic algorithms must be conceptually secure, implemented correctly, and used securely in the respective application. Despite the availability of mature and (still-)secure-to-use cryptographic algorithms, several studies have indicated that application developers struggle with using the Application Programming Interfaces (APIs) of libraries that implement these algorithms. For example, Lazar et al. [14] investigated 269 cryptography-related vulnerabilities and found that only 17% are related to faulty implementations of algorithms, while 83% result from application developers misusing cryptographic APIs. Other studies suggest that approximately 90% of applications using cryptographic APIs contain at least one misuse [6, 9].
To investigate the reasons for this widespread misuse, we previously triangulated the results of four empirical studies, one of which was a survey of Java developers who previously used cryptographic APIs [19]. Our results show that the majority of participants found the respective APIs hard to use. When asked what would help them use these APIs, they suggested better documentation, different API designs, and additional tool support. In terms of API design, participants used terms like use cases, task-based, and high-level. These suggestions indicate that developers struggle with the fact that cryptographic APIs reside on the low level of cryptographic algorithms instead of being more functionality-oriented APIs that provide convenient methods such as `encryptFile()`. When it comes to tool support, participants suggested tools like a CryptoDebugger, analysis tools that find misuses and provide code templates or generate code for common functionality. These suggestions indicate that participants not only lack the domain knowledge, but also struggle with APIs themselves and how to use them.
In this paper, we present CogniCrypt, an Eclipse plugin that enables easier use of cryptographic APIs. In previous work, we outlined an early vision for CogniCrypt [2]. We have now implemented a prototype of the tool that currently supports developers in the following ways:
- Generate secure implementations for common programming tasks that involve cryptography (e.g., data encryption).
- Analyze developer code and generate alerts for misuses of cryptographic APIs.
II. COGNICRYPT IN A NUTSHELL
We will use the cryptographic task “Encrypt data using a secret key” (ENC) as a running example throughout the paper. When an application developer - CogniCrypt’s user - selects this task, CogniCrypt generates code that implements a simple encryption using Java’s Cipher API.
Figures 1 and 2 illustrate the steps that a user must follow in CogniCrypt for ENC. First, to trigger the code generation, the user clicks on the CogniCrypt button in Eclipse’s tool bar. The dialog shown in Figure 1 then pops up and the user has to select both the target project for code generation and ENC from the list of supported tasks. The user then answers a few high-level questions that do not require deep cryptography knowledge. The answers to these questions help CogniCrypt generate the appropriate source code. One such question for ENC is “Should your key be derived from a user-specified password?”. Once the user has answered all questions for ENC, CogniCrypt presents the user with a list of algorithms (or combinations thereof) in different configurations and auto-selects the most secure one as shown in Figure 2. The user may change the selection through a drop-down menu. For the more keen user, CogniCrypt provides more detailed information about the selection in the same window. After the user hits
the finish button, CogniCrypt generates two code artefacts into the user’s Java project: the code that implements ENC into a package named crypto, and a method that demonstrates how the user may use the generated code in their own project.
In addition to code generation, CogniCrypt notifies the user of misuses of cryptographic APIs by running a static-analysis suite automatically every time the code is compiled. CogniCrypt generates an Eclipse error marker for each detected misuse of the supported cryptographic APIs. Figure 3 depicts a warning issued by CogniCrypt when the user changes the generated code for ENC to use the insecure encryption algorithm DES.
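To make the flagged pattern concrete, here is a minimal, self-contained Java sketch (class name invented for illustration) contrasting the insecure call with one common secure alternative; CogniCrypt's actual generated code may differ:

```java
import javax.crypto.Cipher;

public class DesMisuseDemo {
    public static void main(String[] args) throws Exception {
        // Insecure: DES uses a 56-bit key and is considered broken.
        // A misuse analysis like CogniCrypt's flags this transformation.
        Cipher insecure = Cipher.getInstance("DES");

        // One secure alternative: AES in an authenticated mode (GCM).
        Cipher secure = Cipher.getInstance("AES/GCM/NoPadding");

        System.out.println(insecure.getAlgorithm());
        System.out.println(secure.getAlgorithm());
    }
}
```

Because the insecure and secure variants are syntactically identical calls, only a semantic check on the transformation string (as CogniCrypt performs) can tell them apart.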
III. GENERATING SECURE CODE
CogniCrypt’s code-generation component enables it to generate secure implementations for several cryptographic programming tasks. Each task in CogniCrypt is specified using three artefacts: a model describing the involved algorithms, the task’s implementation, which CogniCrypt may provide the user with, and a code snippet demonstrating its usage. Figure 5 illustrates the general workflow and necessary artefacts. We will refer to different parts of the elements in Figure 5 using the circled numbers shown in the figure (e.g., 1).
A. Modelling Cryptographic Algorithms in Clafer
The cryptography domain comprises a wide range of algorithms, each of which can be configured in multiple ways. This variability becomes an issue for developers with little or no experience in cryptography, because not all algorithms and configurations are secure to use in every context. Therefore, developers have to figure out which algorithm may be used in which situation. To help developers close this knowledge gap, we systematize the domain knowledge by means of a variability model 1 using Clafer, a variability modelling language that facilitates a mix of class and feature modelling. Clafer supports two constraint solvers that can instantiate a model, i.e., generate instances of the model that satisfy all its constraints. Any element in Clafer is called a clafer and can either be abstract or concrete. The difference is that the instance generator does not create instances for abstract clafers. In prior work, we describe our modelling approach and discuss the trade-offs of using other variability modelling languages.
Figure 4 shows a simplified version of the Clafer model for ENC in CogniCrypt. The model defines the abstract clafer Algorithm in Line [1]. Lines [2-4] define the attributes of Algorithm (name, security, and performance). The model defines three abstract clafers that extend Algorithm (i.e., inherit its attributes): SymmetricBlockCipher (Line [7]), KeyDerivationAlgorithm (Line [15]), and Digest (Line [24]). Each extension defines additional attributes. Moreover, SymmetricBlockCipher defines two constraints (Lines [11-12]). The model then defines concrete clafers for all ENC-related cryptographic algorithms by extending the three Algorithm-type clafers (Lines [26-56]). Finally, the clafer definition for ENC (Lines [61-64]) includes all its necessary cryptographic algorithms, such as a symmetric block cipher (Line [64]). If the user decides to derive the key from a password, the definition is updated to require a key derivation algorithm.
B. Configuring a Solution
The generated code for each task is specified as an XSL-based code template 5 to enable code generation by an XSL transformation. Figure 6 depicts an excerpt of the stylesheet representing part of its implementation. The code implements a simple encryption bootstrapped with an initialization vector. Since each task may be implemented in multiple ways, the stylesheet may contain one or more variability points, that is, statements that depend on the configuration of the task. ENC has one variability point: the argument to the call Cipher.getInstance() (Lines [7]-[8]). The class Cipher is used for encrypting data, and the argument to getInstance() specifies the encryption algorithm, block mode, and padding scheme of the encryption [21, Section on class Cipher].
To generate valid code, CogniCrypt resolves this variability by asking the user questions to help it configure a solution 2. For the currently supported tasks, the authors have developed
Fig. 4: Clafer model for the password-based encryption (ENC) programming task.

Fig. 5: The workflow of code generation in CogniCrypt: variability model + user input → constraint solver → instance as XML → XSL transformer (SAXON) with the XSL stylesheet → generated Java code.
these questions. The task selection determines the parts of the stylesheet that are relevant to the user and the questions that will be presented to the user. For example, for ENC, the user may choose to derive the key from a password. If they do so, CogniCrypt automatically modifies the Clafer model such that a key derivation algorithm is also required to implement the task, not only a symmetric block cipher as shown in Figure 4.
In general, each answer either adds constraints to the Clafer model (e.g., setting a security level of the cipher to high or very high) or influences the generated code directly (e.g., by adding or removing a call or changing a parameter value).
After answering all questions, CogniCrypt runs the constraint solver Choco on the Clafer model to generate all its instances, one of which is a version of the model with all variability resolved. For the ENC model in Figure 4, there are at least three distinct instances with different values (128, 192, 256) for the keysize attribute of AES. The result is a list of combinations of algorithms in different configurations that CogniCrypt shows to the user in the final dialog sorted by security level in descending order. CogniCrypt automatically selects the first solution, but the user can change the selection.
C. Generating Code
CogniCrypt stores the selected configuration in an XML file. The code is then generated into the user’s project under the package crypto by performing an XSL transformation using SAXON bootstrapped with the stylesheet and the XML configuration file. Two code artefacts are generated: the code implementing the task, and a method that demonstrates how the developer can use the implementation. This method is usually generated into an extra class in the same package as the generated implementation code. In case a Java file from the target project is currently opened in the editor, CogniCrypt generates the method into this file. It also ensures the method is generated within the respective class, but outside existing methods. With the XSL stylesheet in Figure 6 and the configuration from Figure 2 as inputs, CogniCrypt generates the class Enc in Figure 7. The developer may choose to keep the generated code as is or integrate it into their application code in a different way.
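As a rough illustration only (the actual generated class Enc in Figure 7 differs), the following self-contained sketch shows the general shape of such IV-bootstrapped symmetric encryption code on top of the JCA; all names and parameter choices here are assumptions, not CogniCrypt's output:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class Enc {
    // AES-GCM encryption bootstrapped with a fresh random IV per message.
    public static byte[] encrypt(byte[] plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        // Prepend the IV so decryption can recover it.
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static byte[] decrypt(byte[] data, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, Arrays.copyOfRange(data, 0, 12)));
        return c.doFinal(Arrays.copyOfRange(data, 12, data.length));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] ct = encrypt("secret data".getBytes("UTF-8"), key);
        System.out.println(new String(decrypt(ct, key), "UTF-8"));
    }
}
```

A developer could call Enc.encrypt directly, mirroring the demonstration method CogniCrypt generates alongside the implementation.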
IV. Enforcing Secure Implementations
In addition to generating code, CogniCrypt continuously applies a suite of static analyses to the developer’s project in the background. These analyses ensure that all usages of cryptographic APIs remain secure, even when the developer modifies the generated code for better integration into their project or to add some functionality. Moreover, if the developer uses the cryptographic APIs directly (i.e., without using the code-generation component), running the analysis suite ensures secure usage of the APIs.
To statically analyze the underlying Eclipse project, CogniCrypt uses TS4J, a fluent Java interface for defining and evaluating typestate analyses. A typestate analysis helps CogniCrypt determine the set of allowed operations in a specific context. TS4J is implemented as an Eclipse plugin on top of the static-analysis framework Soot. CogniCrypt reports misuses by generating error markers directly on the left gutter within the Eclipse IDE, as shown in Figure 3.
Figure 7 depicts one of the TS4J rules used in CogniCrypt to detect the usage of the outdated encryption algorithm DES...
A. Supported Tasks
1) Symmetric Encryption:
a) Description: encryption of data as a byte array.
b) Implementation: implementations of symmetric block ciphers in SunJCE Provider [23] such as AES, Triple-DES.
c) User Decisions: CogniCrypt asks the user whether the application encrypts large chunks of data. If so, encryption is performed iteratively on fractions of the plaintext instead. Subsequently, it allows the user to decide whether the encryption key should be derived from a password, or created by traditional means of a key generator.
2) Password Storage:
a) Description: transformation of passwords such that they can be securely stored (i.e., hashing and salting).
b) Implementation: implementations of key derivation functions in SunJCE Provider [23] such as PBKDF2.
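A hedged sketch of the kind of PBKDF2-based code this task produces (class name and parameter values are illustrative defaults, not CogniCrypt's actual choices):

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStoreDemo {
    public static void main(String[] args) throws Exception {
        // A random per-password salt defeats precomputed (rainbow-table) attacks.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // PBKDF2 with HMAC-SHA256, 65536 iterations, 256-bit output.
        // Store salt + hash; never the plain password.
        PBEKeySpec spec = new PBEKeySpec("s3cret".toCharArray(), salt, 65536, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec).getEncoded();
        System.out.println(hash.length);
    }
}
```

Verification then re-derives the hash from the candidate password and the stored salt and compares in constant time.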
3) Secure Communication:
a) Description: a cryptographic channel based on the Transport Layer Security (TLS) protocol [8] for securely transporting data from one endpoint to another. The channel ensures confidentiality and integrity of the communicated data as well as authenticity of the communication partners.
b) Implementation: based on the Java TLS implementation in the Java Secure Socket Extension (JSSE) [22].
c) User Decisions: CogniCrypt first asks the user whether they wish to implement the client or the server side of a connection, requesting the corresponding internet-address. For client implementations, if the server is already known, CogniCrypt offers to perform a trial connection to test connectivity and cryptographic parameters. CogniCrypt then allows the user to select the desired security level, providing a safe default option for optimal cryptographic protection. In particular, CogniCrypt disables insecure cryptographic parameters (i.e., cipher suites). This feature is crucial, because TLS has a vast number of parameter choices, and, in principle, allows to configure insecure cipher suites that, for example, omit encryption or enable known attacks like RC4 weaknesses [1].
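To illustrate the kind of parameter hardening described (a sketch with an invented class name; CogniCrypt's generated code differs), JSSE lets code restrict the enabled protocol versions on a socket before connecting:

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsHardeningDemo {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Create an unconnected socket so it can be configured first.
        SSLSocket socket = (SSLSocket) factory.createSocket();
        // Restrict the handshake to TLS 1.2; older, weaker protocol
        // versions are thereby disabled for this socket.
        socket.setEnabledProtocols(new String[] {"TLSv1.2"});
        System.out.println(String.join(",", socket.getEnabledProtocols()));
    }
}
```

An analogous call, setEnabledCipherSuites, narrows the cipher-suite list in the same way.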
4) Secure Long-Term Storage:
a) Description: MoPS [28] ensures the integrity and authenticity of documents over long periods of time, since classical protection schemes (e.g., digital signatures) do not provide everlasting security. MoPS allows users to create customized long-term protection schemes by combining reusable...
components extracted from other existing solutions, improving performance and gaining flexibility.
b) Implementation: The reference implementation of MoPS by Weinert et al. [28] has a RESTful API for configuring and maintaining file collections on remote systems. Using the API without proper guidance, the user may end up with a configuration that uses outdated cryptographic primitives (e.g., SHA-1), performs poorly due to improper component selection, or relies on inappropriate trust assumptions.
c) User Decisions: CogniCrypt asks the user at most four high-level questions (e.g., “Do you plan to add new files to your collection frequently?”). These questions identify the required features and the trust assumptions the user is willing to make. The Clafer model then translates the user choices into the most suitable component selection based on the recommendations of Weinert et al. [28]. Finally, CogniCrypt generates glue code to configure the MoPS system accordingly and provide methods for securely storing files in the system.
5) Secure Multi-Party Computation:
a) Description: ABY [7] is a framework for mixed-protocol secure two-party computation (STC). It allows two parties to apply a function to their private inputs and reveal nothing but the output of the computation. ABY enables developers to implement STC applications by offering abstractions from the underlying protocols. Furthermore, ABY can securely convert between different protocol types, improving efficiency.
b) Implementation: ABY is written in C/C++ to achieve high efficiency for the underlying primitives (bit operations, symmetric encryption) and has been encapsulated in Java Native Interface (JNI) wrappers to be used by CogniCrypt.
c) User Decisions: CogniCrypt offers the user several STC example applications, e.g., computing the Euclidean Distance between private coordinates. The user can select different properties, depending on the deployment scenario. In the future, we plan to integrate custom applications as well.
B. Implementations of Cryptographic Algorithms
CogniCrypt mainly capitalizes on algorithm implementations from the Java Cryptography Architecture (JCA) [21]. For the first three tasks described above, we have not implemented any cryptographic algorithms ourselves, but merely accessed the existing ones through the JCA APIs. In the future, we would like cryptography experts to contribute new algorithm implementations to CogniCrypt to extend support for even the most novel of cryptographic schemes. The cryptography researchers among the authors have already started integrating an implementation of a novel public-key cryptographic algorithm.
Lindner and Peikert [16] present a new public-key encryption algorithm (LP11) based on the learning-with-errors problem. As a lattice-based primitive, it is currently believed to withstand attacks on classical and quantum computers, a property typically referred to as post-quantum security. For efficiency reasons, we implemented LP11 in C++ and integrated it into CogniCrypt using JNI. We made three methods of the C++ implementation available for Java: key generation, encryption, and decryption, and implemented the necessary JCA interfaces for encryption and key generation. Not only does this setup allow for an easy integration into CogniCrypt, but it also enables standalone usage of LP11.
Unfortunately, the interfaces provided by the JCA do not completely fit the properties of post-quantum encryption schemes. In particular, the interface provides two methods for key-pair generation: one bootstrapped with a key size, the other one allowing for more parameters to be included in the key generation. As one key size is not sufficient for LP11, our implementation only supports the latter method. Calling the incorrect method causes CogniCrypt to alert the developer; at runtime, the call throws an UnsupportedOperationException.
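The two JCA initialization styles mentioned above can be illustrated with a standard algorithm (RSA here, since LP11 itself is not shipped with the JDK; the class name is invented):

```java
import java.security.KeyPairGenerator;
import java.security.spec.RSAKeyGenParameterSpec;

public class KeyGenStylesDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");

        // Style 1: initialize with a key size only. A scheme such as LP11,
        // which needs several parameters, cannot support this method.
        kpg.initialize(2048);

        // Style 2: initialize with a full parameter spec. This is the
        // method a multi-parameter scheme's JCA wrapper would implement.
        kpg.initialize(new RSAKeyGenParameterSpec(2048, RSAKeyGenParameterSpec.F4));

        System.out.println(kpg.generateKeyPair().getPublic().getAlgorithm());
    }
}
```

For RSA both styles work; an LP11 wrapper would reject style 1, which is exactly the situation CogniCrypt's analysis warns about.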
VI. Related Work
We are not aware of any integrated tool that combines code generation and static analysis for misuses of Java cryptographic APIs. However, there has been a number of static analysis tools that detect misuses of cryptographic and other security APIs in Java [6, 9, 11, 20, 25]. Unlike CogniCrypt, these tools do not provide any IDE integration and have hard-coded checks. Additionally, CogniCrypt enables cryptography experts, who may not be experts in static analysis, to define new rules more easily through TS4J.
CogniCrypt generates task-based usage examples for Java cryptographic APIs. Although similar tools exist [5, 12, 13, 17], they rely on the mining of syntactically correct usages of the respective APIs, which is not a viable approach for cryptographic APIs for two reasons. First, many usages of cryptographic APIs that are syntactically correct are nonetheless insecure. Second, it appears that most usage examples of cryptographic APIs are insecure [6, 9, 14], making the mining of such usages a difficult endeavour.
VII. Conclusion
Cryptography can help secure sensitive data, but only if applications use cryptographic components securely. We have presented CogniCrypt, an Eclipse plugin that enables developers to securely integrate such components into their Java projects, especially if they have little experience with cryptography. CogniCrypt smoothly integrates into a developer’s workflow to generate secure code for cryptographic tasks and detect misuses of cryptographic APIs in their code.
For now, all tasks have been integrated by the authors. We plan to open-source CogniCrypt toward the end of 2017, and we encourage cryptography experts to integrate their own projects into it. We also plan to conduct a user study to evaluate whether CogniCrypt is capable of improving the security of the average developers’ code.
Acknowledgments
This work was funded by the DFG as part of projects P1, S4, S6, E3, and E1 within the CRC 1119 CROSSING as well as by the BMBF and the HMWK within CRISP. We would like to thank Mohammad Hassan Zahraee, Patrick Hill, André Sonntag, and Sneha Reddy for their work on CogniCrypt.
REFERENCES
A quick overview of the S4 class system
Hervé Pagès
hpages@fredhutch.org
June 2016
Outline
What is S4?
S4 from an end-user point of view
Implementing an S4 class (in 4 slides)
Extending an existing class
What else?
The S4 class system
- The *S4 class system* is a set of facilities provided in R for OO programming.
- Implemented in the *methods* package.
- On a fresh *R* session:
```R
> sessionInfo()
...
attached base packages:
[1] stats graphics grDevices utils datasets
[6] methods base
```
- R also supports an older class system: the *S3 class system*.
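For contrast, a minimal S3 sketch (class and object names invented for illustration): an S3 class is just an attribute on an object, and dispatch follows a naming convention rather than formal method registration.

```r
# S3: no formal class definition; the class is just an attribute.
p <- list(x = 1, y = 2)
class(p) <- "Point"

# S3 dispatch: a method is any function named <generic>.<class>.
print.Point <- function(x, ...) cat("Point at (", x$x, ",", x$y, ")\n")
print(p)  # dispatches to print.Point
```

S4 replaces both conventions with explicit setClass / setMethod calls, as shown later in this deck.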
A different world
The syntax
```r
> foo(x, ...)
```

not:

```r
> x.foo(...)
```
like in other OO programming languages.
The central concepts
- The core components: *classes*¹, *generic functions* and *methods*
- The glue: *method dispatch* (supports *simple* and *multiple* dispatch)

¹ also called *formal classes*, to distinguish them from the S3 classes aka *old style classes*.
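A minimal sketch of multiple dispatch (all class and generic names invented for illustration): the method is selected on the classes of *both* arguments, not just the first.

```r
library(methods)

setGeneric("overlaps", function(x, y) standardGeneric("overlaps"))

setClass("Interval", slots = c(start = "numeric", end = "numeric"))
setClass("Point",    slots = c(pos = "numeric"))

# One method per signature; dispatch picks the best match.
setMethod("overlaps", signature("Interval", "Interval"),
          function(x, y) x@start <= y@end && y@start <= x@end)
setMethod("overlaps", signature("Interval", "Point"),
          function(x, y) x@start <= y@pos && y@pos <= x@end)

i <- new("Interval", start = 1, end = 5)
p <- new("Point", pos = 3)
overlaps(i, p)  # dispatched on c("Interval", "Point")
```

Simple dispatch is the special case where only the first argument's class is consulted.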
The result
```r
> ls('package:methods')
[1] "addNextMethod" "allGenerics"
[3] "allNames" "Arith"
[5] "as" "as<-"
[7] "asMethodDefinition" "assignClassDef"
... ...
[211] "testVirtual" "traceOff"
[213] "traceOn" "tryNew"
[215] "unRematchDefinition" "validObject"
[217] "validSlotNames"
```
- Rich, complex, can be intimidating
- The classes and methods we implement in our packages can be hard to document, especially when the class hierarchy is complicated and multiple dispatch is used.
Heavily used. In BioC 3.3: 3158 classes and 22511 methods defined in 609 packages! (out of 1211 software packages)
Top 10: 128 classes in ChemmineOB, 98 in flowCore, 79 in IRanges, 68 in rsbml, 61 in ShortRead, 58 in Biostrings, 51 in rtracklayer, 50 in oligoClasses, 45 in flowUtils, and 40 in BaseSpaceR.
For the end-user: it's mostly transparent. But when something goes wrong, error messages issued by the S4 class system can be hard to understand. Also it can be hard to find the documentation for a specific method.
Most Bioconductor packages use only a small subset of the S4 capabilities (covers 99.99% of our needs)
S4 from an end-user point of view
Where do S4 objects come from?
From a dataset
```r
> library(graph)
> data(apopGraph)
> apopGraph
A graphNEL graph with directed edges
Number of Nodes = 50
Number of Edges = 59
```
From using an object constructor function
```r
> library(IRanges)
> IRanges(start=c(101, 25), end=c(110, 80))
IRanges object with 2 ranges and 0 metadata columns:
      start end width
  [1]   101 110    10
  [2]    25  80    56
```
From a coercion
```r
> library(Matrix)
> m <- matrix(3:-4, nrow=2)
> as(m, "Matrix")
2 x 4 Matrix of class "dgeMatrix"
[1,] 3 1 -1 -3
[2,] 2 0 -2 -4
```
From using a specialized high-level constructor
```r
> library(GenomicFeatures)
> makeTxDbFromUCSC("sacCer2", tablename="ensGene")
TxDb object:
# Db type: TxDb
# Supporting package: GenomicFeatures
# Data source: UCSC
# Genome: sacCer2
# Organism: Saccharomyces cerevisiae
# Taxonomy ID: 4932
# UCSC Table: ensGene
# UCSC Track: Ensembl Genes
...
```
From using a high-level I/O function
```r
> library(ShortRead)
> path_to_my_data <- system.file(
+ package="ShortRead",
+ "extdata", "Data", "C1-36Firecrest", "Bustard", "GERALD")
> lane1 <- readFastq(path_to_my_data, pattern="s_1_sequence.txt")
> lane1
class: ShortReadQ
length: 256 reads; width: 36 cycles
```
Inside another object
```r
> sread(lane1)
  A DNAStringSet instance of length 256
        width seq
    [1]    36 GGACTTTTGTAGGATACCCCTCGCTTTTCTCCTCTCTGT
    [2]    36 GATTCTTACCTATTAGTGTTGGAACAGCATCGGAC
    [3]    36 GCGGTGGTCTCATGTTATCAAATATCAATTTGGGT
    [4]    36 GTTACCATGATGTTATTTCCTCATTGGAGGTAAAA
    [5]    36 GTATGTTTCTCATGCTTTACTACCTTCTGGTTCCGACTA
    ...   ... ...
  [255]    36 GTTTAGATATGAGTCACATTTTGTTCATGGTAGAGT
  [256]    36 GTTTAACAGACACCTAAAGCTACATCGTCAACGTTA
```
How to manipulate S4 objects?
Low-level: getters and setters
```r
> ir <- IRanges(start=c(101, 25), end=c(110, 80))
> width(ir)
[1] 10 56
> width(ir) <- width(ir) - 5
> ir
IRanges object with 2 ranges and 0 metadata columns:
      start end width
  [1]   101 105     5
  [2]    25  75    51
```
High-level: plenty of specialized methods
```r
> qa1 <- qa(lane1, lane="lane1")
> class(qa1)
[1] "ShortReadQQA"
attr(,"package")
[1] "ShortRead"
```
How to find the right man page?
- `class?graphNEL` or equivalently `` ?`graphNEL-class` `` for accessing the man page of a class
- `?qa` for accessing the man page of a generic function
- The man page for a generic might also document some or all of the methods for this generic. The See Also: section might give a clue. Also using `showMethods()` can be useful:
```r
> showMethods("qa")
Function: qa (package ShortRead)
dirPath="ShortReadQ"
dirPath="SolexaPath"
dirPath="character"
dirPath="list"
```
- `` ?`qa,ShortReadQ-method` `` to access the man page for a particular method (might be the same man page as for the generic)
- In doubt: `??qa` will search the man pages of all the installed packages and return the list of man pages that contain the string qa
Inspecting objects and discovering methods
- `class()` and `showClass()`
```r
> class(lane1)
[1] "ShortReadQ"
attr(,"package")
[1] "ShortRead"
> showClass("ShortReadQ")
Class "ShortReadQ" [package "ShortRead"]
Slots:
Name: quality sread id
Class: QualityScore DNAStringSet BStringSet
Extends:
Class "ShortRead", directly
Class ".ShortReadBase", by class "ShortRead", distance 2
Known Subclasses: "AlignedRead"
```
- `str()` for compact display of the content of an object
- `showMethods()` to discover methods
- `selectMethod()` to see the code
Implementing an S4 class (in 4 slides)
Class definition and constructor
Class definition
```r
> setClass("SNPLocations",
+ slots=c(
+ genome="character", # a single string
+ snpid="character", # a character vector of length N
+ chrom="character", # a character vector of length N
+ pos="integer" # an integer vector of length N
+ )
+ )
```
Constructor
```r
> SNPLocations <- function(genome, snpid, chrom, pos)
+ new("SNPLocations", genome=genome, snpid=snpid, chrom=chrom, pos=pos)
> snplocs <- SNPLocations("hg19",
+ c("rs0001", "rs0002"),
+ c("chr1", "chrX"),
+ c(224033L, 1266886L))
```
Defining the `length` method
```r
> setMethod("length", "SNPLocations", function(x) length(x@snpid))
> length(snplocs) # just testing
[1] 2
```
Defining the slot getters
```r
> setGeneric("genome", function(x) standardGeneric("genome"))
> setMethod("genome", "SNPLocations", function(x) x@genome)
> setGeneric("snpid", function(x) standardGeneric("snpid"))
> setMethod("snpid", "SNPLocations", function(x) x@snpid)
> setGeneric("chrom", function(x) standardGeneric("chrom"))
> setMethod("chrom", "SNPLocations", function(x) x@chrom)
> setGeneric("pos", function(x) standardGeneric("pos"))
> setMethod("pos", "SNPLocations", function(x) x@pos)
> genome(snplocs) # just testing
[1] "hg19"
> snpid(snplocs) # just testing
[1] "rs0001" "rs0002"
```
Defining the *show* method
```r
> setMethod("show", "SNPLocations",
+ function(object)
+ cat(class(object), "instance with", length(object),
+ "SNPs on genome", genome(object), "\n")
+ )
> snplocs # just testing
SNPLocations instance with 2 SNPs on genome hg19
```
Defining the *validity* method
```r
> setValidity("SNPLocations",
+ function(object) {
+ if (!is.character(genome(object)) ||
+ length(genome(object)) != 1 || is.na(genome(object)))
+ return("'genome' slot must be a single string")
+ slot_lengths <- c(length(snpid(object)),
+ length(chrom(object)),
+ length(pos(object)))
+ if (length(unique(slot_lengths)) != 1)
+ return("lengths of slots 'snpid', 'chrom' and 'pos' differ")
+ TRUE
+ }
+ )
> snplocs@chrom <- LETTERS[1:3] # a very bad idea!
> validObject(snplocs)
Error in validObject(snplocs) :
invalid class "SNPLocations" object: lengths of slots 'snpid', 'chrom'
and 'pos' differ
```
Defining slot setters
```r
> setGeneric("chrom<-", function(x, value) standardGeneric("chrom<-"))
> setReplaceMethod("chrom", "SNPLocations",
+ function(x, value) {x@chrom <- value; validObject(x); x})
> chrom(snplocs) <- LETTERS[1:2] # repair currently broken object
> chrom(snplocs) <- LETTERS[1:3] # try to break it again
Error in validObject(x) :
invalid class "SNPLocations" object: lengths of slots 'snpid', 'chrom' and 'pos' differ
```
Defining a coercion method
```r
> setAs("SNPLocations", "data.frame",
+ function(from)
+ data.frame(snpid=snpid(from), chrom=chrom(from), pos=pos(from))
+ )
> as(snplocs, "data.frame") # testing
snpid chrom pos
1 rs0001 A 224033
2 rs0002 B 1266886
```
Outline
What is S4?
S4 from an end-user point of view
Implementing an S4 class (in 4 slides)
Extending an existing class
What else?
Slot inheritance
▶ Most of the time (but not always), the child class will have additional slots:
```r
> setClass("AnnotatedSNPs",
+ contains="SNPLocations",
+ slots=c(
+ geneid="character" # a character vector of length N
+ )
+ )
```
▶ The slots from the parent class are inherited:
```r
> showClass("AnnotatedSNPs")
Class "AnnotatedSNPs" [in ".GlobalEnv"]
Slots:
Name: geneid genome snpid chrom pos
Class: character character character character integer
Extends: "SNPLocations"
```
▶ Constructor:
```r
> AnnotatedSNPs <- function(genome, snpid, chrom, pos, geneid)
+ {
+ new("AnnotatedSNPs",
+ SNPLocations(genome, snpid, chrom, pos),
+ geneid=geneid)
+ }
```
Method inheritance
Let’s create an AnnotatedSNPs object:
```r
> snps <- AnnotatedSNPs("hg19",
+ c("rs0001", "rs0002"),
+ c("chr1", "chrX"),
+ c(224033L, 1266886L),
+ c("AAU1", "SXW-23"))
```
All the methods defined for SNPLocations objects work out-of-the-box:
```r
> snps
AnnotatedSNPs instance with 2 SNPs on genome hg19
```
But sometimes they don’t do the right thing:
```r
> as(snps, "data.frame") # the 'geneid' slot is ignored
   snpid chrom     pos
1 rs0001  chr1  224033
2 rs0002  chrX 1266886
```
- **Being a SNPLocations object vs being a SNPLocations instance:**
```r
> is(snps, "AnnotatedSNPs") # 'snps' is an AnnotatedSNPs object
[1] TRUE
> is(snps, "SNPLocations") # and is also a SNPLocations object
[1] TRUE
> class(snps) # but is *not* a SNPLocations *instance*
[1] "AnnotatedSNPs"
attr(,"package")
[1] ".GlobalEnv"
```
- **Method overriding:** for example we could define a `show` method for AnnotatedSNPs objects. `callNextMethod` can be used in that context to call the method defined for the parent class from within the method for the child class.
- **Automatic coercion method:**
```r
> as(snps, "SNPLocations")
SNPLocations instance with 2 SNPs on genome hg19
```
The **validity method** for AnnotatedSNPs objects only needs to validate what's not already validated by the **validity method** for SNPLocations objects:
```r
> setValidity("AnnotatedSNPs",
+ function(object) {
+ if (length(object@geneid) != length(object))
+ return("'geneid' slot must have the length of the object")
+ TRUE
+ }
+ )
```
In other words: before an AnnotatedSNPs object can be considered valid, it must first be a valid SNPLocations object.
Outline
What is S4?
S4 from an end-user point of view
Implementing an S4 class (in 4 slides)
Extending an existing class
What else?
Other important S4 features
- **Virtual** classes: equivalent to *abstract* classes in Java
- Class unions (see `?setClassUnion`)
- Multiple inheritance: a powerful feature that should be used with caution. If used inappropriately, can lead to a class hierarchy that is very hard to maintain
Resources
- The *Extending RangedSummarizedExperiment* section of the *SummarizedExperiment* vignette in the *SummarizedExperiment* package.
- Note: S4 is *not* covered in the *An Introduction to R* or *The R language definition* manuals\(^2\)
- The *Writing R Extensions* manual for details about integrating S4 classes to a package
- The *R Programming for Bioinformatics* book by Robert Gentleman\(^3\)
---
\(^2\)[http://cran.fhcrc.org/manuals.html](http://cran.fhcrc.org/manuals.html)
\(^3\)[http://bioconductor.org/help/publications/books/r-programming-for-bioinformatics/](http://bioconductor.org/help/publications/books/r-programming-for-bioinformatics/)
Factorbird - a Parameter Server Approach to Distributed Matrix Factorization
Sebastian Schelter
Technische Universität Berlin
@sscdotopen
Venu Satuluri
Twitter
@vsatuluri
Reza Bosagh Zadeh
Stanford University
@reza_zadeh
Abstract
We present ‘Factorbird’, a prototype of a parameter server approach for factorizing large matrices with Stochastic Gradient Descent-based algorithms. We designed Factorbird to meet the following desiderata: (a) scalability to tall and wide matrices with dozens of billions of non-zeros, (b) extensibility to different kinds of models and loss functions as long as they can be optimized using Stochastic Gradient Descent (SGD), and (c) adaptability to both batch and streaming scenarios. Factorbird uses a parameter server in order to scale to models that exceed the memory of an individual machine, and employs lock-free Hogwild!-style learning with a special partitioning scheme to drastically reduce conflicting updates. We also discuss other aspects of the design of our system such as how to efficiently grid search for hyperparameters at scale. We present experiments of Factorbird on a matrix built from a subset of Twitter’s interaction graph, consisting of more than 38 billion non-zeros and about 200 million rows and columns, which is to the best of our knowledge the largest matrix on which factorization results have been reported in the literature.
1 Introduction
In recent years, there is a growing trend to apply machine learning (ML) to massive and complex datasets [1]. For many applications, this leads to growing model sizes. A prime example are recommender systems where the size of the model is usually proportional to the number of users in the dataset [2, 9]. In cases with hundreds of millions of users this model exceeds the memory of an individual machine and distributed approaches to model training that leverage multiple machines become necessary. This leads to a set of challenges, e.g. how to partition the model among the participating machines and how to correctly execute learning algorithms in such a setting.
In this work, we describe ‘Factorbird’, a prototypical system that leverages a parameter server architecture [17] for learning large matrix factorization models for recommendation mining. After a short introduction to matrix factorization for recommender systems (Sec. 2), we describe the main challenges of our system, namely how to partition the model among the machines in the cluster and how to run Stochastic Gradient Descent (SGD) in parallel. We discuss the design decisions taken in Factorbird to overcome these challenges (Sec. 3). First, we partition one of the matrices to learn over dedicated machines in the cluster and co-partition the other one with the input data to localize a large number of updates and drastically reduce update conflicts and network traffic. Next, we apply a lock-less Hogwild!-style [14] execution scheme to efficiently run SGD in parallel. After giving insights into our software design and memory-efficient datastructures (Sec. 4), we describe techniques to assess model quality in Factorbird (Sec. 5). Our approach here is to grid search over a large number of hyperparameter combinations in a single training run. Finally, we present a set of experiments on user interaction data from twitter (Sec. 6). We run a scale-out experiment on a matrix built from a subset of Twitter’s interaction graph [20], with more than 38 billion non-zeros
*work done while at Twitter
and about 200 million rows and columns. To the best of our knowledge, this is the largest dataset on which matrix factorization experiments have been published so far.
2 Background: Latent Factor Models
We build upon latent factor models, which leverage a low-rank matrix factorization of interaction data \[2\] to characterize users and items by vectors of factors inferred from interaction patterns. These methods have been among the top-scoring contributions to the Netflix prize \[3\]. They factor a sparse partially-observed \( m \times n \) matrix \( M \), representing the interactions of \( m \) users with \( n \) items, into the product of two rank \( k \) factor matrices \( U \) and \( V \), such that their product \( UV \) approximates the observed parts of \( M \) and generalizes well to unobserved parts of \( M \) (c.f. Figure 1). A user \( i \) is associated to a factor vector \( u_i \in \mathbb{R}^k \) (a row of the \( m \times k \) matrix \( U \)), while an item \( j \) is associated to a factor vector \( v_j \in \mathbb{R}^k \) (a column of the \( n \times k \) matrix \( V \)). The factorization maps users and items onto joint latent factor space of low dimensionality \( k \), such that \( u_i^T v_j \) estimates the strength of interaction between user \( i \) and item \( j \).
The standard approach to learning a latent factor model for recommender systems is to minimize the regularized squared error over the predictions to the observed parts of the matrix \[2\]:
\[
\min_{U,V} \sum_{(i,j) \in M} (m_{ij} - u_i^T v_j)^2 + \lambda (\|u_i\|^2 + \|v_j\|^2)
\]
This approach is closely related to the Singular Value Decomposition (SVD) of the matrix, which gives an optimal rank \( k \) approximation (w.r.t. to squared error) of the original matrix using the top-\( k \) singular values and singular vectors. The key difference here is that the SVD is undefined when there are missing entries.
We adopt a more sophisticated approach outlined in \[2\]. First, we introduce a global bias term \( g \), which captures the average interaction strength between a user and an item in the dataset. Next, we introduce a user-specific bias term \( b^U_i \) for user \( i \) and an item-specific bias term \( b^V_j \) for item \( j \). These biases model how strongly the interactions of certain users and items tend to deviate from the global average. We substitute the dot product \( u_i^T v_j \) for predicting the strength of the interaction between a user \( i \) and an item \( j \) with a prediction function \( p(i,j) = g + b^U_i + b^V_j + u_i^T v_j \) that takes the bias terms into account. Furthermore, we introduce a function \( a(i,j) \) for determining the strength of the interaction between user \( i \) and item \( j \). This allows us to transform the observed entries \( m_{ij} \) of \( M \). Finally, we add a function \( w(i,j) \) for weighting the prediction error for interaction between user \( i \) and item \( j \). This function becomes useful when \( M \) consists of data from various sources with differing confidence in the observations. The loss function that we minimize for our latent factor model in Factorbird is the following:
\[
\min_{g,b^U,b^V,U,V} \frac{1}{2} \left( \sum_{(i,j) \in M} w(i,j)(p(i,j) - a(i,j))^2 \right) + \frac{\lambda}{2} \left( g^2 + \|b^U\|^2 + \|b^V\|^2 + \|U\|^2_F + \|V\|^2_F \right)
\]
We adopt a graph-specific terminology for the rest of the paper as well as for the APIs of our system, as we think about the majority of Twitter’s datasets in terms of large networks. Therefore,
we assume that $M$ represents a network (e.g. the network of user followings in Twitter), $i$ and $j$ reference vertices in this network (e.g. two users in this network) and the edges of the network correspond to observed entries of $M$, meaning that $a(i,j)$ depicts the weight of a directed edge between a vertex $i$ and a vertex $j$ in this network. Furthermore, we assume that the bias vectors $b^U$ and $b^V$ are stored in $U$ and $V$ (e.g., as first column and first row), to simplify notation.
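As a concrete reference, the prediction function \( p(i,j) \) and the loss above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not Factorbird code: passing \( a \) and \( w \) as plain Python functions and storing the biases in separate vectors are assumptions of this sketch.

```python
import numpy as np

def predict(g, bu, bv, U, V, i, j):
    # p(i,j) = g + b^U_i + b^V_j + u_i^T v_j
    return g + bu[i] + bv[j] + U[i] @ V[:, j]

def regularized_loss(edges, g, bu, bv, U, V, a, w, lam):
    # weighted squared error over the observed entries of M ...
    err = sum(w(i, j) * (predict(g, bu, bv, U, V, i, j) - a(i, j)) ** 2
              for i, j in edges)
    # ... plus L2 regularization of all model parameters
    reg = g ** 2 + bu @ bu + bv @ bv + np.sum(U * U) + np.sum(V * V)
    return 0.5 * err + 0.5 * lam * reg
```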
**Factorization via Stochastic Gradient Descent (SGD).** There are various popular techniques to compute a matrix factorization. Stochastic Gradient Descent (SGD) [2] randomly loops through all observed interactions, computes the error of the prediction for each interaction and modifies the model parameters in the opposite direction of the gradient. Another technique is Alternating Least Squares (ALS) [8, 16, 9], which repeatedly keeps one of the unknown matrices fixed, so that the other one can be optimally re-computed. We chose SGD as our optimization technique, as it is simple, provides fast convergence and is easy to adapt to different models and loss functions. Algorithm 1 shows the individual steps to conduct when learning the matrix factorization with SGD. First, we randomly initialize the factor matrices $U$ and $V$ (c.f. line 1). Next, we randomly pick an edge $(i,j)$ from the graph and compute the weighted error $e_{ij}$ of the prediction $p(i,j)$ against the actual edge strength $a(i,j)$ (c.f. lines 3 & 4). Next, we update the global bias term as well as the bias terms for $i$ and $j$ proportional to the prediction error $e_{ij}$, the learning rate $\eta$ and the regularization constant $\lambda$ (c.f. lines 5 to 7). We weight the regularization updates according to the out-degree $n_i$ of vertex $i$ (the number of observed entries in the $i$-th row of $M$) and the in-degree $n_j$ of $j$ (the number of observed entries in the $j$-th column of $M$). We update the factor vectors $u_i$ and $v_j$ analogously (c.f. lines 8 & 9). The whole process is repeated until convergence.
**Algorithm 1:** Matrix Factorization using SGD.
```
1 randomly initialize $U$ and $V$
2 while not converged do
3     randomly pick edge $(i,j)$
      // compute weighted prediction error
4     $e_{ij} \leftarrow w(i,j)(a(i,j) - p(i,j))$
      // update biases
5     $g \leftarrow g - \eta(e_{ij} + \lambda g)$
6     $b_i^U \leftarrow b_i^U - \eta \left( e_{ij} + \frac{\lambda}{n_i} b_i^U \right)$
7     $b_j^V \leftarrow b_j^V - \eta \left( e_{ij} + \frac{\lambda}{n_j} b_j^V \right)$
      // update factors
8     $u_i \leftarrow u_i - \eta \left( e_{ij} v_j + \frac{\lambda}{n_i} u_i \right)$
9     $v_j \leftarrow v_j - \eta \left( e_{ij} u_i + \frac{\lambda}{n_j} v_j \right)$
```
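A single-threaded NumPy sketch of the steps in Algorithm 1 might look as follows. This is a hypothetical illustration, not Factorbird code; the update signs follow plain gradient descent on the loss stated earlier, and `deg_out`/`deg_in` hold the per-vertex degrees \( n_i \) and \( n_j \).

```python
import numpy as np

def sgd_epoch(edges, g, bu, bv, U, V, a, w, deg_out, deg_in, eta, lam, rng):
    # One epoch of SGD: sample edges at random and update biases and factors.
    for _ in range(len(edges)):
        i, j = edges[rng.integers(len(edges))]       # randomly pick edge (i,j)
        pred = g + bu[i] + bv[j] + U[i] @ V[:, j]    # p(i,j)
        err = w(i, j) * (pred - a(i, j))             # weighted prediction error
        # update biases, regularization weighted by vertex degrees
        g -= eta * (err + lam * g)
        bu[i] -= eta * (err + lam / deg_out[i] * bu[i])
        bv[j] -= eta * (err + lam / deg_in[j] * bv[j])
        # update factors (use the pre-update u_i when updating v_j)
        u_old = U[i].copy()
        U[i] -= eta * (err * V[:, j] + lam / deg_out[i] * U[i])
        V[:, j] -= eta * (err * u_old + lam / deg_in[j] * V[:, j])
    return g
```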
3 System Design
Having introduced the conceptual background of the models that we wish to learn, we proceed with describing the three main design goals of Factorbird.
First, Factorbird must handle factorizations of twitter-scale graphs with hundreds of millions of vertices and dozens of billions of edges. Scalability to datasets of this size is more important than high performance on small datasets commonly used in research (such as the Netflix [3] dataset). The matrices representing these graphs are either square (e.g. user to user followings) or ‘tall-and-wide’ for bipartite graphs (e.g. user to tweets). Second, the system has to be highly usable and adaptable for data scientists without making a systems background a necessity. No systems programming should be required to try out a variation of our model or a different loss function. Instead, the system should offer interfaces that abstract from the distributed execution model and
are intuitive to implement given an ML background. Third, the system design shall be simple to keep the maintenance and debugging effort low. Additionally, the system design should be easily extendable into a streaming system in the future, where a previously learned matrix factorization is updated online given a stream of new observations. This is the main reason why we decided against using ALS, which is much harder to adapt to a streaming scenario than SGD.
**Challenges.** Running SGD on datasets of such scale inevitably leads to a set of challenges that have to be overcome when designing a scalable system. The main focus of this paper is to present the prototype of a system that elegantly solves these challenges.
1. The resulting factor matrices for a huge network quickly become larger than the memory available on an individual commodity machine \([22, 23]\). For example, \(U\) and \(V\) with \(k = 100\) and a single precision factor representation for a graph with 250 million vertices already have a combined size of about 200 GB. This estimation does not even take operating system buffers and caches, object references and required statistics like degree distribution of the graph into account, which also compete for memory.
2. Due to the sheer number of observations in large datasets, we aim to leverage multiple cores and machines to learn our desired matrix factorization. Unfortunately, SGD is an inherently sequential algorithm. It randomly picks an observation and updates the model parameters before proceeding to the next observation (c.f. Algorithm 1). When we run SGD in parallel on multiple cores, there is a chance that we concurrently try to update the same \(u_i\) or \(v_j\), which results in conflicting writes and lost updates.
In order to overcome the first challenge, we opted for a distributed architecture that allows us to partition the large model (the factor matrices) over several machines. We adapt a ‘Parameter Server’ architecture \([17]\). As illustrated in Figure 2, we partition the factor matrices over a set of machines, to which we refer as parameter machines. At the same time, we partition the graph (our input data) over a set of so-called learner machines. Each learner machine runs multi-threaded SGD on its portions of the input data. For every observation to process, the learner machine has to fetch the corresponding factor vectors from the parameter machines, update them and write them back over the network.

This architecture inevitably leads to the second challenge, the question of how to handle concurrent, possibly conflicting updates to the factor vectors. When two learner machines fetch, update and write back the same factor vector concurrently, one such update will be overridden.
In the special case of matrix factorization, approaches to parallelizing SGD have been proposed that leverage a carefully chosen partitioning of the input data to avoid conflicting updates \([7, 15]\). As these approaches require complex data movement and synchronization patterns and are at the same time hard to adapt to a streaming scenario, we opted for an alternative approach that is simpler to implement in a distributed setting. Instead of taking complex actions to prevent conflicting updates, Factorbird builds upon a recently proposed parallelization scheme for SGD-based learning called Hogwild! \([14]\). This work states that parallel SGD can be implemented without locking if most updates only modify small parts of the model. The authors explicitly name latent factor models for matrix factorization as one such case.
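The Hogwild!-style scheme can be illustrated with a small lock-free sketch. This is hypothetical, not Factorbird code: plain Python threads update shared NumPy matrices without any synchronization, and the bias terms are omitted for brevity.

```python
import threading
import numpy as np

def hogwild_factorize(edges, U, V, a, eta, lam, n_threads=4, epochs=20):
    # Each worker applies SGD updates to the *shared* U and V without locks;
    # occasional lost updates are tolerated because every update touches only
    # one row of U and one column of V.
    shards = [edges[t::n_threads] for t in range(n_threads)]

    def worker(shard):
        for _ in range(epochs):
            for i, j in shard:
                err = U[i] @ V[:, j] - a(i, j)
                u_old = U[i].copy()
                U[i] -= eta * (err * V[:, j] + lam * U[i])      # no lock taken
                V[:, j] -= eta * (err * u_old + lam * V[:, j])  # no lock taken

    threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```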
The special nature of the matrix factorization problem allows for a further optimization in our
system design, which reduces the required network traffic and at the same time greatly lowers the probability of conflicting overwrites. We can reduce the communication cost by 50% through intelligent partitioning, as follows. If we partition M by either rows or columns over the learner machines, then the updates to either U or V become local, when we co-partition one of the factor matrices (U in the case of partitioning by rows, V in the case of partitioning by columns) with M on the learner machines.
In the light of minimizing update conflict potential, we decide to co-locate V on the learner machines and keep U in the parameter machines (c.f. Figure 3). We choose this scheme for the following reasons: In case of the follower graph, the number of updates to a factor vector $u_i$ in $U$ is equal to the out-degree of the corresponding vertex $i$, while the number of updates to a factor vector $v_j$ in $V$ is equal to the in-degree of the corresponding vertex $j$. As the in-degree distribution of the follower graph has much higher skew than the out-degree distribution [21], we choose to localize the updates to $V$, which gives us a higher reduction in conflict potential than localizing $U$. Other graphs in twitter have similar skew in the degree distribution.
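The effect of this co-partitioning scheme can be sketched as follows (hypothetical; a simple modulo hash stands in for whatever partitioner Factorbird actually uses). Routing every edge by its column vertex \( j \) makes all updates to \( v_j \) local to one learner, so only the \( U \) side still crosses the network, which is the 50% traffic reduction described above.

```python
def assign_partitions(edges, n_learners):
    # Route edge (i, j) to the learner that owns column j, so all updates
    # to the factor vector v_j stay local to that learner.
    shards = [[] for _ in range(n_learners)]
    for i, j in edges:
        shards[j % n_learners].append((i, j))
    return shards
```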
4 Implementation
We implement Factorbird using twitter’s existing technology stack and infrastructure. We leverage an existing memcached cluster for the parameter machines and implement the learner machines in Scala as a custom finagle application [18]. We use Hadoop’s distributed filesystem (HDFS) as persistent datastore for reading inputs and storing the final results. Prepartitioning and statistics computation of the input is conducted via MapReduce using Scalding jobs. The learner machines are assigned resources via Apache Mesos.
The typical execution steps on a learner machine are as follows. First, the machine loads statistics about the input graph and its assigned partition from HDFS, e.g. the number of edges and vertices, the average edge weight and the degree distribution of the graph. These statistics are used for learning as well as efficiently allocating memory for the required datastructures. Next, the learner machine instantiates its local partition of $V$. Subsequently, the model learning phase starts: the learner machine reads the edges of its assigned partition of the input graph in a streaming fashion. For each edge $(i, j)$, it reads the corresponding factor vector $u_i$ from memcached and the factor vector $v_j$ from its local partition of $V$. The reads from memcached are conducted in batches to increase throughput. Next, the learner machine updates the factor vectors using SGD and writes them back. This process is repeated until a user-specified number of passes over the input graph has been conducted. Finally, the learner machines persist the learned matrices $U$ and $V$ in a partitioned manner in HDFS.
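The fetch/update/write-back cycle with batched reads might be sketched like this. This is a hypothetical dict-backed stand-in for the memcached parameter machines; `multi_get`/`multi_set` mimic batched memcached operations and are not Factorbird's actual API.

```python
class ParameterStore:
    # Dict-backed stand-in for the memcached cluster holding the rows of U.
    def __init__(self, num_vertices, k):
        self.table = {i: [0.0] * k for i in range(num_vertices)}

    def multi_get(self, keys):
        # one batched read instead of one round trip per key
        return {i: list(self.table[i]) for i in set(keys)}

    def multi_set(self, updates):
        # one batched write-back
        self.table.update(updates)

def process_batch(store, batch, update_edge):
    # Fetch all u_i needed by this batch of edges, apply the per-edge update
    # (which would also touch the local partition of V), then write back.
    rows = store.multi_get([i for i, _ in batch])
    for i, j in batch:
        update_edge(rows[i], i, j)
    store.multi_set(rows)
```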
Figure 3: Partitioning scheme and system architecture. (Strictly speaking, we do not run vanilla Hogwild! on \( U \), as memcached atomically updates the whole factor vector instead of individually updating the factors.)

Factorbird makes extensive use of memory-efficient data structures for managing factor vectors, the
local partition of \( V \) and graph statistics such as the degree per vertex. The main objective of these data structures is to use as little memory as possible as well as to avoid object allocation and full garbage collection invocations in the JVM. A factor vector as well as a partition of a factor matrix are therefore internally represented by byte buffers and large primitive arrays, which are efficiently preallocated. For example, a learning machine determines the size of its local partition of \( V \) at startup time by reading the number of vertices assigned to its partition from the graph statistics. The partition of \( V \) is internally represented by a huge float array, into which the individual factor vectors (the columns of \( V \)) are packed. A mapping of vertex ids to offsets in this array is stored and update operations directly write to the underlying array. During the training phase, a learner machine directly reads the training edges from a compressed file in HDFS in a streaming fashion. If more than one pass through the training edges is necessary, the learner machine will on-the-fly create a copy of the HDFS file on local disk and switch to streaming edges from this local file for subsequent passes. Furthermore, some learning approaches require synthetically generated negative examples \([24]\) (possibly taken into account with a lower confidence than observed positive examples). We therefore implement on-the-fly generation of such negative examples (with a configurable probability) and mix them into the original positive examples supplied on the learning machines.
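The packed representation described above can be sketched as follows (hypothetical; Factorbird's actual implementation is in Scala over byte buffers and primitive arrays):

```python
import numpy as np

class PackedFactors:
    # A partition of V as one preallocated flat float32 array plus a
    # vertex-id -> offset map; no per-vector objects are allocated.
    def __init__(self, vertex_ids, k):
        self.k = k
        self.offset = {v: idx * k for idx, v in enumerate(vertex_ids)}
        self.data = np.zeros(len(vertex_ids) * k, dtype=np.float32)

    def get(self, v):
        o = self.offset[v]
        return self.data[o:o + self.k]   # a view into the flat array, no copy

    def set(self, v, vec):
        o = self.offset[v]
        self.data[o:o + self.k] = vec
```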
The central abstraction for implementing the SGD updates of the learning algorithm within Factorbird is the \texttt{Learner} (c.f. Listing 1). It is the main interface for data scientists wanting to try new models, and shields the programmer from the complexity of the distributed nature of the underlying learning process. The first important method that a programmer has to implement is \texttt{initialize}, which Factorbird uses to randomly initialize the factor vectors in \( U \) and \( V \) (c.f. the corresponding line in Algorithm 1). The method \texttt{update} is where the SGD-based update of two factor vectors \( u_i \) and \( v_j \) is implemented. The system provides the strength \( a(i, j) \) of the edge between vertex \( i \) and \( j \), as well as the vertex degrees \( n_i \) and \( n_j \) and the error weight \( w(i, j) \) as additional arguments. A typical implementation of this method conducts the steps from line 4 to 9 in Algorithm 1.
```
trait Learner {
  def initialize(factors: FactorVector): Unit
  def update(u_i: FactorVector, v_j: FactorVector,
             a_ij: Float, n_i: Int, n_j: Int, w_ij: Float): Float
}
```
Listing 1: Learner abstraction.
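A concrete (and deliberately simplified) implementation of this contract can be sketched as follows — in Python for illustration, since Factorbird itself is JVM-based. The bias terms and the exact update of the paper's Algorithm 1 are omitted, so the hyperparameters and the degree-scaled regularization convention here are assumptions:

```python
import random

def initialize(factors, scale=0.01):
    # random initialization of a factor vector, as in Learner.initialize
    for f in range(len(factors)):
        factors[f] = random.gauss(0.0, scale)

def update(u_i, v_j, a_ij, n_i, n_j, w_ij, eta=0.05, lam=0.01):
    # one SGD step for a weighted, regularized factorization
    # (a simplification of the biased model trained by Factorbird)
    k = len(u_i)
    err = a_ij - sum(u_i[f] * v_j[f] for f in range(k))
    for f in range(k):
        u_f, v_f = u_i[f], v_j[f]
        # regularization scaled by vertex degree, one common convention
        u_i[f] += eta * (w_ij * err * v_f - (lam / n_i) * u_f)
        v_j[f] += eta * (w_ij * err * u_f - (lam / n_j) * v_f)
    return w_ij * err * err  # this edge's contribution to the loss
```

Repeated calls on the same edge should drive the returned error contribution down, which is a useful sanity check when experimenting with new `Learner` variants.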
5 Assessing Model Quality
The next aspect we focus on in Factorbird is the quality of the learned models. Ultimately, the models have to be evaluated using online experiments with real users, but during the development and batch training phase, we concentrate on a simple offline metric: the prediction quality on held-out data, measured by the root mean squared error (RMSE). Unfortunately, this prediction quality is heavily influenced by the choice of hyperparameters for our model, such as \( \eta \) (which controls rate of learning), \( \lambda \) (which controls regularization), number of factors \( k \) as well as the rate of decay of the learning rate in our \texttt{Learner} implementation. In order to find a well-working hyperparameter combination, we conduct a grid search in the hyperparameter space. We extend Factorbird to enable hold-out tests at scale. The Scalding job which prepares the input graph randomly splits the edges into training set, validation set and test set. Factorbird then learns a model on the training set, chooses the hyperparameter combination using the prediction quality on the validation set, and finally computes the RMSE on the test set.
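The held-out evaluation boils down to a streaming RMSE over the test edges; a minimal sketch (Python for illustration, hypothetical names):

```python
import math

def rmse(edges, predict):
    # edges: iterable of (i, j, a_ij) held-out test edges;
    # predict: function (i, j) -> predicted edge strength
    se, n = 0.0, 0
    for i, j, a_ij in edges:
        se += (a_ij - predict(i, j)) ** 2
        n += 1
    return math.sqrt(se / n)
```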
However, conducting a single training run with Factorbird for each hyperparameter combination under inspection is tedious and takes a long time. We therefore describe how to learn many models with different hyperparameters at once to speed up the hyperparameter search. Given that we aim to inspect \( c \) hyperparameter combinations for a factorization of rank \( k \), we pack the \( c \) factor vectors into a large \( U \) of dimensionality \( m \times c \times k \) and a large \( V \) of dimensionality \( c \times k \times n \). We use a specialized learner implementation that is aware of the packing (e.g. it knows that the factors for the \( p \)-th model are contained in the \( p \times k \)-th to \( (p + 1) \times k \)-th entries of a factor vector) and learns \( c \) models at once. Figure 4 illustrates how the factor matrices for all possible combinations of two different learning rates \( \eta_1, \eta_2 \) and two different regularization constants \( \lambda_1, \lambda_2 \) would be packed into a large \( U \) and \( V \).
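The offset arithmetic for this packing can be sketched as follows (Python, illustrative; the slice helper is hypothetical):

```python
def model_slice(p, k):
    # entries of the p-th packed model inside a factor vector of length c*k
    return slice(p * k, (p + 1) * k)

# pack c = 4 models of rank k = 2 into one factor vector of length 8
u_packed = [0.0] * (4 * 2)
u_packed[model_slice(2, 2)] = [7.0, 8.0]  # update only model p = 2
```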
In order to compute the prediction quality on held-out data, Factorbird asks the Learner implementation for Predictors which the user has to provide. A Predictor predicts the strength of an unobserved edge \((i, j)\) from the corresponding factor vectors \(u_i\) and \(v_j\) (c.f., Listing 2).
```
trait Predictor {
def predict(u_i: FactorVector, v_j: FactorVector): Float
}
```
Listing 2: Predictor abstraction.
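For a plain factorization without bias terms, the natural `Predictor` is the inner product of the two factor vectors; a one-line sketch (Python for illustration):

```python
def predict(u_i, v_j):
    # inner-product predictor for a plain factorization (no bias terms)
    return sum(u * v for u, v in zip(u_i, v_j))
```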
Optionally, the user can provide a LossEstimator, which estimates the current value of the loss function using samples of edges and factors (c.f. Listing 3). During training, Factorbird continually invokes the loss estimator and makes the estimates inspectable via an external dashboard.
```
trait LossEstimator {
def estimateRegularizationComponent(
numRowsOfU: Int, sampleOfU: Iterator[FactorVector],
numColumnsOfV: Int, sampleOfV: Iterator[FactorVector]): Double
def estimateErrorComponent(numEdges: Long,
sampleOfEdges: Iterator[Edge], partitionOfU: FactorMatrix,
partitionOfV: FactorMatrix): Double
}
```
Listing 3: Abstraction for loss estimation.
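A loss estimator of this kind typically scales a sample statistic up to the full matrix. A sketch of the regularization component (Python, hypothetical names; the uniform-sampling and linear scaling scheme is an assumption, not Factorbird's exact implementation):

```python
def estimate_regularization_component(num_rows_of_u, sample_of_u, lam):
    # squared Frobenius norm of a uniform sample of factor vectors,
    # scaled up to the full matrix size (a simple unbiased estimator)
    sample = list(sample_of_u)
    sample_norm = sum(f * f for vec in sample for f in vec)
    return lam * sample_norm * (num_rows_of_u / len(sample))
```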
## 6 Experiments
We run experiments on various subsets of ‘RealGraph’, a graph that models various interactions between twitter users [20]. The learner machines for our experiments are provisioned by Apache Mesos. In all our experiments, we factorize the binarized adjacency matrix of the graph subset. That means the transformation function \(a(i, j)\) returns 1 if user \(i\) interacted with user \(j\) and 0 otherwise (in the case of a synthetic negative example). We equally weight all prediction errors \((w(i, j) = 1)\).
In this work, we present only preliminary experiments, aimed at validating the correctness of our system and showing its capacity to handle twitter-scale graphs. There is still a large potential for improvements in accuracy and performance that we will tackle in future work. We run a first set of experiments on a small sample of the RealGraph, consisting of 100 million interactions between 440 thousand popular users. Additionally, we make Factorbird generate 500 million synthetic negative edges.
**Benefits of increasing model complexity.** In the first experiment, we show the positive effects on prediction quality of the individual parts of our chosen model. We randomly split the dataset into 80% training set, 10% validation set and 10% test set. We train models with increasing complexity and measure their prediction quality in terms of RMSE on the 10% held-out data in the test set (c.f. Figure 5). We start with a baseline that only uses the global bias (the average edge strength), followed by a more complex model that uses the global bias as well as vertex-specific bias terms. Finally, we train biased factorization models with $k \in \{2, 5, 10, 20\}$. We choose the hyperparameters using the validation set. The outcome confirms that an increase in the complexity of our model results in an increase in prediction quality. The global bias baseline provides an RMSE of 0.3727, adding the vertex-specific bias terms reduces the error to 0.3121 and incorporating the factors gives additional improvements, reducing the error to 0.2477 for $k = 20$.
**Visual inspection.** Next, we plot a selection of factor vectors from the $V$ matrix of a biased factorization with $k = 2$. Two users $i$ and $j$ will be close in the resulting low-dimensional space if their follower vectors $m_i$ and $m_j$ are roughly linear combinations of each other. Due to homophily, we expect twitter users sharing a common interest or characteristic to have a large overlap in followers and therefore to be much closer in the resulting space. Figure 6 indicates that factorizations produced by Factorbird have this property. We see several clusters of twitter accounts of similar type, e.g. a cluster of European politicians, containing Sigmar Gabriel (@sigmargabriel) and Reinhard Buetikofer (@bueti) from Germany as well as Arseniy Yatsenyuk (@Yatsenyuk_AP) from Ukraine. Another cluster consists of popstars such as Kanye West (@kanyewest), Justin Bieber (@justinbieber) and the boy-band One Direction (@onedirection). A third one related to U.S. sports contains Emma Span (@emmaspan), an editor for baseball at Sports Illustrated, Detroit’s football team (@lions) and an account of the Daytona International Speedway (@disupdates).
**Scale-out.** For the scale-out experiments we build a matrix based on a large subset of Twitter’s RealGraph consisting of more than 38.5 billion non-zeros (half of which are synthetic negative examples). The dimensions of the matrix are $229 \times 195$ million. To the best of our knowledge, this is the largest dataset on which collaborative filtering experiments have been published so far. Often, the Netflix prize dataset is used for scale-out tests. Our dataset is more than two orders of magnitude larger, having approximately 470 times more rows, 11,000 times more columns and 385 times more datapoints. We run Factorbird using 50 instances for the learner machines, provisioned in a Mesos cluster, with 16 cores and 16 GB of memory each. We leverage a shared memcached cluster for our parameter machines, with a guaranteed quota of 5 million commands per second. We run two passes of hyperparameter search on 80% of the data for 16 different learners with $k = 5$. The rank of the factor matrices $U$ and $V$ is $(5 + 1) \times 16 = 96$, which means we train a model with approximately $(229M + 195M) \times 96 = 40B$ parameters. A single SGD pass through the training set finishes in about 2.5 hours. In this experiment, Factorbird issues more than 4.2 million commands per second to memcached on average and updates around 400M parameters per second.
---
Footnote: Except for a few stragglers, which Mesos provisioned on machines with low network bandwidth (Unfortunately, Mesos cannot guarantee a defined network bandwidth at the moment)
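The packed rank and parameter count quoted above can be checked with simple arithmetic (Python, using the figures stated in the text):

```python
# back-of-the-envelope check of the model size quoted above
k, learners = 5, 16
rank = (k + 1) * learners               # packed rank of U and V: 96
rows, cols = 229_000_000, 195_000_000   # matrix dimensions
params = (rows + cols) * rank           # roughly 40 billion parameters
```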
7 Related Work
SGD- and ALS-based matrix factorization techniques for recommender systems have been extensively studied in the context of the Netflix Prize [2, 9]. These techniques have been extended to work on implicit feedback data [8] and to optimize metrics different from RMSE [10, 16].
A large body of work has been conducted with respect to parallelizing and distributing matrix factorization. This includes work on the scalability of the algorithms itself, e.g. by introducing biased sampling for SGD to avoid conflicting updates during concurrent execution [7, 5, 15] or by proving convergence under a minor amount of update conflicts [14]. Furthermore, distributed implementations have been proposed and evaluated on MapReduce-based [4], graph-parallel [11, 12] and specialized systems [13].
8 Future Work
In future work, we aim to extend Factorbird to a streaming scenario. We plan to bootstrap Factorbird with a factorization that was trained offline and update this factorization model online from a stream of incoming real-time interactions (e.g., user follows). Furthermore, we would like to experiment with factorizing multiple matrices at once, in order to incorporate different types of interactions in a single factorization model. Another technical issue to work on in the future is fault tolerance. A possible approach to recovery in case of failures could be to restart learner and parameter machines from asynchronously written checkpoints of the partitions of $U$ and $V$ which they hold [19]. Moreover, we plan to investigate ways to reduce the amount of network traffic caused by Factorbird, e.g. by compression, factor vector caching or via a biased sampling of the edges to allow us to use retrieved factor vectors for more than a single update. We will potentially replace memcached with a custom application to be able to achieve higher throughput and conduct true Hogwild-style updates on the parameter machines. Moreover, this would allow us to run aggregations on the parameter machines. Additionally, we would like to implement dynamic load adaption in Factorbird to mitigate the negative effects of stragglers on the overall runtime. We aim to factorize the whole twitter interaction graph with Factorbird in upcoming work.
References
A Security Evaluation Framework Based on STRIDE Model for Software in Networks
Xi Chen, Yun Liu, Jin Yi
Abstract
Software in networks, a special kind of application in service-oriented computing and ultra-large-scale systems, is a complex software system deployed in a network environment. The requirements of networked software pose many security problems owing to the dynamic topology structure and the uncertainty of users. How to evaluate the degree of software security in networks is a challenging problem. In this paper, we present a framework for flexibly assessing software to determine how well it can satisfy intended security requirements. On the basis of analyzing the threats that software in networks faces, a security evaluation method based on the STRIDE model is proposed. According to the features of networked software and the threat classification method of the STRIDE model, we design the SN-Security Evaluation Model, in which dependability-based, vulnerability-based and risk-based approaches are incorporated for software security estimation. It provides a valuable way to help users create threat models and evaluate the degree of security of software. A case study is conducted to verify the framework proposed in the paper.
Keywords: Software Security; STRIDE; Software Dependability; Risk Evaluation
1. Introduction
People find that software is not always trustworthy: sometimes its behavior, results or performance do not fully meet original expectations. Web applications are often vulnerable because attackers can easily access the application's underlying database [1]. Therefore, the trustworthiness of software, which refers to its security and reliability, has attracted high attention from researchers. The increased emphasis on system security has brought new researchers from different backgrounds to the field, bringing different perspectives and different skillsets. All of this is for the good, since new viewpoints can lead to new insights [2].
A common characteristic of these factors is that they are dynamic in nature. Such factors include new vulnerabilities and threats, the network policy structure and traffic. Concerning the security assessment, generally, current methods can be classified into three categories, the qualitative, the quantitative and the combination of these two kinds. [3]
In this paper, we propose a novel and practical framework for estimating software security. The evaluation platform is based on STRIDE model, and helps to generate quantitative assessment on security of active software in network.
Aiming at decreasing errors in evaluation, our platform assesses software in three approaches: dependability-based, vulnerability-based and risk-based. With the case study of Intellective Subway application system in a certain enterprise, the experimental results show that this model has a certain degree of generality and reference value.
Our approach yields three main contributions toward efforts to advance the evaluation of security in software systems:
1. We uniquely apply the combination of dependability-based, vulnerability-based and risk-based approaches to estimating the security of software in networks.
2. It presents a model for the quantitative estimation of the security level of a software system, which enables users to comparatively assess the relative security of different software and approaches in a defensible manner.
3. The implementation of the platform provides flexibility, which implies two aspects: (1) there is a defined standard interface to receive testing data in different forms, which enlarges the evaluation scope; (2) according to different targets, users set parameters and choose testing approaches, which provides the scalability of the evaluation platform.
The paper is organised as follows: Section 2 discusses related work. Section 3 introduces a brief overview of STRIDE model, while section 4 provides an estimation model of software security. Section 5 describes the proposed evaluation framework for details. Section 6 presents a practical case study that illustrates the use of our approach to assess security of software in network. Section 7 contains our conclusions and a discussion of future work.
2. Related work
2.1. Dependability-based evaluation
Alkussayer et al. [2] surveyed existing model-based techniques for evaluating system dependability, and summarized how they are now being extended to evaluate system security. They found that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation and the intentional, human nature of cyber attacks. Yu et al. [4] constructed a three-state nonhomogeneous Markov model for software safety assessment. The two most important metrics for safety assessment, steady-state safety and MTTUF, are estimated using the three-state Markov model.
2.2. Vulnerability-based evaluation
Liu et al. [5] proposed VRSS for qualitative rating and quantitative scoring vulnerabilities, which can combine respective advantages of all kinds of vulnerability rating systems. Houmb et al. [6] presented a risk estimation model that makes use of one such data source, the Common Vulnerability Scoring System (CVSS). The CVSS Risk Level Estimation Model estimates a security risk level from vulnerability information as a combination of frequency and impact estimates derived from the CVSS.
2.3. Risk-based evaluation
Saripalli et al. [7] used the definition of risk as a combination of the probability of a security threat event and its severity, measured as its impact. Our framework complements this work by introducing risk-based evaluation to cohesively encapsulate the architectural security knowledge of the system. Ni et al. [8] described a software implementation for on-line risk-based security assessment which computes indices based on probabilistic risk, for use by operators in the control room to assess system security levels. Chan et al. [9] constructed a quantitative Bayesian index model for the assessment of enterprises’ information security risk. Zhang et al. [3] proposed an assessment method based on attack trees and Bayesian networks to obtain risk in an intuitive manner. Ahmed et al. [10] provided a security metric framework that objectively quantifies the most significant security risk factors, covering both the service aspect and the network aspect of risk toward a system. Risk avoidance based on life cycle management theory can be used not only in software projects, but also in other industries. The method can forecast related risks completely and in a timely fashion [11].
3. STRIDE Model
In information security, a threat represents a potential violation of the security of a system with some negative impact, whereas vulnerability is an actual security flaw which makes a system susceptible to an attack. An attack is an exploitation of a vulnerability to realize a threat. Threat modeling for security evaluation can help identify the threat events, their attack surface and the entry or access points on the software in the context of each threat; analyze the threats and associated risks; and developing mitigating strategies.
The commonly used method of determining the threat is to classify the threat and determine its composition elements, and STRIDE is a typical threat classification model for security evaluation. It can classify the threat according to different sources of the system threat, and it is the English acronym of the following six threat types:
- **Spoofing**: Illegal use of another user's user name and password, authentication information, etc.;
- **Tampering**: Maliciously modify data;
- **Repudiation**: Users deny having engaged in activities, and there is no way to prove that they did;
- **Information Disclosure**: Information is exposed to people who are not allowed to access;
- **Denial of Service**: Refuse to serve the legitimate users;
- **Elevation of Privilege**: Unprivileged users gain access privileges;
Common threats on software in networks can be documented in threat event catalogs, as shown in List 1. List 1 is drawn based on the top 19 security threats listed for web applications [12] and Web 2.0 systems [13]. Each item here corresponds to a threat event $e$, used in the software security estimation analysis.
**List 1. Threat events compromising Internet security**
1. **Cross site Scripting (XSS)**: script executes in victims browser to hijack user sessions, deface web sites, and introduce worms etc.
2. **Injection Flaws**: user data sent to the web application is not properly validated, which can manipulate a query on the server.
3. **Malicious File Execution**: PHP, XML or any other framework which accept a file from the user is vulnerable to this attack, as the file can contain a malicious script.
4. **Insecure Direct Object Reference**: direct reference to any internal implementation object such as file, database record, key etc can be exploited.
5. **Cross-site Request Forgery**: a logged on user’s pre-authenticated data of a web site can be exploited by attacker’s application when he visits his site.
6. **Information Leakage by Improper Error Handling**: attack on the applications to sniff information on system resources, working and configuration.
7. **Broken Authentication and Session Management**: attack on account credentials and session tokens which are not protected.
8. **Insecure Cryptographic Storage**: web applications which do not use cryptographic functions to protect the data are exploited.
9. **Insecure Communication**: failing to encrypt the network traffic leads to this attack.
10. **Failure to restrict URL Access**: prevention of access by not displaying the urls to unauthorized users can be exploited by direct access of urls.
11. **XML poisoning**: XML traffic between the server and the browser is poisoned.
12. **Malicious AJAX Code Execution**: malicious AJAX code in the web application silently executes attacker’s intent.
13. **RSS/Atom Injection**: RSS feeds are injected with literal Java Script that can generate attacks on client’s browser.
14. **WSDL Scanning and Enumeration**: WSDL (Web Services Definition Language) file is attacked.
15. **Client side validation in AJAX routines**: same as 2 in the above section.
16. **Web Services Routing Issues**: unencrypted SOAP messages in WS Routing leads to this attack.
17. **Parameter Manipulation with SOAP**: attacker manipulates the variables in the SOAP messages.
18. **XPATH Injection in the SOAP Messages**: XPATH statements which take user input are manipulated.
19. **RIA Thick Client Binary Manipulation**: RIA components such as the Flash, Active X Controls and applets downloaded as binary components are decompiled to obtain the code. Applying patches to these binaries can bypass security.
4. SN-Security evaluation model
4.1. Model design
According to the features of software in networks and the threat classification method of the STRIDE model, we design SN-SEM (Software in Networks Security Evaluation Model).
Definition 1: $SN – SEM$ is an evaluation model for the capacity property of SN-Security, which can be denoted by the following elements:
$$SN – SEM (A, P, F, S)$$
Where $A$ is the set of software security attributes; $P$ is the set of weighting factors; $F$ is the set of trust functions for the various software security attributes; and $S$ is the set of evaluation strategies.
Definition 2: $A$ is the set of security attributes, which capture the anti-threat capabilities of software:
1) The attribute of anti-counterfeiting $A_c$: The commonly used method is identity validation, and the accuracy of validation determines the strength of anti-counterfeiting capability of software in the network.
2) The attribute of anti-tampering $A_t$: Message integrity validation method is generally used, thus the attribute to ensure the message integrity determines the attribute of anti-tampering of software in the network.
3) The attribute of anti-denying $A_d$: Signatures are generally used, thus the attribute of ensuring the authenticity of the signature determines the attribute of anti-denying of software in the network.
4) The attribute of anti-leaking information $A_l$: The method of information encryption and securing communication channels is commonly used, thus the attribute of ensuring the safety of communication channel capacity and the strength of information encryption determines the attribute of anti-leaking information of software in the network.
5) The attribute of anti-refusing service $A_f$: refers to the attribute of keeping the system away from the critical condition. There are two types of denial-of-service attacks: one is to overload the system (most commonly DDoS); the second is to drive the system into an error state. These two methods are essentially the same, in that both make the system enter the critical condition. So the ability of keeping the system away from the critical condition determines the ability of anti-refusing service of software in the network.
6) The attribute of anti-escalating privileges $A_e$: A reasonable distribution strategy of privileges and effective audit mechanism is the effective means to reduce of privilege and escalate attacks, so the reasonability of distribution strategy of privileges and the effectiveness of audit mechanisms determines the attribute of anti-escalating privileges of software in the network.
We defined six security attributes of software in networks according to the STRIDE model. For completeness, we have categorized the STRIDE threat events, shown in List 1, to map to one or more of the six security attributes. The SN-SEM methodology would work with any such framework, by assigning relative weights of importance to each security attribute category, as will be shown later.
Definition 3: The weighting factor $p_k$ is the weighting coefficient that the value of each of the six security attributes of software in the network carries in the security assessment model.
Definition 4: The strategy set $S$ is the set of security strategies applied when evaluating the degree of security with respect to the security attributes of the software.
4.2. Dos evaluation
To reasonably evaluate the security of software and calculate the security value is a reliable prerequisite and foundation of determining whether the software is safe or not. We can evaluate and measure the security of software by using dependability-based, vulnerability-based and risk-based estimation of software security.
Definition 5: Degree of security (dos) is a measurement of the anti-attack ability of software, a function of the degree of security the software can reach, and the quantitative basis for classifying security relationships. Because the degree of security continuously accumulates comprehensive evaluation information about the security strategies and security attributes acting on the target entity, in the following we mainly use an accumulated comprehensive evaluation index to evaluate the degree of security.
Contribution degree of software security properties: the security of software is scanned regularly in combination with its security attributes, thereby indirectly measuring each attribute's contribution to security. We assume that the more often the security attributes of a software system are scanned and the wider the area covered, the higher its degree of security; conversely, if vulnerabilities are rarely scanned or the scanning range is narrow, its degree of security is lower. This is defined as follows:
\[ \text{dos}_i = \sum_{k \in A} f(A_k) \times p_k \]
Where \( A \) represents the set of security attributes, \( f(A_k) \) represents the influence factor of security attribute \( A_k \), and \( p_k \) is the corresponding weighting factor.
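To make the definition concrete, a small numeric sketch of the dos computation (Python; the six attributes follow the STRIDE mapping above, while the influence scores and equal weights are purely illustrative assumptions):

```python
def degree_of_security(influence, weights):
    # dos = sum over attributes of f(A_k) * p_k
    return sum(influence[a] * weights[a] for a in influence)

attrs = ["Ac", "At", "Ad", "Al", "Af", "Ae"]      # the six security attributes
f = {a: 0.8 for a in attrs}                       # hypothetical f(A_k) scores
p = {a: 1.0 / len(attrs) for a in attrs}          # equal weighting factors
dos = degree_of_security(f, p)
```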
5. The Security evaluation framework
The inspiration for our evaluation framework comes from recognition of the critical need for assessing the security of a software system. The proposed technique strengthens the accuracy of software security estimation by incorporating three distinct approaches seamlessly into a cohesive framework.
The initial component of the estimation is a dependability-based approach. A dependability-based approach is an effective way of ensuring design quality and addressing architectural concerns. The key objectives of dependability-based evaluation are to evaluate MTTUF (Mean Time To Unsafe Failure) and steady-state safety, and to predict the MTTUF of the next round.
Secondly, the incorporation of vulnerability-based estimation improves the quality of security components in the architecture. Scoring software vulnerabilities have proven effective for dealing with security problems in a software. Therefore, the proposed framework incorporates vulnerability-based estimation as one core component.
The third component is a risk analysis model. Babar et al. [14] surveyed the state of practice in evaluating software architecture and concluded that 88% of the survey participants conducted security reviews with the goal of identifying potential risks. The proposed evaluation framework thus encompasses these three aspects; a detailed explanation of each follows.
5.1. Dependability-based estimation
At any time, safety-critical software is in one of three states: the operational state, the fail-safe state, and the fail-unsafe state. Software is in the operational state when it is operating correctly; the probability of staying in the operational state at time $t$ is denoted $P_O(t)$. Software is in the fail-safe state when it has ceased to perform its functions but in a safe manner; the probability of staying in the fail-safe state at $t$ is denoted $P_{FS}(t)$. Software is in the fail-unsafe state when it has failed and the failure has not been handled in a manner that guarantees safe operation; the probability of staying in the fail-unsafe state at $t$ is denoted $P_{FU}(t)$. Since the three states are mutually exclusive and the software is in exactly one of them at any time $t$, the following equation holds; at $t = 0$ the software always starts in the operational state.
$$P_O(t) + P_{FS}(t) + P_{FU}(t) = 1$$ \hspace{1cm} (4)

Given the existence of a fault, a system may or may not recover. "A system recovers" means that the fault falls within the risk containment region of the system; we call such a fault a safe fault. If the system is not able to recover, the fault falls beyond the risk containment region and we call it an unsafe fault. Safe faults take a system to the fail-safe state, a non-operational state that will not cause a mishap. Unsafe faults take a system to the fail-unsafe state, a non-operational state that will cause a mishap [4].
Definition 6: $N$ is a random variable that follows the geometric distribution. In independent Bernoulli trials, when a failure occurs the system reaches the fail-safe state with probability $C$ and the fail-unsafe state with probability $1 - C$, where $C$ is the coverage. According to [4],
$$C = \frac{N_{s,s}}{N_S}$$ \hspace{1cm} (5)
Where $N_{s,s}$ is the number of safe faults and $N_S$ is the total number of faults; both are finite.
According to bound theory [4], we obtain:
$$MTTSF \geq \frac{e \cdot t}{N_{s,s}}$$ \hspace{1cm} (6)
Using Wald’s equality, we are able to prove:
$$MTTUF = \frac{MTTSF}{1 - C}$$ \hspace{1cm} (7)
5.2. Vulnerability-based evaluation
Vulnerabilities are extremely important for network security. IT management must identify and assess vulnerabilities across many disparate hardware and software platforms, prioritize them, and remediate those that pose the greatest risk [5]. Our approach is to scan for vulnerabilities, classify them into six types according to the security attributes defined before, and score them with the CVSS base score. Intuitively, if the software has been safe and reliable in the past, it has a relatively high degree of security. Based on this assumption, $V_A$, the vulnerability score of a security attribute, is defined as follows:
$$V_A = \frac{1}{n} \sum_{i=1}^{n} V_i$$ \hspace{1cm} (8)
As mentioned before, $n$ threats map to a security attribute $A$. The vulnerability score of the application, integrated over the six security attributes, is then a weighted average:
$$V = \sum_{A_i} W_A V_A$$ \hspace{1cm} (9)
where $W_A$ is the relative weight assigned to an attribute $A$, representing the contribution of each security attribute in calculating the degree of security.
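As a minimal sketch (not part of the paper's tooling; the function names are invented for illustration), Eqs. (8) and (9) amount to an average followed by a weighted sum:

```c
#include <stddef.h>

/* Eq. (8): V_A is the mean CVSS base score of the n vulnerabilities
 * that map to one security attribute. */
double attribute_score(const double *scores, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += scores[i];
    return n ? sum / (double)n : 0.0;
}

/* Eq. (9): V is the weighted average of the six attribute scores;
 * the weights W_A are assumed to sum to 1. */
double overall_vulnerability(const double v_a[6], const double w_a[6]) {
    double v = 0.0;
    for (int i = 0; i < 6; i++)
        v += w_a[i] * v_a[i];
    return v;
}
```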
5.3. Risk-based evaluation
Our risk-based evaluation is based on QUIRC, proposed in [7]. Following the STRIDE model, we defined six attributes of software in the network, as mentioned before. For each threat, we take the average risk factor over the attack techniques that may realize it and round the result; the greater this value, the greater the threat posed to the system. After calculating the risk of each threat, we decide which threats shall be given priority for mitigation according to the size of their risks.
The scheme defines the risk of a threat event $c$ as the product of the probability $P_c$ of a security compromise (the probability, or frequency, of the threat event) and the magnitude of its consequence, $I_c$. $P_c$ is typically a fraction less than 1, whereas $I_c$ may be assigned a value on a numerical scale.
$$R_c = P_c I_c$$ \hspace{1cm} (10)
The overall platform security risk of the given application under a given attribute, $R_A$, is the average of the cumulative, weighted sum of the $n$ threats that map to that attribute:
$$R_A = \frac{1}{n} \sum_{i=1}^{n} P_c I_c$$ \hspace{1cm} (11)
Then the net security risk $R$ to the application, integrated over the six security attributes, is a weighted average:
$$R = \sum_{A_i} W_A R_A$$ \hspace{1cm} (12)
where $W_A$ is the relative weight assigned to an attribute $A$, as mentioned before.
In summary, the normalized calculation formula of dos is as follows:
$$dos = m_R R + m_{MTTUF} MTTUF + m_V V$$ \hspace{1cm} (13)
Where $R$, $MTTUF$, and $V$ are the normalized risk, mean time to unsafe failure, and vulnerability score of the software, and $m_R$, $m_{MTTUF}$, and $m_V$ are the corresponding weighting values. To quantify software security, we analyze the threats according to the specific circumstances, calculate these values, and then compute the dos.
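The combination in Eq. (13) is a straightforward weighted sum. A sketch, assuming the three component scores have already been normalized and the weights sum to 1 (the function name is invented):

```c
/* Eq. (13): dos as a weighted sum of the normalized risk, MTTUF, and
 * vulnerability scores. The caller is responsible for normalization. */
double degree_of_security(double r, double mttuf, double v,
                          double m_r, double m_mttuf, double m_v) {
    return m_r * r + m_mttuf * mttuf + m_v * v;
}
```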
6. A Practical case study
For illustration purposes, we use the Intellective Subway (IS for short) case study. IS is management software applied in subway networks. This context represents a general form of most of today's online web systems from the customer's perspective. Our scheme identified nineteen specific threats (called vulnerabilities), shown in List.1, which provide a simple threat profile for this case study. Next, we walk through the framework to evaluate the security of IS.
With reference to the previous definition of the $SN – SEM$ model, threat modeling and dos evaluation of the SOA application system are carried out in the following steps:
Step 1, Feed the test data, transformed into the defined form, into the evaluation platform. This allows the platform to process data from different testing tools and makes it scalable.
Step 2, Choose any appropriate combination of the dependability-based, vulnerability-based, and risk-based methods, then set the weight coefficients according to the chosen options and the test object. Figure.2 and Figure.3 show the dependability-based and risk-based evaluation user interfaces, respectively.
Step 3, Calculate the security of the software with the three methods according to the chosen options.
Step 4, Weight the quantized dos value with the contribution-degree factors of the security properties to obtain the reference value for the software security evaluation of the IS system, which gives system administrators a basis for finding weak links in the security defenses and optimizing the design of system security.
**Figure.2** Dependability-based evaluation user interface
7. Conclusions
This paper combines dependability-based and vulnerability-based analysis with risk-based analysis to create a framework for software security evaluation. We have carried out applied research based on the STRIDE-model evaluation method for risk assessment, and performed threat classification and dos evaluation of the software provided by SN – SEM for the target system. We also present a standard interface for receiving test data in different forms, which makes the evaluation platform scalable. Furthermore, we illustrate the applicability of our framework with a common case study that clearly demonstrates the benefits of evaluating the security of a software architecture during the design phase. However, as with all new approaches to improving software security, further validation is required. In future work, we will continue toward the full formalization and realization of the framework.
8. Acknowledgments
This work is partly supported by the National Natural Science Foundation of China under Grant No. 61071076, the National High-tech Research and Development Plan (863 Program) under Grant No. 2011AA010104-2, and the Academic Discipline and Postgraduate Education Project of the Beijing Municipal Commission of Education.
9. References
Recall: OS Library API for Threads: *pthreads*
Here: the “p” is for “POSIX” which is a part of a standardized API
```c
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine)(void*), void *arg);
- thread is created executing start_routine with arg as its sole argument.
- return is implicit call to pthread_exit
void pthread_exit(void *value_ptr);
- terminates the thread and makes value_ptr available to any successful join
int pthread_join(pthread_t thread, void **value_ptr);
- suspends execution of the calling thread until the target thread terminates.
- On return with a non-NULL value_ptr the value passed to pthread_exit() by
the terminating thread is made available in the location referenced
by value_ptr.
```
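A complete, runnable sketch of the create/join pattern above (the helper names beyond the pthreads API are invented for the example; compile with `-pthread`):

```c
#include <pthread.h>

/* The thread body: square the argument and hand the result back
 * through pthread_exit (equivalent to returning it). */
static void *square(void *arg) {
    long n = (long)arg;
    pthread_exit((void *)(n * n));
}

/* Create one thread, join it, and recover the value it passed to
 * pthread_exit via the value_ptr argument of pthread_join. */
long run_square_thread(long n) {
    pthread_t t;
    void *result;
    pthread_create(&t, NULL, square, (void *)n);
    pthread_join(t, &result);   /* blocks until square() terminates */
    return (long)result;
}
```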
Recall: pThreads Example
- How many threads are in this program?
- What function does each thread run?
- One possible result:
- Does the main thread join with the threads in the same order that they were created?
- Yes: the loop calls Join in thread order
- Do the threads exit in the same order they were created?
- No: Depends on scheduling order!
- Would the result change if run again?
- Yes: Depends on scheduling order!
- Is this code safe/correct???
- No – threads share a variable that is used without locking and there is a race
condition!
Recall: Locks
- Locks provide two *atomic* operations:
- Lock Acquire() – wait until lock is free; then mark it as busy
» After this returns, we say the calling thread *holds* the lock
- Lock Release() – mark lock as free
» Should only be called by a thread that currently holds the lock
» After this returns, the calling thread no longer holds the lock
- For now, don’t worry about how to implement locks!
- We’ll cover that in substantial depth later on in the class
OS Library Locks: pthreads
```c
int pthread_mutex_init(pthread_mutex_t *mutex,
const pthread_mutexattr_t *attr)
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
```
You’ll get a chance to use these in Homework 1
---
Our Example: Fixing the Race Condition for increment (++)
```c
int common = 162;
pthread_mutex_t common_lock = PTHREAD_MUTEX_INITIALIZER;
void *threadfun(void *threadid)
{
long tid = (long)threadid;
pthread_mutex_lock(&common_lock);
int my_common = common++;
pthread_mutex_unlock(&common_lock);
printf("Thread %lx stack: %lx common: %lx (%d)\n", tid,
(unsigned long) &tid,
(unsigned long) &common, my_common);
pthread_exit(NULL);
}
```
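A sketch of a full driver for the pattern above (`NTHREADS` and the helper name are invented for the example). With the lock held around the increment, the final value is deterministic:

```c
#include <pthread.h>

#define NTHREADS 8

static int common = 162;
static pthread_mutex_t common_lock = PTHREAD_MUTEX_INITIALIZER;

/* The lock serializes the read-modify-write of `common`, so no
 * updates are lost even when threads run concurrently. */
static void *incr(void *arg) {
    (void)arg;
    pthread_mutex_lock(&common_lock);
    common++;
    pthread_mutex_unlock(&common_lock);
    return NULL;
}

int run_increments(void) {
    pthread_t tids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, incr, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return common;   /* 162 + NTHREADS once every update is applied */
}
```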
---
Recall: Adding locking to a Red/Black tree
**Thread A**
- Insert(3)
- Lock.acquire()
- Insert 3 into the data structure
- Lock.release()
**Thread B**
- Insert(4)
- Lock.acquire()
- Insert 4 into the data structure
- Lock.release()
- Get(6)
- Lock.acquire()
- Check for membership
- Lock.release()
---
Recall: Dual Mode Operation
- **Hardware** provides at least two modes (at least 1 mode bit):
1. Kernel Mode (or "supervisor" mode)
2. User Mode
- Certain operations are **prohibited** when running in user mode
- Changing the page table pointer, disabling interrupts, interacting directly w/ hardware, writing to kernel memory
- Carefully controlled transitions between user mode and kernel mode
- System calls, interrupts, exceptions
Implementing Safe Kernel Mode Transfers
• Important aspects:
– Controlled transfer into kernel (e.g., syscall table)
– Separate kernel stack!
• Carefully constructed kernel code packs up the user process state and sets it aside
– Details depend on the machine architecture
– More on this next time
• Should be impossible for buggy or malicious user program to cause the kernel to corrupt itself!
3 types of Kernel Mode Transfer
• Syscall
– Process requests a system service, e.g., exit
– Like a function call, but “outside” the process
– Does not have the address of the system function to call
– Like a Remote Procedure Call (RPC) – for later
– Marshall the syscall id and args in registers and exec syscall
• Interrupt
– External asynchronous event triggers context switch
– eg. Timer, I/O device
– Independent of user process
• Trap or Exception
– Internal synchronous event in process triggers context switch
– e.g., Protection violation (segmentation fault), Divide by zero, ...
Handling System Calls safely
• Vector through well-defined syscall entry points!
– Table mapping system call number to handler
– Atomically set to kernel mode at the same time as the jump to system call code in the kernel
– Separate Kernel Stack in kernel memory during syscall execution
• System call handler must never trust user and must validate everything!
• On entry: Copy arguments
– From user memory/registers/stack into kernel memory
– Protect kernel from malicious code evading checks
• On entry: Validate arguments
– Protect kernel from errors in user code
– Protect kernel from invalid values and addresses
• On exit: Copy results back
– Into user memory
How do we take interrupts safely?
• Interrupt processing not visible to the user process:
– Occurs between instructions, restarted transparently
– No change to process state
– What can be observed even with perfect interrupt processing?
• Interrupt vector
– Limited number of entry points into kernel
• Kernel interrupt stack
– Handler works regardless of state of user code
• Interrupt masking
– Handler is non-blocking
• Atomic transfer of control
– “Single instruction”-like to change:
» Program counter
» Stack pointer
» Memory protection
» Kernel/user mode
• Exceptions handled similarly, except synchronously (attached to particular instruction)
Interrupt Controller
- Interrupts invoked with interrupt lines from devices
- Interrupt controller chooses interrupt request to honor
- Interrupt identity specified with ID line
- Mask enables/disables interrupts
- Priority encoder picks highest enabled interrupt
- CPU can disable all interrupts with internal flag
- Non-Maskable Interrupt line (NMI) can't be disabled
**Interrupt Vector**
- Where else do you see this dispatch pattern?
- System Call
- Exceptions
**Interrupt Vector**
- Address and properties of each interrupt handler
```
intrpHandler_i () {
...
}
```
**Need for Separate Kernel Stacks**
- Kernel needs space to work
- Cannot put anything on the user stack (Why?)
- Two-stack model
- OS thread has interrupt stack (located in kernel memory) plus User stack (located in user memory)
- Syscall handler copies user args to kernel space before invoking specific function (e.g., open)
- Interrupts (???)
**Before**
- User-level Process
- Registers
- Kernel
```
code:
foo () {
while(1)
x = x+1
y = y+2
}
stack:
```
During Interrupt/System Call
- User-level Process
- Registers
- Kernel
```
foo () {
i = i + 1;
y = 2 * y;
}
```
- stack:
- Exception Stack
```
SS ESP
EFLAGS
CS
EIP
```
Managing Processes
- How to manage process state?
- How to create a process?
- How to exit from a process?
- Processes are created and managed... by processes!
Administrivia
- Kubiatowicz Office Hours
- 3pm-4pm, Tuesday/Thursday
- TOMORROW (Friday) is Drop Deadline! VERY HARD TO DROP LATER!
- Recommendation: Read assigned readings before lecture
- You should be going to sections – Important information covered in sections
- Any section will do until groups assigned
- Get going on finding groups of 4 people ASAP
- Priority for same section; if cannot make this work, keep same TA
- Remember: Your TA needs to see you in section!
Administrivia (Con’t)
- Starting next week, we will be adhering to strict slip-day policies for non-DSP students
- Slip days are no-questions asked (or justification needed) extensions
- Anything beyond this requires documentation (i.e. doctor’s note, etc)
- If you run out of slip days, assignments will be discounted
» 10% first day, 20% second day, 40% third day, 80% fourth day
- You get 4 slip days for homework and 5 slip days for group projects
- No project extensions on design documents, since we need to keep design reviews on track
- Conserve your slip days!
- Midterm 1 will be on 2/15 from 8-10pm
- No class on day of midterm (extra office hours!)
- Closed book
- One page of handwritten notes – both sides
Bootstrapping
• If processes are created by other processes, how does the first process start?
• First process is started by the kernel
– Often configured as an argument to the kernel before the kernel boots
– Often called the “init” process
• After this, all processes on the system are created by other processes
Process Management API
• exit – terminate a process
• fork – copy the current process
• exec – change the program being run by the current process
• wait – wait for a process to finish
• kill – send a signal (interrupt-like notification) to another process
• sigaction – set handlers for signals
pid.c
```c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
int main(int argc, char* argv[]) {
pid_t pid = getpid();
printf("My pid: %d\n", pid);
exit(0);
}
```
Q: What if we let main return without ever calling exit?
• The OS Library calls exit() for us!
• The entrypoint of the executable is in the OS library
• OS library calls main
• If main returns, OS library calls exit
• You’ll see this in Project 0: init.c
Process Management API
- `exit` – terminate a process
- `fork` – copy the current process
- `exec` – change the program being run by the current process
- `wait` – wait for a process to finish
- `kill` – send a signal (interrupt-like notification) to another process
- `sigaction` – set handlers for signals
Creating Processes
- `pid_t fork()` – copy the current process
- New process has different pid
- New process contains a single thread
- Return value from `fork()`:
- When > 0:
» Running in (original) Parent process
» Return value is pid of new child
- When = 0:
» Running in new Child process
- When < 0:
» Error! Must handle somehow
» Running in original process
- State of original process duplicated in both Parent and Child!
- Address Space (Memory), File Descriptors (covered later), etc...
```
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
pid_t cpid, mypid;
pid_t pid = getpid(); /* get current processes PID */
printf("Parent pid: %d\n", pid);
cpid = fork();
if (cpid > 0) { /* Parent Process */
mypid = getpid();
printf("[%d] parent of [%d]\n", mypid, cpid);
} else if (cpid == 0) { /* Child Process */
mypid = getpid();
printf("[%d] child\n", mypid);
} else {
perror("Fork failed");
}
}
```
### fork1.c
```c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
pid_t cpid, mypid;
pid_t pid = getpid(); /* get current process PID */
printf("Parent pid: \%d\n", pid);
cpid = fork();
if (cpid > 0) { /* Parent Process */
mypid = getpid();
printf("[\%d] parent of [\%d]\n", mypid, cpid);
} else if (cpid == 0) { /* Child Process */
mypid = getpid();
printf("[\%d] child\n", mypid);
} else {
perror("Fork failed");
}
}
```
### Mystery: fork_race.c
```c
int i;
int cpid = fork();
if (cpid > 0) {
for (i = 0; i < 10; i++) {
printf("Parent: \%d\n", i);
// sleep(1);
}
} else if (cpid == 0) {
for (i = 0; i > -10; i--) {
printf("Child: \%d\n", i);
// sleep(1);
}
}
```
- **What does this print?**
- **Would adding the calls to sleep() matter?**
### Process Management API
- `exit` – terminate a process
- `fork` – copy the current process
- `exec` – change the *program* being run by the current process
- `wait` – wait for a process to finish
- `kill` – send a *signal* (interrupt-like notification) to another process
- `sigaction` – set handlers for signals
### Starting new Program: variants of exec
```c
int cpid = fork();
if (cpid > 0) { /* Parent Process */
tcpid = wait(&status);
} else if (cpid == 0) { /* Child Process */
char *args[] = {"ls", "-l", NULL};
execv("/bin/ls", args);
/* execv doesn’t return when it works.
So, if we got here, it failed! */
perror("execv");
exit(1);
}
```
fork2.c – parent waits for child to finish
```c
int status;
pid_t tcpid;
if (cpid > 0) {
mypid = getpid();
printf("[%d] parent of [%d]\n", mypid, cpid);
tcpid = wait(&status);
printf("[%d] bye %d(%d)\n", mypid, tcpid, status);
} else if (cpid == 0) {
mypid = getpid();
printf("[%d] child\n", mypid);
exit(42);
}
```
Process Management: The Shell pattern
```
child
pid=fork();
if (pid==0)
exec(...);
else
wait(&stat)
parent
pid=fork();
if (pid==0)
exec(...);
else
wait(&stat)
```
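One iteration of the shell pattern can be sketched as follows. To keep the parent's wait logic visible in isolation, the child exits with a status directly instead of calling exec (`run_child` is an invented name):

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, let it "run a command" (here: just exit with a code;
 * a real shell would call exec here), and have the parent wait for it
 * and collect the exit status. */
int run_child(int code) {
    pid_t pid = fork();
    if (pid == 0)
        exit(code);               /* child */
    int status;
    waitpid(pid, &status, 0);     /* parent blocks until child exits */
    return WEXITSTATUS(status);
}
```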
Process Management API
- exit – terminate a process
- fork – copy the current process
- exec – change the program being run by the current process
- wait – wait for a process to finish
- kill – send a signal (interrupt-like notification) to another process
- sigaction – set handlers for signals
inf_loop.c
```
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <signal.h>
void signal_callback_handler(int signum) {
printf("Caught signal!\n");
exit(1);
}
int main() {
struct sigaction sa;
sa.sa_flags = 0;
sigemptyset(&sa.sa_mask);
sa.sa_handler = signal_callback_handler;
sigaction(SIGINT, &sa, NULL);
while (1) {}
}
```
inf_loop.c
Q: What would happen if the process receives a SIGINT signal, but does not register a signal handler?
A: The process dies!
For each signal, there is a default handler defined by the system.
Common POSIX Signals
- SIGINT – control-C
- SIGTERM – default for kill shell command
- SIGTSTP – control-Z (default action: stop process)
- SIGKILL, SIGSTOP – terminate/stop process
- Can't be changed with sigaction
- Why?
Recall: UNIX System Structure
User Mode
Kernel Mode
Hardware
Recall: OS Library (libc) Issues Syscalls
- OS Library: Code linked into the user-level application that provides a clean or more functional API to the user than just the raw syscalls
- Most of this code runs at user level, but makes syscalls (which run at kernel level)
Unix/POSIX Idea: Everything is a “File”
- Identical interface for:
- Files on disk
- Devices (terminals, printers, etc.)
- Networking (sockets)
- Local interprocess communication (pipes, sockets)
- Based on the system calls `open()`, `read()`, `write()`, and `close()`
- Additional: `ioctl()` for custom configuration that doesn't quite fit
- Note that the “Everything is a File” idea was a radical idea when proposed
- Dennis Ritchie and Ken Thompson described this idea in their seminal paper on UNIX called “The UNIX Time-Sharing System” from 1974
- I posted this on the resources page if you are curious
Aside: POSIX interfaces
- POSIX: Portable Operating System Interface (for UNIX?)
- Interface for application programmers (mostly)
- Defines the term “Unix,” derived from AT&T Unix
- Created to bring order to many Unix-derived OSes, so applications are portable
- Partially available on non-Unix OSes, like Windows
- Requires standard system call interface
The File System Abstraction
- File
- Named collection of data in a file system
- POSIX File data: sequence of bytes
- Could be text, binary, serialized objects, ...
- File Metadata: information about the file
- Size, Modification Time, Owner, Security info, Access control
- Directory
- “Folder” containing files & directories
- Hierarchical (graphical) naming
- Path through the directory graph
- Uniquely identifies a file or directory
- `/home/ff/cs162/public_html/fa14/index.html`
- Links and Volumes (later)
Connecting Processes, File Systems, and Users
- Every process has a *current working directory* (CWD)
- Can be set with system call:
```c
int chdir(const char *path); //change CWD
```
- Absolute paths ignore CWD
- `/home/oski/cs162`
- Relative paths are relative to CWD
- `index.html`
- Refers to index.html in the current working directory
- `../index.html`
- Refers to index.html in the parent of the current working directory
- `~/cs162/index.html`
- Refers to index.html in the cs162 directory under the user's home directory
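A small sketch of `chdir` in action (the helper name is invented): after `chdir`, relative paths resolve against the new CWD, which `getcwd` reports back.

```c
#include <string.h>
#include <unistd.h>

/* Report whether the current working directory equals `expect`. */
int cwd_is(const char *expect) {
    char buf[4096];
    return getcwd(buf, sizeof buf) != NULL && strcmp(buf, expect) == 0;
}
```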
I/O and Storage Layers
- Application / Service
- High Level I/O: streams (buffered I/O)
- Low Level I/O: file descriptors – open(), read(), write(), close(), ...
- Syscall: open file descriptions
- File System: files/directories/indexes
- I/O Driver: commands and data transfers
- Hardware: disks, flash, controllers, DMA
C High-Level File API – Streams
- Operates on “streams” – unformatted sequences of bytes (whether text or binary data), with a position:
```
#include <stdio.h>
FILE *fopen(const char *filename, const char *mode);
int fclose(FILE *fp);
```
<table>
<thead>
<tr>
<th>Text Mode</th>
<th>Binary Mode</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>r</td>
<td>rb</td>
<td>Open existing file for reading</td>
</tr>
<tr>
<td>w</td>
<td>wb</td>
<td>Open for writing; created if it does not exist</td>
</tr>
<tr>
<td>a</td>
<td>ab</td>
<td>Open for appending; created if it does not exist</td>
</tr>
<tr>
<td>r+</td>
<td>rb+</td>
<td>Open existing file for reading & writing</td>
</tr>
<tr>
<td>w+</td>
<td>wb+</td>
<td>Open for reading & writing; truncated to zero if it exists, created otherwise</td>
</tr>
<tr>
<td>a+</td>
<td>ab+</td>
<td>Open for reading & writing; created if it does not exist. Reads from beginning, writes as append</td>
</tr>
</tbody>
</table>
- Open stream represented by pointer to a FILE data structure
- Error reported by returning a NULL pointer
C API Standard Streams – stdio.h
- Three predefined streams are opened implicitly when the program is executed.
- FILE *stdin – normal source of input, can be redirected
- FILE *stdout – normal destination of output, can be redirected too
- FILE *stderr – diagnostics and errors
- STDIN / STDOUT enable composition in Unix
- All can be redirected
- cat hello.txt | grep “World!”
- cat’s stdout goes to grep’s stdin
C High-Level File API
```c
// character oriented
int fputc(int c, FILE *fp); // rtn c or EOF on err
int fputs(const char *s, FILE *fp); // rtn > 0 or EOF
int fgetc(FILE *fp);
char *fgets(char *buf, int n, FILE *fp);
```
```c
// block oriented
size_t fread(void *ptr, size_t size_of_elements, size_t number_of_elements, FILE *a_file);
size_t fwrite(const void *ptr, size_t size_of_elements, size_t number_of_elements, FILE *a_file);
```
```c
// formatted
int fprintf(FILE *restrict stream, const char *restrict format, ...);
int fscanf(FILE *restrict stream, const char *restrict format, ...);
```
### C Streams: Char-by-Char I/O
```c
int main(void) {
FILE* input = fopen("input.txt", "r");
FILE* output = fopen("output.txt", "w");
int c;
c = fgetc(input);
while (c != EOF) {
fputc(c, output);
c = fgetc(input);
}
fclose(input);
fclose(output);
}
```
### C Streams: Block-by-Block I/O
```c
#define BUFFER_SIZE 1024
int main(void) {
FILE* input = fopen("input.txt", "r");
FILE* output = fopen("output.txt", "w");
char buffer[BUFFER_SIZE];
size_t length;
length = fread(buffer, sizeof(char), BUFFER_SIZE, input);
while (length > 0) {
fwrite(buffer, sizeof(char), length, output);
length = fread(buffer, sizeof(char), BUFFER_SIZE, input);
}
fclose(input);
fclose(output);
}
```
### C High-Level File API
```c
// character oriented
int fputc(int c, FILE *fp); // rtn c or EOF on err
int fputs(const char *s, FILE *fp); // rtn > 0 or EOF
int fgetc(FILE *fp);
char *fgets(char *buf, int n, FILE *fp);
// block oriented
size_t fread(void *ptr, size_t size_of_elements,
size_t number_of_elements, FILE *a_file);
size_t fwrite(const void *ptr, size_t size_of_elements,
size_t number_of_elements, FILE *a_file);
// formatted
int fprintf(FILE *restrict stream, const char *restrict format, ...);
int fscanf(FILE *restrict stream, const char *restrict format, ...);
```
### Aside: Check your Errors!
- Systems programmers should always be paranoid!
- Otherwise you get intermittently buggy code
- We should really be writing things like:
```c
FILE* input = fopen("input.txt", "r");
if (input == NULL) {
    perror("Failed to open input file");
    exit(EXIT_FAILURE);
}
```
- **Be thorough about checking return values!**
- Want failures to be systematically caught and dealt with
- I may be a bit loose with error checking for examples in class (to keep short)
- Do as I say, not as I show in class!
### C High-Level File API: Positioning The Pointer
```c
int fseek(FILE *stream, long int offset, int whence);
long int ftell(FILE *stream);
void rewind(FILE *stream);
```
- For `fseek()`, the `offset` is interpreted based on the `whence` argument (constants in `stdio.h`):
- `SEEK_SET`: Then `offset` interpreted from beginning (position 0)
- `SEEK_END`: Then `offset` interpreted backwards from end of file
- `SEEK_CUR`: Then `offset` interpreted from current position
- Overall preserves high-level abstraction of a uniform stream of objects
---
### I/O and Storage Layers

From top to bottom:

- Application / Service
- High-Level I/O: streams (buffered I/O)
- Low-Level I/O: file descriptors
- Syscall: open file descriptions
- File System
- I/O Driver: commands and data transfers
- Hardware: disks, flash, controllers, DMA
---
### C Low-Level File API: Getting a File Descriptor
```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
int open(const char *filename, int flags, ... /* mode_t mode */);
int creat(const char *filename, mode_t mode);
int close(int filedes);
```
- Integer return from `open()` is a **file descriptor**
- Error indicated by `return < 0`: the global `errno` variable set with error (see man pages)
- Operations on **file descriptors**:
- Open system call created an **open file description** entry in system-wide table of open files
- **Open file description** object in the kernel represents an instance of an open file
- Why give user an integer instead of a pointer to the file description in kernel?
---
### Low-Level File I/O: The Raw System-Call Interface
```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
int open(const char *filename, int flags, ... /* mode_t mode */);
int creat(const char *filename, mode_t mode);
int close(int filedes);
```
- Bit vector of:
- Access modes (Rd, Wr, ...)
- Open Flags (Create, ...)
- Operating modes (Appends, ...)
---
```c
// Get file descriptor inside FILE *
int fileno(FILE *stream);

// Make FILE * from descriptor
FILE *fdopen(int filedes, const char *opentype);
```
### Low-Level File API
- Read data from open file using file descriptor:
```c
ssize_t read (int filedes, void *buffer, size_t maxsize)
```
- Reads up to maxsize bytes – **might actually read less!**
- Returns bytes read, 0 => EOF, -1 => error
- Write data to open file using file descriptor
```c
ssize_t write (int filedes, const void *buffer, size_t size)
```
- Returns number of bytes written
- Reposition file offset within kernel (this is independent of any position held by a high-level `FILE*` stream for this file!)
```c
off_t lseek (int filedes, off_t offset, int whence)
```
Example: **lowio.c**
```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main()
{
    char buf[1000];
    int fd = open("lowio.c", O_RDONLY, S_IRUSR | S_IWUSR);
    ssize_t rd = read(fd, buf, sizeof(buf));
    int err = close(fd);
    ssize_t wr = write(STDOUT_FILENO, buf, rd);
}
```
- How many bytes does this program read?
POSIX I/O: Design Patterns
- Open before use
- Access control check, setup happens here
- Byte-oriented
- Least common denominator
- OS responsible for hiding the fact that real devices may not work this way (e.g. hard drive stores data in blocks)
- Explicit close
POSIX I/O: Kernel Buffering
- Reads are buffered inside kernel
- Part of making everything byte-oriented
- Process is **blocked** while waiting for device
- Let other processes run while gathering result
- Writes are buffered inside kernel
- Complete in background (more later on)
- Return to user when data is “handed off” to kernel
- This buffering is part of global buffer management and caching for block devices (such as disks)
- Items typically cached in quanta of disk block sizes
- We will have many interesting things to say about this buffering when we dive into the kernel
**Low-Level I/O: Other Operations**
- Operations specific to terminals, devices, networking, ...
- e.g., ioctl
- Duplicating descriptors
- int dup2(int old, int new);
- int dup(int old);
- Pipes – channel
- int pipe(int pipefd[2]);
- Writes to pipefd[1] can be read from pipefd[0]
- File Locking
- Memory-Mapping Files
- Asynchronous I/O
**Low-Level vs High-Level file API**
- Low-level direct use of syscall interface: open(), read(), write(), close()
- Opening of file returns file descriptor: int myfile = open(...);
- File descriptor only meaningful to kernel
- Index into a per-process table (in the process control block, PCB) which holds pointers to the kernel-level structure ("file description") describing the file.
- Every read() or write() causes syscall no matter how small (could read a single byte)
- Consider loop to get 4 bytes at a time using read():
- Each iteration enters kernel for 4 bytes.
- High-level buffered access: fopen(), fread(), fwrite(), fclose()
- Opening of file returns ptr to FILE: FILE *myfile = fopen(...);
- FILE structure in user space contains:
- a chunk of memory for a buffer
- the file descriptor for the file (fopen() will call open() automatically)
- Every fread() or fwrite() filters through buffer and may not call read() or write() on every call.
- Consider loop to get 4 bytes at a time using fread():
- First call to fread() calls read() for block of bytes (say 1024). Puts in buffer and returns first 4 to user.
- Subsequent fread() grab bytes from buffer
**Low-Level vs High-Level File API**
- Streams are buffered in user memory:
- printf("Beginning of line \n");
- sleep(10); // sleep for 10 seconds
- printf("and end of line\n");
- Prints out everything at once
- Operations on file descriptors are visible immediately
- write(STDOUT_FILENO, "Beginning of line ", 18);
- sleep(10); // sleep for 10 seconds
- write(STDOUT_FILENO, "and end of line\n", 16);
- Outputs "Beginning of line " 10 seconds earlier than "and end of line"
Conclusion
- System Call Interface is “narrow waist” between user programs and kernel
- Must enter kernel atomically by setting PC to kernel routine at same time that CPU enters kernel mode
- Processes consist of one or more threads in an address space
- Abstraction of the machine: execution environment for a program
- Can use fork, exec, etc. to create and manage processes
- We saw the role of the OS library
- Provide API to programs
- Interface with the OS to request services
- Streaming IO: modeled as a stream of bytes
- Most streaming I/O functions start with “f” (like “fread”)
- Data buffered automatically by C-library function
- Low-level I/O:
- File descriptors are integers
- Low-level I/O supported directly at system call level
---
Introducing DocumentDB
A NoSQL Database for Microsoft Azure
## Contents

- Why DocumentDB?
- The DocumentDB Data Model
- Working with Data
  - RESTful Access Methods
  - DocumentDB SQL
- Executing Logic in the Database
  - Stored Procedures
  - Triggers
  - User-Defined Functions
- Consistency Options
- Conclusion
- About the Author
Why DocumentDB?
Suppose you’re responsible for creating a new application. You’re not entirely sure what kinds of data it will work with, although you know that it will be used by a variety of clients. You’re also not sure how that data should be structured; things are bound to change over the application’s lifetime. You’re not even sure how much data the application will need to handle.
You do know some things, however. You know you want the application to run in the public cloud, for all of the usual reasons: fast deployment, low cost, scalability, and more. You know that the application needs to be available all the time, which means that downtime for things like schema changes isn’t an option. You know that you’ll need powerful queries and atomic transactions—simple reads and writes aren’t enough. And you know that you’d like to build on your existing knowledge rather than be forced to grapple with an entirely unfamiliar technology.
You could use a relational database for this application. On Microsoft Azure, for example, you might use SQL Database, which is a managed relational service, or you might run your own database server in a virtual machine. But going this route requires defining a schema up front, then probably accepting downtime whenever you modify that schema to handle changes in the structure of your data. Relational databases can also be hard to scale for lots of data, and using one means addressing the challenge of object/relational mapping.
Is there another option? There is; instead of a relational database, your application can use DocumentDB, a managed NoSQL database service provided by Microsoft Azure.
DocumentDB is designed for situations like the one just described. It doesn’t require any kind of schema, opting instead to store data as JavaScript Object Notation (JSON). This frees application developers from being locked into a hard-to-change structure for their data. It also frees them from worrying about object/relational mapping, since the state of their application’s objects can typically be stored directly as JSON. And because DocumentDB is a managed Azure service, a developer can create a new database in minutes, then let DocumentDB handle much of the management. All of this makes development, deployment, and updating simpler and faster.
To support applications with lots of users and lots of data, DocumentDB is designed to scale: a single database can be spread across many different machines, and it can contain hundreds of terabytes of data. DocumentDB also provides a query language based on SQL, along with the ability to run JavaScript code directly in the database as stored procedures and triggers with atomic transactions.
The truth is that applications are different today, and the way they work with data is different, too. Database technologies are evolving to reflect these changes, as DocumentDB shows. And this NoSQL database service isn’t difficult to understand—you just need to grasp a few fundamental concepts. Those concepts include:
- The DocumentDB data model.
- How applications work with data.
- The options applications have for balancing performance with data consistency.
What follows looks at each of these.
The DocumentDB Data Model
DocumentDB’s data model is simple: all data is stored in JSON documents\(^1\). For example, suppose you’re creating a DocumentDB application that works with customers. Information about each of those customers would typically be described in its own JSON document, and so the document for the customer Contoso might look like this:
```json
{
"name": "Contoso",
"country": "Germany",
"contacts":
[
{"admin": "Johann Schmidt", "email": "johschmidt@contoso.com"},
{"purchasing": "Anusha Swami", "email": "anusha@contoso.com"}
],
"salesYTD": 49003.23
}
```
The document for the customer Fabrikam would likely be similar, but it needn’t be identical. It might look like this:
```json
{
"name": "Fabrikam",
"country": "USA",
"contacts":
[
{"ceo": "Mary Chen", "email": "mary@fabrikam.com",
"phone": "510-555-3443"},
{"purchasing": "Frank Allen", "email": "franka@fabrikam.com",
"email": "fallen@fabrikam.com"}
],
"salesRank": 3,
"salesYTD": 1399450.22
}
```
As these simple documents show, JSON data is modelled as name/value pairs. In this example, each customer has elements for name and country, with an appropriate value for each one. Each also has a contacts element, the value for which is an array of name/value pairs wrapped in square brackets. An element’s values can be character strings, integers, floating point numbers, or another JSON type.
Notice that while these two customer documents are similar, their structure isn’t identical. This is fine; DocumentDB doesn’t enforce any schema. In this example, both customers have several common elements, such as name and country. There are also differences, however. The contacts for Fabrikam include the CEO—they’re a big customer—along with her phone number. Also, the purchasing manager for Fabrikam has two email addresses, rather than the single address for Contoso’s purchasing manager, and the Fabrikam document contains an element describing its sales rank. This is all perfectly legal in DocumentDB. Since there's no schema, there's no requirement that all documents conform to the same structure.
\(^1\) As its name suggests, DocumentDB fits in the NoSQL category known as document databases. It’s not a key/value store like Azure Tables or a column family store like HBase.
DocumentDB groups JSON documents into collections. A single DocumentDB database can contain many collections; to grow the database, you just add a new collection. Figure 1 shows how this looks.

**Figure 1:** A DocumentDB database contains collections of JSON documents.
As the figure suggests, the documents in a particular collection might all look quite similar, with each one containing, say, the information for a specific customer in the style shown earlier. It’s also possible for each document in a collection to look completely different—DocumentDB doesn’t constrain this. Unlike a relational table, where every row holds data in a fixed set of columns, a document can contain whatever the application needs. And although it’s not shown in Figure 1, documents can have attachments such as videos that are accessible via DocumentDB but are physically stored in Azure Blobs or elsewhere.
With DocumentDB, an application typically keeps all of the data about some entity, such as a customer, in a single document. Unlike a relational database, which would probably spread that data across several different tables, applications using DocumentDB commonly keep it all together. While the style used by a relational database has some advantages, storing all of an object’s data in one place can make life simpler for application developers. Rather than accessing their data using complex queries with one or more joins, for example, they can instead work directly with a document containing everything they need. This approach also speeds up access, since a DocumentDB request can often look at just one document to find what’s needed.
**Working with Data**
DocumentDB clients can be written in multiple languages, including C#, JavaScript, and Python. Whatever choice a developer makes, the client accesses DocumentDB through RESTful access methods. A developer can use these to work with documents in a collection in a few different ways. The options are:
- Using these access methods directly for create/read/update/delete (CRUD) operations.
- Submitting requests expressed in *DocumentDB SQL*.
- Defining and executing logic that runs inside DocumentDB, including stored procedures, triggers, and user-defined functions (UDFs).
Figure 2 illustrates these options.
Figure 2: Clients access documents in collections via RESTful access methods and can also run logic in the database itself.
RESTful Access Methods
If an application has the necessary permissions, it can use DocumentDB’s RESTful access methods to perform CRUD operations on documents and other resources. Like every RESTful interface, DocumentDB uses the standard HTTP verbs:
- A GET request returns the value of a resource, such as a document.
- A PUT request replaces a resource.
- A POST request creates a new resource. POSTs are also used to send DocumentDB SQL requests and to create new stored procedures, triggers, and UDFs.
- A DELETE request removes a resource.
A developer using this interface is free to construct requests manually—it’s just REST. But to make life easier, DocumentDB provides several client libraries. As Figure 2 shows, the options include .NET (with LINQ support), JavaScript, Node.js, and Python.
**DocumentDB’s Native Tongue: JavaScript**
Whether an application uses DocumentDB’s client libraries or directly invokes its RESTful access methods, the code can be written in many different languages. Still, it’s fair to say that the native tongue of DocumentDB is JavaScript.
One reason for this is that DocumentDB returns results in JSON. A JavaScript application can use a JSON parser to turn these results directly into JavaScript variables, modify those variables, then send the changed data back to the database. This is a simple, natural way to work; there’s no impedance mismatch, which means there’s also no need for mapping, object/relational or otherwise.
Also, logic that executes within DocumentDB itself, including stored procedures, triggers, and UDFs, must be written in JavaScript, as described later. While developers working in C# or other languages can certainly use DocumentDB successfully, writing code that runs in the database will require learning JavaScript or finding somebody who knows it. Given the large (and growing) number of developers who already work in this language, it shouldn’t be surprising that DocumentDB’s creators chose to make it a fundamental part of this cloud database service.
---
**DocumentDB SQL**
A DocumentDB client can read and write data using the service’s RESTful access methods. But a real database needs a real query language, something that lets applications work with data in more complex ways. This is what DocumentDB SQL provides.
This language is an extended subset of SQL, a technology that many developers already know. For example, suppose the simple JSON documents shown earlier are contained in a collection called `customers`. Here’s a query on that collection:
```sql
SELECT c.salesYTD
FROM customers c
WHERE c.name = "Fabrikam"
```
As anybody who knows SQL can probably figure out, `SELECT` requests the value of the element `salesYTD`, `FROM` indicates that the query should be executed against documents in the `customers` collection, and `WHERE` specifies the condition that documents within that collection should meet. The query’s result is year-to-date sales for Fabrikam formatted as JSON data:
```json
{
"salesYTD": 1399450.22
}
```
Executing Logic in the Database
DocumentDB SQL lets a client issue a request that’s parsed and executed when it’s received. But there are plenty of situations where it makes more sense to run logic stored in the database itself. DocumentDB provides several ways to do this, including stored procedures (commonly called sprocs), triggers, and user-defined functions (UDFs). A collection can contain any or all of them.
Stored Procedures
Stored procedures implement logic, which means they must be written in some programming language. Relational databases commonly create their own language for doing this, such as SQL Server’s T-SQL. But what should this language look like for a database that stores JSON documents? The answer is obvious: stored procedures should be written in JavaScript, which is exactly what DocumentDB does.
To execute a stored procedure, a client application issues a POST request indicating which sproc to run and passing in any input parameters. Figure 3 illustrates how the sproc works with documents in a collection.
Indexing in DocumentDB
Indexes are an important aspect of database technologies. Creating an index makes lookups faster, and so operations on indexed elements will have better performance. Some NoSQL databases, such as many key/value stores, provide just a single index. Other approaches, such as relational databases and some document databases, let their users explicitly create indexes on particular elements.
DocumentDB takes neither of these paths. Instead, it by default creates an index on every document in a collection. In the example documents shown earlier, for example, DocumentDB would automatically create indexes on name, country, contacts, and more. Developers don't need to decide up front which JSON elements they’re likely to query on, then create indexes only for those elements. DocumentDB automatically indexes all of them (and advanced users can configure and tune these indexes as needed). This gives ad hoc queries speedy access to everything in the database.
Figure 3: A stored procedure is JavaScript code that works directly with document elements as variables.
As the figure shows, sprocs work with documents in a straightforward way. When the sproc begins, elements of the JSON document (or documents) it's working with are copied into JavaScript variables (step 1). The sproc's code then works with those variables, changing them as needed (step 2). When the sproc completes, any modified variables can have their values written back to the JSON document (step 3). The goal is to make writing sprocs as simple and natural as possible—they're just JavaScript.
Every sproc is wrapped in an atomic transaction. If the sproc ends normally, all of the changes it has made to documents in this collection will be committed. If it throws an exception, however, all of the changes it has made to these documents will be rolled back. And while the sproc is executing, its work is isolated—no other requests to this database will see partial results.
Stored procedures can make life easier for developers, since logic that might otherwise be replicated in multiple applications can instead be encapsulated in the database. Stored procedures can also have performance advantages. Rather than requiring an application to issue multiple requests to accomplish a task, for example, with the round trips this implies, a sproc can do all of this work with a single call. While stored procedures aren't right for every situation, they're definitely a useful and important part of modern databases.
**Triggers**
DocumentDB triggers are similar in some ways to stored procedures: they're invoked via a POST request, and they're written in JavaScript. They also materialize JSON documents into JavaScript variables and are automatically wrapped in an atomic transaction. Unlike sprocs, however, a trigger runs when a specific event happens, such as data being created, changed, or deleted.
DocumentDB supports pre-triggers, which run before the event occurs, and post-triggers, which run after the event has finished. For example, a pre-trigger executed when a document is changed might do data validation, making sure that the new data conforms to a specific format. A post-trigger run when a document is created might update another document in the collection that tracks all newly created information. If an application creates and registers these two triggers, future requests to change or add documents can indicate that the database should also run the appropriate trigger. Rather than requiring the application to explicitly perform data validation and document tracking itself, it can rely on the triggers to handle these.
If a trigger throws an exception, the transaction it’s part of aborts, and everything gets rolled back. This includes the work done by the trigger itself and the work done by whatever request caused the trigger to execute. For example, if a post-trigger run on document creation aborts, the new document will not be created.
Triggers are a useful way to carry out common database functions, and like stored procedures, they’re an integral part of modern databases.
**User-Defined Functions**
Like stored procedures and triggers, user-defined functions are written in JavaScript, and they run within DocumentDB itself. UDFs can’t make changes to the database, however—they’re read-only. Instead, a UDF provides a way to extend DocumentDB SQL with custom code.
For example, suppose the customers collection contained a UDF called calculateTax that computed the tax on sales. To find all customers where the tax is more than $1,000, an application might issue a query like this:
```sql
SELECT *
FROM customers c
WHERE calculateTax(c.salesYTD) > 1000
```
Putting this calculation in a UDF makes it easier to use, since it acts like part of the query language. It also makes the logic simpler to share, since it’s stored in the database rather than in a single application.
UDFs can do quite a bit more than this. Since they’re written in JavaScript, they make it straightforward to add standard JavaScript functions to DocumentDB SQL. They can also be used to check for the presence of elements in a document, such as returning all customer documents that have a salesRank element, or implementing geospatial queries, such as NEAR, or many other things. The ability to extend the query language with custom JavaScript code can make life substantially easier for the people who use DocumentDB.
Consistency Options
DocumentDB is designed to be both scalable and reliable. To achieve this, it maintains at least three copies of all data, storing each copy on a different physical server. This replication is done at the granularity of collections: a single server can store multiple collections, but a collection is never split across servers.
Replication helps scalability because different clients reading data in the same collection can potentially have their requests handled by any of the replicas—a single server won’t be a bottleneck. Replicating each collection also helps reliability by ensuring that data is still available even if one or two servers become inaccessible.
But replication isn’t free. In any replicated system, the big challenge happens when clients write data. How can a document be changed while still keeping everything consistent? Propagating a write across all replicas takes some time, and while it’s happening, either the same application or another one might read this just-modified data. If this read is handled by a replica that’s already been updated, life is good; the application will see the correct data. But suppose the read is handled by one of the replicas that hasn’t yet been informed of the latest write. In this case, the read will return out-of-date data.
What’s the best way to handle this situation? The answer depends on the application. In some cases, applications absolutely need to see the most current data on every read. But some applications can accept reading slightly out-of-date information, which can improve the application’s performance and availability.
Because different applications have different requirements, DocumentDB doesn’t mandate a choice. Instead, it defines four distinct consistency options, each with different tradeoffs between data correctness and performance. The choices are:
Strong: A DocumentDB client always sees completely consistent data. The tradeoff is that reads and writes are slower than with the other three options. An application that must always see the most current data, such as banking software that moves money between documents, might choose this option.
Bounded Staleness: A client might see old data, but it’s guaranteed to see changes in the order in which they were made. In other words, clients will never see out-of-order data. A client can also specify a limit for how old that data can be, e.g., one second. In a multi-player game, for instance, which requires great performance and strict ordering of events, Bounded Staleness might be the right choice.
Session: A client will always read its own writes correctly, but other clients reading this same data might see older values or out-of-order updates. An application that works on behalf of a specific user, such as a blogging application, might choose this option. In cases like these, each user expects to see the changes she makes, i.e., everything done in her session, right away. Yet she probably doesn’t care whether there’s a slight delay in seeing changes made by other users. This turns out to be the sweet spot for many applications—it has the best trade-off between correctness and performance—and so it’s the default in DocumentDB.
Eventual: This option has the highest performance, but a client might sometimes read out-of-date information or see updates out of order.
DocumentDB databases default to Session consistency, but developers can change a database’s default if necessary. They can also override the default consistency level on a per-request basis, so they are free to use the consistency option that’s best for each situation.
Conclusion
DocumentDB is a relatively simple and scalable database—it’s a NoSQL technology—that also provides more advanced data management capabilities such as a SQL-based query language, stored procedures, and atomic transactions. You should consider using it whenever your application needs any or all of the following:
- The programming ease provided by native JSON and JavaScript support.
- The flexibility of not being locked into a schema.
- The scale and availability allowed by replicating data across multiple machines.
- The simplicity of a managed database service on a public cloud platform.
As computing continues its move to the cloud, more and more applications can benefit from this approach. In fact, a cloud platform that doesn’t offer a document database today is probably behind the times.
About the Author
David Chappell is Principal of Chappell & Associates (www.davidchappell.com) in San Francisco, California. Through his speaking, writing, and consulting, he helps people around the world understand, use, and make better decisions about new technologies.
Efficient Encoding of SystemC/TLM in Promela
Kevin Marquet, Matthieu Moy, Bertrand Jeannet
To cite this version:
Kevin Marquet, Matthieu Moy, Bertrand Jeannet. Efficient Encoding of SystemC/TLM in Promela. DATICS-IMECS, Mar 2011, Hong Kong SAR China. hal-00557515
HAL Id: hal-00557515
https://hal.science/hal-00557515
Submitted on 19 Jan 2011
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Efficient Encoding of SystemC/TLM in Promela
Kevin Marquet
Verimag
Univ. Joseph Fourier
Grenoble, France
Kevin.Marquet@imag.fr
Bertrand Jeannet
INRIA Rhônes-Alpes
Grenoble, France
Bertrand.Jeannet@inrialpes.fr
Matthieu Moy
Verimag
Grenoble INP
Grenoble, France
Matthieu.Moy@imag.fr
Abstract—To deal with the ever-growing complexity of Systems-on-Chip, designers use models early in the design flow. SystemC is a commonly used tool for writing such models. One thriving approach to verifying these models is to encode their semantics into a formal language and then verify the result with verification tools. Various encodings of SystemC into formal languages have already been proposed, with different performance implications. In this paper, we investigate a new, automatic, asynchronous means of formalizing models. Our encoding supports the subset of SystemC’s concurrency and communication constructs used for high-level modeling. We increase confidence that the encoded programs have the same semantics as the originals by model-checking a set of properties. We give experimental results on our formalization and compare with previous works.
I. INTRODUCTION
As the complexity of embedded systems grows, the need for new methods has appeared for the co-design of hardware and software. Indeed, low-level hardware description languages such as VHDL and Verilog simulate slowly, can hardly be used to design complex systems and therefore make early software development difficult. Consequently, higher-level modeling tools have appeared, allowing hardware and software descriptions.
Transaction-Level Modeling [4] (TLM) is an approach in which the architecture and the behavior of a System-on-Chip (SoC) are described in an executable model, but the micro-architecture details and precise timing behavior are abstracted away. SystemC [20] has become the de facto standard for TLM modeling. It contains a simulation kernel that can execute concurrent processes communicating through channels and shared variables, using C++ libraries. In this paper, we are interested in TLM programs, written in SystemC. We focus on the subset of SystemC needed for TLM modeling, leaving apart the constructs originally introduced in SystemC to write lower-level programs (like RTL).
SystemC descriptions are C++ concurrent programs that can be tested and/or verified in order to detect design flaws. Verifying a concurrent program can be done with various approaches. One thriving approach is to describe its semantics formally, and then to verify this semantics using verification tools. The first step is called model extraction and leads to the translation of the program into a formal representation, and the second step is the verification performed on the formal representation. Different representations can be chosen, that model differently time and concurrency, and that are connected to different verification tools.
This paper focuses on the issue of model extraction, in the context of the verification of SoC modeled as SystemC concurrent programs. Our contributions are as follows:
1) We present new encoding principles in section IV for the extraction of formal representations from SystemC programs, and in particular for modeling the semantics of SystemC scheduler. We argue that this encoding is simple and elegant, although it involves some subtle points. Its main goal is however to favor the efficiency of verification tools. This extraction is performed in a fully automatic way by our verification chain.1
2) In order to validate their correctness, we define properties that must hold for an encoding to be valid. These properties and how they are tested are detailed in section V.
3) Finally, section VI presents experimental results on SystemC examples translated to Promela, the asynchronous formalism used as input to the SPIN model-checker. Our results show major improvements over similar past works, thanks to the fact that our encoding does not introduce complex behaviors limiting the applicability of formal verification tools. We show in particular a tremendous reduction in the number of states that SPIN needs to explore.
Before presenting these, we present SystemC in section II and compare our approach to related works in section III.
II. SYSTEMC
We give a very partial overview of SystemC, focusing on the points that are relevant for this paper.
A SystemC program defines an architecture, i.e. a set of components and connections between them, and a behavior, i.e. components have a behavior defined by one or several processes and communicate with each other through ports. Once the architecture is defined (by the elaboration phase performed at the beginning of execution), the simulation phase starts: processes execute according to the SystemC scheduling policy. As an example, figure 1 shows a SystemC module containing two processes, one waiting for an event, the other notifying it.
We do not consider here the notion of δ-cycles [20], inspired from traditional HDL languages, since it is not useful for
1The implementation is open-source and available from http://gitorious.org/pinavm.
SC_MODULE(mytop) {
sc_event e;
SC_CTOR(mytop) {
SC_THREAD(myFctP); SC_THREAD(myFctQ);
}
void myFctP() {...; wait(e); ... }
void myFctQ() {...; e.notify(); ... }
}
Fig. 1. A basic SystemC module
TLM models (this implies that we do not support SystemC constructs like `wait(SC_ZERO_TIME)`, which makes a process wait until the next evaluation phase, or components such as `sc_signal` and `sc_fifo`). We focus on the following constructs of SystemC, which are the basis for TLM modeling:
- `wait(d: int)` Stops executing the current process, yields control back to the scheduler, and makes the current process wait for the given duration.
- `wait(e: event)` Stops executing the current process, yields control back to the scheduler, and makes the current process wait for the event to occur. SystemC allows the constructs `wait(e1 & e2)` and `wait(e1 | e2)` to wait for conjunctions and disjunctions of events.
- `event.notify()` Makes processes waiting for the specified event eligible (without stopping the current process).
- `event.notify(delay: int)` Triggers a notification after the given delay. In SystemC, only the earliest timed notification is kept, which simplifies the semantics of this primitive.
SystemC scheduling follows a non-preemptive scheduling policy. When several processes are eligible at the same time, the scheduler runs them in an unspecified order.
Concerning communications between processes, we use shared variables to model several threads belonging to the same module communicating by accesses to the fields of the module. Concerning TLM ports, our implementation does not manage them explicitly; it requires function calls to be made directly from module to module instead of relying on port/socket bindings [21], which are (useful) syntactic sugar. We therefore focus on the notion of method calls.
Restricting ourselves to a strict subset of SystemC is not a limitation as long as we focus on TLM models. Of course it implies that we cannot handle more general SystemC programs, but it also makes our approach more general in the sense that it could easily be adapted to other discrete-event cooperative simulators (like the cooperative version of jTLM [2]).
III. OVERVIEW OF THE PROBLEM AND RELATED WORKS
General overview: The challenge raised by formal verification of SystemC models is that SystemC has not been designed for this purpose. An option could be to consider them as regular C++ programs, but few verification tools are available for them, especially when the goal is to check functional properties. Moreover, a general verifier would have to analyze the SystemC class library and to rediscover by itself its high-level semantics. For these reasons, most related work proceeds differently: the user’s code is translated and abstracted to the formal model accepted by the targeted verification tool, whereas the high-level semantics of SystemC/TLM class libraries is hand-coded in the formal model. The verification tool is then applied to the resulting model.
Representation of the SystemC scheduler: Modeling the semantics of the SystemC library reduces mainly to modeling the SystemC scheduler. Three options can be imagined to represent the scheduler in a formal representation: (1) model the deterministic behavior of the reference implementation described in the SystemC standard [20]; or (2) model a more general non-deterministic scheduler, either (2a) as an explicit additional process, or (2b) by incorporating it in the semantics of the synchronization instructions (typically the ones described above). Choosing arbitrarily a specific, deterministic scheduler allows only to explore a subset of the behaviors. We do not want such restriction and therefore do not consider solution 1.
Solution 2a is interesting as it does not restrict the set of possible behaviors. This is the solution considered in [17]. However, encoding the scheduler as a special process interacting with the SystemC processes complicates the behavior of the global system. Typically, such an encoding induces additional communications between processes, compared to the original SystemC semantics. For instance, the encoding of the `event.notify()` primitive is likely to induce a context switch (as it changes the state of the scheduler), which does not occur in the original SystemC semantics. The unfortunate consequence is that such additional communications may prevent verification tools from performing powerful optimizations. Typically, partial-order reduction relies on a notion of “independent transitions”, and cannot be applied if the notion of “transition” in the model does not correspond to the notion of atomic sections in SystemC.
Consequently, we have chosen the approach of point 2b: we do not encode the scheduler as an explicit process composed in parallel with the SystemC processes. Instead, we integrate the scheduler in the semantics of the synchronization primitives that are used sequentially inside each SystemC process, without introducing any “artificial” context-switches.
Related work: The related works we are aware of that are based on encoding SystemC programs into other formalisms (see Fig. 2) all follow solution 2a, but they can be further classified according to the formal model considered, which may be synchronous or asynchronous.
LusSy [17] is a prototype of a complete verification chain. It encodes the processes and the scheduler in synchronous automata. The intermediate formalism is called HPIOM. The main drawback of this formalism is that it breaks down relevant information into lower-level ones, making the task harder for verification tools, that are unable to handle real case studies. A similar work [7] describes how to generate UPPAAL models from SystemC programs. Several other translation-based approaches have been proposed [19], [10], also introducing a lot of complexity in the encoding.
Other works consider asynchronous formalisms. We actually show in section IV-C that SystemC’s time semantics is encoded naturally and efficiently with deadline variables (similar to “clocks”) evolving asynchronously, unlike the semantics of the timed automata used in UPPAAL, in which clocks evolve synchronously. In [13], a SystemC process is encoded as a MicMac automaton, which distinguishes micro-states and macro-states. Micro-states represent points where the process cannot yield, contrary to macro-states, which are yielding points (typically following a wait()). MicMac automata can be composed in parallel using a dedicated product exploiting the notion of micro-states. This approach cannot be used directly in existing verification tools, which are not aware of micro-states. [22] proposes first to encode a SystemC program into MicMac automata and then to encode the MicMac automata into Promela. However, the last translation loses the specific benefits of the MicMac formalism. Moreover, we show that some SystemC notions are encoded naturally in Promela (in particular, atomic sections of SystemC correspond directly to the atomic statement in Promela), while using MicMac as an intermediate formalism prevents such direct translation and introduces unnecessary complexity in the encoding. To sum up, this approach implies re-encoding, in an explicit and asynchronous way, mechanisms that verification tools, including SPIN, can tackle very efficiently when the corresponding native mechanisms are used.
Our approach: asynchronous formalism + shared variables: This paper proposes a solution based on an asynchronous model (namely Promela) to encode TLM concurrent programs, that consists in modeling the asynchronous communications and the semantics of the scheduler by inserting synchronization primitives manipulating shared variables into the code of the processes. The expected gain of this approach is to minimize the interactions between processes, so as to let verification tools freely apply reduction techniques such as symmetry or partial order reductions.
Other Validation Approaches: Alternatives to formal verification are based on code execution, for instance standard testing, run-time verification [6] or explicit model-checking [5]. In [5] the original C++ code is instrumented so as to enable an on-the-fly state-space exploration of the model, based on the techniques of the CADP [1] toolbox to execute native code. These methods showed to be very efficient to explore the possible schedulings of a system, but are fundamentally limited to explicit-state exploration, and cannot be extended to perform symbolic model-checking or abstract interpretation. A hybrid approach is presented in [3], which executes C++ code natively for SC_METHODS, but relies on translation for SC_THREADS. This work is probably the closest to the one presented in this paper, as the encoding does not rely on a separate process for the scheduler.
IV. Translation from C++ and Encoding of SystemC Scheduler
We first recall the general principles of our tool chain for SystemC, then we describe precisely the encoding of SystemC synchronization primitives, and finally we discuss some alternatives. Among the primitives mentioned in section II, we will not consider delayed notifications, or waiting for conjunctions or disjunctions of events, but we discuss in section IV-C how to extend our encoding to handle such constructs.
A. Translating User Processes from C++ with PinaVM
Translating SystemC automatically requires the use of a complete SystemC front-end. Borrowing some ideas from Pinapa [16], we set up a SystemC front-end called PinaVM [15], able to take a SystemC program as input and produce an intermediate representation. This front-end is based on the compiler infrastructure LLVM [12], and the intermediate representation is mainly composed of basic blocks containing SSA (Static Single Assignment) instructions. PinaVM executes the elaboration phase like Pinapa, and uses a Just-In-Time compiler to retrieve SystemC information on events or ports, enriching the intermediate representation obtained from LLVM.
From the intermediate representation produced by our front-end, a back-end automatically produces a Promela program. Each SSA instruction is translated into an equivalent Promela instruction. Although Promela provides some of the structuring mechanisms of a function definition, these mechanisms provide no benefit to the verification engine compared to static inlining; we therefore chose to inline all function calls directly.
In this translation, each SystemC thread generates a Promela process. We do not consider in this paper the dynamic creation of processes, which is seldom encountered in SoC models.
B. Encoding synchronization primitives
In the encoding of SystemC synchronization primitives, we rely on three features related to concurrency that are provided by Promela:
1) The ability to use shared variables.
2) The blocked(cond) primitive, which stops the execution of the current process until condition cond on shared variables becomes true, and gives the control to another process (the actual syntax in Promela is simply {cond}).
3) The notion of atomic section, that can be interrupted with the blocked primitive.
In the sequel we denote by $E^k$ the event $k$, with $1 \leq k \leq N_e$ and the set of $N_p$ processes is denoted $P$.
**Events:** SystemC events are non-persistent: the instruction wait($E^k$) is blocking, and takes into account only notifications taking place after its execution: if the event $E^k$ is notified before the execution of a wait($E^k$) instruction, it will be ignored by this instruction. An important consequence is that a process can be waiting for at most one event (we currently do not consider waiting for conjunctions or disjunctions of events; see section IV-C).
For encoding events, we thus associate to each process $p$ a bounded integer $0 \leq W_p \leq N_e$ such that:
- $W_p = k$ when process $p$ waits for $E^k$;
- $W_p = 0$ when process $p$ is not waiting for an event and is eligible;
and we define the wait and notify instructions in Tab. I. We need for this encoding $N_p \log_2 (1 + N_e)$ bits.
**Time:** SystemC time management internally assumes a discrete time semantics, although in the API timed functions use floating-point durations. We thus assume that we have a specific construct wait($d$:int) to wait for the discrete duration $d$ to elapse.
For encoding time, we attach an internal deadline variable $T_p : int$ to each process $p$. It represents the next deadline for $p$ when $p$ is waiting, and the current date when $p$ is running. It is not necessary to examine the state of the process $p$ for each value of $T_p$, we only need to respect the schedulings allowed by the durations waited for by the processes. Consequently, we define the encoding wait($d$) in Tab. II:
- $T_p$ is incremented with $d$;
- $p$ becomes eligible if its deadline variable is the minimum of all deadline variables.
Alternatively, we could maintain a global clock $T_g$ equal to $\min_i(T_i)$ and replace the blocking condition by blocked($T_p == T_g$). The advantages and drawbacks of this option w.r.t. the efficiency of the verification process are hard to assess a priori.
**Interaction between time and events:** Events and time interact together, and things become subtle when some processes are waiting for events and others for a time duration. We propose the encoding given on table III, based on the following principles:
1. The value of a deadline variable $T_p$ is meaningful only if $W_p = 0$ (process $p$ is not waiting for an event). When a process is waiting for an event, $T_p$ is not updated. The main invariant thus becomes: “the deadline variable of a running or eligible process is the minimum of the deadline variables of processes not waiting for an event.”
2. Concerning the wait($d$) instruction, the blocked process becomes eligible as soon as its deadline variable is the minimum of deadline variables of processes not waiting for an event, according to principle 1).
3. When process $p$ notifies an event $E^k$, not only should the variables $W_i$ be reset (for processes $i$ waiting for $E^k$), but their deadline variables should also be updated to the current date (which is equal to the deadline variable $T_p$ of the running process $p$). This is because of principle (1): these deadline variables become meaningful again, and the invariant above should be maintained. This is important to make a sequence wait($E^k$); wait($d$) behave correctly in a process $p$.
Fig. 3 depicts the Promela code corresponding to the pseudo-code of Tab. III.
C. Discussion and Improvements
Our encoding implements in some way an asynchronous time semantics, as opposed to the synchronous time semantics of the timed automata used in tools like UPPAAL [11], in which clocks evolve synchronously. Our approach thus does not enable the use of these tools. Notice however that we hardcode in our approach the fact that we only need to know the next deadlines, and not all the possible intermediate values that a discrete synchronous clock would take between the current time and the next deadline. As a result, multiplying all the durations by a constant factor does not impact the size of the reachable state space with our encoding.
Finite-state model-checkers like SPIN [8] do not support unbounded deadline variables. However, it is easy to modify our encoding by exploiting the fact that two global states agreeing on the differences \( T_i - T_j \) between deadline variables are equivalent w.r.t. the synchronization primitives of Tab. III.
In the resulting relative time encoding, the invariant: “the minimum of the deadline variables of processes not waiting for an event is zero” is ensured by shifting accordingly those deadline variables in wait(d) instructions.
Implementing delayed notification on a single event could be done with the principles we followed in this section. This would require to add another deadline variable in each process. Implementing waiting for conjunction or disjunction of events would require the following modifications:
- The bounded integer variables $0 \leq W_p \leq N_e$ should be replaced by $N_e$ Boolean variables $W_{p,k}$, with $1 \leq k \leq N_e$ denoting the event $E^k$, because a process $p$ can now wait for a set of events.
- We should also add a Boolean variable per process to distinguish whether the process is waiting for a conjunction or a disjunction of events.
To sum up, our approach can easily model such constructs, at the cost of additional finite-state variables.
V. Validating the Encoding Principles
The encoding of SystemC primitives defined above may seem intuitively correct, but experience shows that concurrent systems are often faulty!
The ideal solution would be to prove that our encoding is correct for any program using it. Such a quantification over programs requires the use of a proof assistant, which is a very demanding task. It would require giving a formal semantics to SystemC (which implies C++) and to Promela, and proving that the two programs are equivalent.
The approach we have chosen is to construct a set of properties and to verify them on instances of the translation, in order to get confidence in the correctness of the encoding, just like certifying compilers [18] verify the result of each compilation. Those verifications were actually very useful, allowing us to detect bugs in several preliminary versions of our encoding.
We considered three invariants (see [14]): (i) the invariant stated in section IV-B; (ii) “If process $i$ notifies event $E^k$ for which process $j$ is waiting, then $T_i \geq T_j$”; (iii) “When a process $p$ waiting for an event is made eligible by a notifying process (line (7) of Fig. III), the deadline $T_p$ does not change until its election as the running process.” These can easily be translated to the relative-time setting discussed in section IV-C.
Two techniques were used to verify them with SPIN: direct assertions in the code, or a “monitoring” process for properties not related to a specific line number. This process only contains assertions, which can be detected as violated in the automata product performed by SPIN. As the examples we considered are deadlock-free, we also verified that the encoding does not introduce deadlocks (for instance, by scheduling processes in the wrong order).
The examples on which we checked these properties are the following. First, we experimented on an adaptation of the reader/writer problem in which two writers and one reader access a FIFO. Second, we considered a model of a communication between a Memory, a DMA, a bus and a CPU. Third, we considered the example used in a previous translation from SystemC to SPIN [22], described in the appendices of [14].
VI. Experiments and Efficiency of Our Encoding
The aim of the previous section was to check that our encoding actually reflects SystemC semantics. However, our motivation for the encoding we propose is to enable better performances of model-checkers, compared to other encoding approaches described in section III. We now compare experimentally the efficiency of our encoding w.r.t. model-checking with the encoding proposed in [22] applied to the same example.
A. A SystemC example
Our test model is the one used in [22] and detailed in [14]. It consists of a chain of modules. The first module triggers an interrupt in the next one. This interrupt notifies an event, allowing the module to trigger an interrupt in the next module, and so on. The last module contains an assertion which is either always false (bug) or always true (no-bug). The latter forces SPIN to compute the whole state space when checking for invalid assertions. While this program may seem artificial, it exhibits the characteristics found in more complex real-world models and leading to state explosion: many processes, synchronized by SystemC events, which can thus be lost depending on the execution order of the various statements. Such study allows to experiment on how the state space that needs to be explored grows depending on parameters. As this test model is untimed, we test here only the efficiency of the encoding of events.
B. Results
The results presented in Fig. 4 focus on the main parameter, which is the number of modules. The figure shows the number of states computed by SPIN during the model-checking of the example presented above.
These results show a reduction by a factor of about 10 compared to the previous results presented in [22]. The comparison between the two approaches in the case where there is no bug is shown in Figure 4. We can see that, with our encoding, SPIN is able to model check up to 21 processes, compared to 15 with the other approach.

**Fig. 4.** Experimental results of the two approaches
VII. Conclusion
We investigated the formalization of models of SoC in the form of asynchronous automata. We proposed an encoding of synchronization primitives related to events and time using shared variables and sequential instrumentation of processes. This choice contrasts with other approaches in which parallel instrumentation is used, under the form of an additional process modeling the SystemC scheduler added to the system. We ensured that the encoding principles are correct by verifying a number of invariants. The given principles are general and are applicable to different back-end languages.
We experimented on the SPIN model-checker, showing on a typical example that our encoding leads SPIN to explore ten times fewer states during model-checking of the encoded model, compared to an encoding based on parallel instrumentation. This confirms the conjecture we expressed in section III. In addition, the translation has been fully automated: our tool reads SystemC code directly, and generates Promela code without human intervention. Our results are thus due to our encoding and not to some specific optimizations. The tool can be downloaded freely from [http://gitorious.org/pinavm](http://gitorious.org/pinavm).
Besides experimenting with a wider set of case studies, we see at least two points to investigate in the future. First, we have yet to compare our time management to other approaches. We intend to compare this solution to approaches based on timed automata and relying on the UPPAAL [7] tool for model-checking, to validate our discussion in section IV-C on the asynchronous encoding of time in SystemC. A second perspective is to evaluate the relevance and the efficiency of static analysis tools such as CONCURINTERPROC [9] for checking safety properties of timed SystemC models.
References
# Contents
## 1 Prerequisites
- 1.1 Knowledge requirements
- 1.2 Getting the source code
- 1.2.1 Branches and Tags
- 1.3 Starting without installation
## 2 Directory structure
- 2.1 Top directory
- 2.1.1 maintenance script
- 2.1.2 perform-pylint
- 2.1.3 setup.py
- 2.1.4 cx_Freeze.py
- 2.1.5 copyright related
- 2.2 bin directory
- 2.2.1 ceed-gui
- 2.2.2 ceed-mic
- 2.2.3 ceed-migrate
- 2.2.4 runwrapper.sh
- 2.3 build directory
- 2.4 ceed directory
- 2.4.1 action subpackage
- 2.4.2 cegui subpackage
- 2.4.3 compatibility subpackage
- 2.4.4 editors subpackage
- 2.4.5 metaimageset subpackage
- 2.4.6 propertytree subpackage
- 2.4.7 settings subpackage
- 2.4.8 ui subpackage
- 2.5 data directory
- 2.6 doc directory
## 3 Core API
- 3.1 TabbedEditor
- 3.1.1 Responsibilities
- 3.1.2 Life cycle
- 3.1.3 Derived classes
- 3.2 Undo / Redo
- 3.2.1 Principles
- 3.2.2 Moving in the undo stack
- 3.3 Property editing
- 3.3.1 Usage
- 3.4 Settings API
- 3.5 Action API
- 3.6 Embedded CEGUI
- 3.6.1 PyCEGUI bindings
- 3.6.2 Shared CEGUI instance
- 3.7 Compatibility layers
- 3.7.1 Testing compatibility layers
- 3.8 Model View (Controller)
- 3.9 Qt designer .ui files
- 3.9.1 Compiling
Chapter 1
Prerequisites
1.1 Knowledge requirements
Because of size constraints, I will not cover the Python, PySide, Qt and CEGUI APIs.
1.2 Getting the source code
$ hg clone https://bitbucket.org/cegui/ceed
1.2.1 Branches and Tags
- default - unstable forward development, likely to be based on unstable CEGUI
- snapshotX - development snapshots, based on unstable CEGUI, should be considered tech previews
- *-devel - feature branches, are expected to be closed and merged into default at some point
1.3 Starting without installation
This section is UNIX only!
It is extremely valuable to be able to start the editor without installing it. You can do so by using the runwrapper.sh script in the repository. This script spawns a new shell with its environment set so that CEED finds its own modules and PyCEGUI. By default it assumes the following directory structure:
$prefix/CEED/bin/runwrapper.sh
$prefix/cegui/build/lib/PyCEGUI.so
If your directory structure looks different, you need to alter the script.
Chapter 2
Directory structure
2.1 Top directory
2.1.1 maintenance script
Provides means to compile Qt .ui files, build documentation, fetch newest CEGUI datafiles and make a tarball for CEED releases.
_maintenance-temp_ is a directory with various temporary data that maintenance script needs to run.
2.1.2 perform-pylint
Runs _pylint_ over the codebase; results will be stored in pylint-output. It is imperative to run this script, especially before releases, as it often uncovers nasty bugs. Even though _pyflakes_ has no helper script, you can run it as well; it requires no configuration files.
2.1.3 setup.py
Used to install CEED system-wide. Running python setup.py install as root will get the job done. Make sure you already have all the dependencies installed.
It can also be used to create tarballs, though the maintenance script may be better for that; see Section 2.1.1.
2.1.4 cx_Freeze.py
This is a setup.py script that is adapted for freezing the application into a bundle using cx_Freeze. The resulting bundle does not need any dependencies, not even _Python_. Tested on Windows 7 and GNU/Linux distros, both 32bit and 64bit.
Might need copying of some dependencies the script fails to pick up!
Please see the cx_Freeze documentation [4] for more information.
2.1.5 copyright related
Also includes the _AUTHORS_ file with CEED contributors and several _COPYING_ files of libraries we bundle in Windows and MacOS X builds.
2.2 bin directory
All contents are executable, these are entry points to various functionality of CEED.
2.2.1 ceed-gui
Starts the CEED interface. Provides several CLI options that may be very useful for development, especially auto-opening of projects and files after start; see ./ceed-gui --help.
2.2.2 ceed-mic
This is the CLI metaimageset compiler, see the _User manual_ for more info.
2.2.3 ceed-migrate
CLI interface to the compatibility machinery in CEED, can be useful for testing newly developed layers, see ./ceed-migrate --help for more info.
2.2.4 runwrapper.sh
Can be used to start CEED without having to install it, see Section 1.3 for more info.
2.3 build directory
Contains results of cx_Freeze build process, see Section 2.1.4 for more info.
2.4 ceed directory
This is where the bulk of the codebase resides. The directory is a Python package and none of its files should be executable.
2.4.1 action subpackage
Implements the Action API and defines basic global actions.
2.4.2 cegui subpackage
Wraps Embedded CEGUI (see Section 3.6 for more details). Also provides base classes for CEGUI widget manipulators and all the machinery that they require - GraphicsScene, GraphicsView, ...
2.4.3 compatibility subpackage
Implements the Compatibility API, contains implementations of all the stock Type Detectors and Compatibility Layers.
2.4.4 editors subpackage
This subpackage encapsulates all editing functionality within CEED. All classes that inherit from TabbedEditor except the convenience wrapper classes should be implemented inside this subpackage.
You can find implementation of imageset editing in the imageset subpackage, layout editing in the layout subpackage, ...
2.4.5 metaimageset subpackage
Classes required for metaimageset parsing, saving and compiling are implemented in this subpackage. This is what ceed-mic (see Section 2.2.2) uses internally to compile a metaimageset.
2.4.6 propertytree subpackage
UI to inspect and change properties of any class inheriting CEGUI::PropertySet.
2.4.7 settings subpackage
Implements the Settings API, defines basic global settings entries.
2.4.8 ui subpackage
Contains .ui files created using Qt Designer. The maintenance script is used to compile these into Python modules. See Section 3.9 for more info.
2.5 data directory
Contains icons, the splashscreen, stock property mappings, sample CEGUI datafiles and sample project files.
2.6 doc directory
Contains LyX source code for developer manual, quickstart guide and user manual. Also contains the PDF versions after ./maintenance build-docs has been executed (see Section 2.1.1).
Chapter 3
Core API
The whole code is divided into folders where the root folder provides basic reusable functionality (project management, undo view, tab management, …) and the editors themselves are providing editing facilities for various file types.
3.1 TabbedEditor
A base class for editors hosted in a tab. If you are writing new editing functionality for CEED you definitely need to inherit from this class.
3.1.1 Responsibilities
The most important part of a TabbedEditor is its widget. The widget represents the central part in Figure 3.1. TabbedEditors also often add toolbars, dock widgets and other elements.
3.1.2 Life cycle
Each tabbed editor goes through the following cycle:
1. Construction of the class
2. Initialisation
(a) all the supporting widgets get created
(b) the file is loaded and processed
3. Activation
(a) this puts the tabbed editor “on stage”
4. User interaction
5. Deactivation
6. Finalisation
(a) the editor is no longer shown in the interface
7. Destruction
(a) all held data and widgets are destructed
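The life cycle above can be sketched as a skeleton class. This is a hypothetical illustration; the class and method names below are not CEED's actual API:

```python
# Hypothetical sketch of a tabbed editor's life cycle; names are illustrative.
class TabbedEditorSketch:
    def __init__(self, filePath):
        # 1. construction: nothing heavy happens yet
        self.filePath = filePath
        self.initialised = False
        self.active = False
        self.data = None

    def initialise(self):
        # 2. supporting widgets get created, the file is loaded and processed
        self.data = "<parsed contents of %s>" % self.filePath
        self.initialised = True

    def activate(self):
        # 3. the editor is put "on stage"
        assert self.initialised, "must initialise before activating"
        self.active = True

    def deactivate(self):
        # 5. the editor leaves the stage (it can be activated again later)
        self.active = False

    def finalise(self):
        # 6./7. held data and widgets are released before destruction
        self.data = None
        self.initialised = False

editor = TabbedEditorSketch("sample.layout")
editor.initialise()
editor.activate()
# ... user interaction (step 4) happens here ...
editor.deactivate()
editor.finalise()
```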
3.1.3 Derived classes
To avoid repeating code and adhere to the DRY principle [1], there are two very important classes that add functionality to TabbedEditor; inherit from them where applicable to avoid reinventing the wheel.
**UndoStackTabbedEditor**
Very useful in case you are already using the Qt’s UndoStack. This connects all the necessary calls and exposes undo and redo of the undo stack to the rest of the application.
**MultiModeTabbedEditor**
Useful when you want multiple editing modes. As an example, let us take the layout editor. It has three modes - visual, code and live preview. You can freely switch between them and they each offer a different look at the same data. At any point in time you are viewing/editing in one mode only. Please note that you must be using UndoStack in this situation as switching modes is an undo action.
Each mode has its own life cycle and depends on the life cycle of its host tabbed editor. First the tabbed editor gets on “the stage” and then the editor’s mode is asked to activate itself.
```
# the host tabbed editor gets constructed and activated
A.deactivate()
B.activate()
# the user merrily edits in the B edit mode
```
Figure 3.2: process of switching from edit mode A to B
The actual mode switch process is a bit more involved because of the necessity to make mode switch an undoable action. You can see the full implementation of it in `ceed.editors.multi.MultiModeTabbedEditor.slot_currentChanged`.
### 3.2 Undo / Redo
One of the cornerstones of CEED is the ability to undo everything. This is implemented using Qt’s QUndoCommand class. Each TabbedEditor has its own independent undo stack, undo commands are never shared across editors.
#### 3.2.1 Principles
- everything that changes data has to be an UndoCommand
- all data that undo command stores in itself must be “independent”, storing references to widgets would not work if there is a DestroyCommand that invalidates them
- state switching that would make some undo commands not applicable have to be undo commands themselves
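The "independent data" principle can be sketched with a minimal stand-in for a Qt QUndoCommand: the command stores a widget *path* and plain values, never a reference to the widget object itself (the names below are hypothetical, not CEED's actual commands):

```python
# Minimal stand-in for a QUndoCommand; stores only independent data.
class MoveCommand:
    def __init__(self, widgetPath, oldPos, newPos):
        self.widgetPath = widgetPath   # string path survives widget destruction
        self.oldPos = oldPos
        self.newPos = newPos

    def redo(self, hierarchy):
        hierarchy[self.widgetPath] = self.newPos

    def undo(self, hierarchy):
        hierarchy[self.widgetPath] = self.oldPos

# the hierarchy is looked up by path when the command runs, not captured
widgets = {"Root/Button": (0, 0)}
cmd = MoveCommand("Root/Button", (0, 0), (10, 20))
cmd.redo(widgets)
print(widgets["Root/Button"])  # (10, 20)
cmd.undo(widgets)
print(widgets["Root/Button"])  # (0, 0)
```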
Figure 3.3: example of an undo stack
3.2.2 Moving in the undo stack
Let us consider the undo stack shown in Figure 3.3. If user clicks the <empty> line, all the undo commands will get .undo() called in the bottom-up order. If now the user clicks the Move 'ButtonPushedFill2' line again, the commands will get .redo() called in the top-down order. It is important to notice that the undo commands are always acted upon sequentially and that order of the calls matter! Some of the commands might not even make any sense if they are called out of order. Consider a Create Image 'XYZ' command followed by Move 'XYZ'. They need to be acted upon in the right order otherwise the Move command is asked to move a non-existent image.
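The sequential traversal described above can be sketched as follows (a simplified stand-in for Qt's undo stack; method names mirror the idea, not the real API):

```python
# Sketch of sequential undo-stack traversal: moving to an earlier index
# undoes commands newest-first; moving forward redoes them oldest-first.
class UndoStackSketch:
    def __init__(self):
        self.commands = []
        self.index = 0          # number of commands currently applied
        self.log = []

    def push(self, name):
        self.commands.append(name)
        self.index = len(self.commands)

    def setIndex(self, target):
        while self.index > target:              # undo, newest first
            self.index -= 1
            self.log.append("undo " + self.commands[self.index])
        while self.index < target:              # redo, oldest first
            self.log.append("redo " + self.commands[self.index])
            self.index += 1

stack = UndoStackSketch()
for name in ["Create 'XYZ'", "Move 'XYZ'"]:
    stack.push(name)
stack.setIndex(0)   # click <empty>: undo Move, then undo Create
stack.setIndex(2)   # click the last line: redo Create, then redo Move
print(stack.log)
# ["undo Move 'XYZ'", "undo Create 'XYZ'", "redo Create 'XYZ'", "redo Move 'XYZ'"]
```

Out-of-order calls would ask Move 'XYZ' to act on a not-yet-created image, which is exactly why the stack enforces this ordering.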
3.3 Property editing
A lot of CEGUI classes provide basic introspection via property strings. CEED has a set of classes to reuse when you want to edit properties of widgets or any other classes that inherit from PropertySet.
3.3.1 Usage
Even though the propertytree subpackage (see Section 2.4.6) gives you access to its very internals and allows very advanced uses, including using it on classes that do not even inherit from the CEGUI::PropertySet, only the basic usage scenarios will be discussed in this document.
```python
from ceed import propertysetinspector
from ceed import propertytree
from ceed import mainwindow
# parent is a QWidget and can be None
inspector = propertysetinspector.PropertyInspectorWidget(parent)
inspector.ptree.setupRegistry(propertytree.editors.PropertyEditorRegistry(True))
pmap = mainwindow.MainWindow.instance.project.propertyMap
inspector.setPropertyManager(propertysetinspector.CEGUIPropertyManager(pmap))
```
Figure 3.4: creating a property inspector widget
```python
# inspector is a property inspector as created previously
inspector.setPropertySets([propertySetToInspect])
```
Figure 3.5: inspecting a PropertySet using a property inspector
3.4 Settings API
Whenever you want users to be able to change some value to affect behavior of the application, consider using the Settings API. You only need to define the settings entry and the UI that allows changing it will be auto-generated for you.
```python
category = settings.createCategory(name = "layout", label = "Layout editing")
visual = category.createSection(name = "visual", label = "Visual editing")
visual.createEntry(name = "continuous_rendering",
type = bool,
label = "Continuous rendering",
help = "Check this if you are experiencing redraw issues...",
defaultValue = False, widgetHint = "checkbox",
sortingWeight = -1)
```
Figure 3.6: defining a settings entry
It is recommended to query the settings entry once and keep the reference stored to avoid having to look it up frequently.
```python
entry = settings.getEntry("layout/visual/continuous_rendering")
# entry is a reference to SettingsEntry class
# we get the fresh value whenever we use entry.value later in the code
print("Continuous rendering is %s" % ("on" if entry.value else "off"))
```
3.5 Action API
Whenever there is an action needed you are advised to use the action API, see ceed.action module. The actions inherit from QAction and offer the same functionality but shortcuts are handled automatically for the developer, including UI for the user to remap them.
To use the Action API you have to define your actions first, this is usually done in a separate file to keep things clean. See editors/imageset/action_decl.py and editors/layout/action_decl.py. Then you query for this action in your code and connect your signals to it. You can use the convenience ConnectionMap to ease mass connects and disconnects.
```python
cat.createAction(
name = "align_hleft",
label = "Align &Left (horizontally)",
help = "Sets horizontal alignment of all selected widgets to left.",
icon = QtGui.QIcon("icons/layout_editing/align_hleft.png"))
cat.createAction(
name = "snap_grid",
label = "Snap to &Grid",
help = "When resizing and moving widgets, if checked this makes sure...",
icon = QtGui.QIcon("icons/layout_editing/snap_grid.png"),
defaultShortcut = QtGui.QKeySequence(QtCore.Qt.Key_Space)).setCheckable(True)
```
You can check the shortcut remap UI generated for you in Settings » Shortcuts.
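The declare-then-query pattern can be illustrated with a plain-Python stand-in. The real actions are QActions connected via Qt signals; the registry and names below are hypothetical simplifications:

```python
# Plain-Python stand-in for declaring an action, then querying and
# connecting to it from editor code (names are hypothetical).
class ActionCategory:
    def __init__(self):
        self._actions = {}

    def createAction(self, name, label, help = "", icon = None):
        self._actions[name] = {"label": label, "help": help,
                               "icon": icon, "callbacks": []}
        return self._actions[name]

    def getAction(self, name):
        return self._actions[name]

# declaration, normally kept in a separate action_decl.py-style file
cat = ActionCategory()
cat.createAction(name = "align_hleft", label = "Align &Left (horizontally)")

# later, editor code queries the declared action and connects a handler
fired = []
action = cat.getAction("align_hleft")
action["callbacks"].append(lambda: fired.append("align_hleft triggered"))
for callback in action["callbacks"]:   # stand-in for the Qt signal emission
    callback()
print(fired)  # ['align_hleft triggered']
```

Keeping declaration and connection separate is what lets the shortcut-remap UI enumerate all actions without importing editor code.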
3.6 Embedded CEGUI
To make sure everything is rendered exactly as it will appear in CEGUI, CEGUI itself is embedded in the editor. This also ensures that whatever custom assets you have, they will be usable in the editor exactly as they are in CEGUI itself.
3.6.1 PyCEGUI bindings
As CEGUI is a C++ library, making it accessible from Python is not trivial. I have written python bindings for CEGUI called PyCEGUI using py++ and boost::python for this purpose. It is important to realise though that even though I tried to make it pythonic and reasonably safe, mistreating PyCEGUI can still cause segfaults and other phenomena usually prevented by using a scripting language.
3.6.2 Shared CEGUI instance
There is only one CEGUI instance in CEED. This makes tabbed editor switches slightly slower but CEED uses less memory. The main reason for this design decision is that CEGUI did not have multiple GUI contexts at the time CEED was being designed.
Furthermore, the shared instance is wrapped in a "container widget" which provides convenience wrappers. That way developer can avoid dealing with OpenGL and QGLWidget directly.
```python
ceguiContainerWidget = mainwindow.MainWindow.instance.ceguiContainerWidget
ceguiContainerWidget.activate(parentWidget, self.scene)
ceguiContainerWidget.setViewFeatures(wheelZoom = True, continuousRendering = True)
# you can then use CEGUI directly through PyCEGUI, the result will be rendered
# to the host widget specified previously
PyCEGUI.System.getSingleton().getDefaultGUIContext().setRootWindow(self.rootPreviewWidget)
# ... rendering, interaction, etc.
# after your work is done, deactivate the container widget
ceguiContainerWidget.deactivate(self.ceguiPreview)
```
Figure 3.9: accessing and using the CEGUI instance
Always clean up!
The CEGUI container widget is shared, therefore the whole CEGUI instance and the default GUIContext are shared. CEGUI resources are not garbage collected, they are created in the C++ world and have to have their life cycles managed manually. Make sure you always destroy all your widgets and other resources after use. They will not get cleaned up until the whole editor is closed!
Beware of name clashes!
Because the CEGUI instance is shared there can be name clashes for many resources - images, animation definitions, ... A good way to circumvent this is to generate unique names with an integer suffix and hide the fact from the user. This is what the Animation list editor does internally; for more details see ceed.editors.animation_list.
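The integer-suffix workaround can be sketched like this (the helper name is hypothetical, not CEED's actual function):

```python
# Generate a name that does not clash with existing resource names by
# appending an increasing integer suffix (illustrative helper).
def uniqueName(base, existing):
    if base not in existing:
        return base
    suffix = 1
    while "%s%d" % (base, suffix) in existing:
        suffix += 1
    return "%s%d" % (base, suffix)

taken = {"FadeIn", "FadeIn1"}
print(uniqueName("FadeIn", taken))   # FadeIn2
print(uniqueName("FadeOut", taken))  # FadeOut
```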
3.7 Compatibility layers
Compatibility is only dealt with on data level. The editor itself only supports one version of each format and layers allow to convert this raw data to other formats. Here is an example of how to do that:
```python
# we want to migrate an imageset from data format "foo" to "bar"
# data is a string containing the imageset in "foo" format
from ceed.compatibility import imageset as compat
convertedData = compat.manager.transform("foo", "bar", data)
```
There are also facilities to guess types of arbitrary data. See API reference of CompatibilityManager for more info.
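A minimal sketch of a layer-based manager may clarify the idea: each layer converts between two format versions, and transform() dispatches to the right layer. The class and method names below are simplified assumptions, not CEED's actual CompatibilityManager:

```python
# Minimal layer-based compatibility manager sketch (names are hypothetical).
class Manager:
    def __init__(self):
        self.layers = {}                 # (sourceType, targetType) -> function

    def addLayer(self, src, dst, func):
        self.layers[(src, dst)] = func

    def transform(self, src, dst, data):
        if src == dst:
            return data
        # single hop for simplicity; a real manager searches a path of layers
        return self.layers[(src, dst)](data)

manager = Manager()
manager.addLayer("foo", "bar", lambda data: data.replace("<Foo", "<Bar"))
print(manager.transform("foo", "bar", "<Foo version='1'/>"))
# <Bar version='1'/>
```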
3.7.1 Testing compatibility layers
Running the GUI and loading files manually by clicking is not practical for compatibility layer development and testing. Use the ceed-migrate executable instead. See Section 2.2.3.
3.8 Model View (Controller)
Like most editing applications, CEED uses the MVC paradigm [3]. When I say something is the model, I mean that it encapsulates and contains the data we are editing. The view, on the other hand, encapsulates the facility to view the data we are editing in their current state. The controller allows the user to interact with the data. Most of the time the view meshes with the controller, as it does in the Qt world, so we use one class instance for both view and control.
Separating model from view helps make the code more maintainable and cleaner. It also makes undo command implementation easier.
3.9 Qt designer .ui files
Qt designer allows RAD so it pays off to keep as much GUI layout in .ui files as possible. Whenever you are creating a new interface, consider creating it with the Qt designer instead of coding it manually.
3.9.1 Compiling
The files have to be compiled into Python modules.
Development mode
The preferred method if you want to continuously develop CEED. Allows automatic recompilation of all ui files.
```
$ vim ceed/version.py
# make sure the DEVELOPER_MODE line is set to True
```
Figure 3.10: turning the developer mode on
Maintenance script
If you only want to compile the ui files rarely you are better off with the maintenance script. See Section 2.1.1.
```
./maintenance compile-ui-files
```
Figure 3.11: recompiling ui files via the maintenance script
Chapter 4
Editing implementation
4.1 Imageset editing
Lives in the `ceed.editors.imageset` package. Provides editing functionality for CEGUI imagesets. Please see the CEGUI imageset format documentation [2] for more details about the format.
4.1.1 Data model
Classes from the `ceed.editors.imageset.elements` package are used to model the data instead of using CEGUI in this editor. The reason is relative simplicity of the data and big changes to the image API between CEGUI 0.7 and 0.8. Compatibility layers are used to convert given data to the native format before they are loaded into the data model. See Section 3.7 for more details.
4.1.2 Undo data
Undo data are implemented using strings for image definition references and Python's builtin types to remember geometry.
4.1.3 Multiple modes
It is a multi-mode editor with visual and code modes. The code mode always uses and displays native CEGUI data.
4.1.4 Copy / Paste
Copy paste is implemented using custom MIME type and bytestreams. It is even possible to copy image definitions across editor instances.
4.2 Layout editing
Located in the `ceed.editors.layout` package. CEGUI Window is used to model the entire layout hierarchy. We use WidgetManipulator class to add serialisation (for undo/redo), resizing handles and more to windows. It is a multimode editor with visual, code and live preview modes. The live preview mode does no editing, instead it just views the current layout and allows user to interact with it to test it.
4.2.1 Data model
Layout editing operates on widget hierarchies, a data model natively implemented in CEGUI that we use directly. Since CEGUI no longer has global window names as of version 0.8, we do not even have to worry about name clashes.
4.2.2 Undo data
Undo data are implemented using strings for widget path reference and widget properties are serialised using Python’s builtin types.
LookNFeel property caveat
When you change the LookNFeel property the auto child widgets get destroyed and constructed anew. This breaks undo history and is not allowed at the moment. I don't think it is worth the effort to support this; either way we would have to “alter history” in some cases. Changing it in code mode will of course work because the entire hierarchy is reconstructed from scratch.
WindowRenderer property caveat
Similar to the LookNFeel case it makes changes to the window that break undo history. Right now it is disallowed to change it from the editor. Changing it in code mode will of course work because the entire hierarchy will be reconstructed from scratch.
4.2.3 Multiple modes
Visual, Code and Live preview modes are provided. Code is a simple XML editing mode but the other two are implemented using embedded CEGUI.
4.2.4 Copy / Paste
Copy paste is implemented using custom MIME type and bytestreams. It is even possible to copy widget hierarchies across editor instances.
4.3 Animation editing
Located in `ceed.editors.animation_list` package. We use wrappers to deal with the fact that CEGUI has no model for a list of animations.
KeyFrames had to have indices added because comparing floats for equality is unreliable. So in the end we sort all keyframes by position and figure out their indices from that. To avoid placing two keyframes at the exact same position we add a small epsilon until we have no clashes whenever we encounter this possibility.
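The sort-and-nudge approach can be sketched as follows (the function name and epsilon value are illustrative, not CEED's actual code):

```python
# Sort keyframe positions, derive indices from the sorted order, and nudge
# exact duplicates apart by a small epsilon so no two share a position.
EPSILON = 1e-6

def assignIndices(positions):
    deduped = []
    for pos in sorted(positions):
        while deduped and pos <= deduped[-1]:
            pos = deduped[-1] + EPSILON   # never two keyframes at one position
        deduped.append(pos)
    # a keyframe's index is simply its rank in the sorted, deduped list
    return list(enumerate(deduped))

indexed = assignIndices([0.5, 0.0, 0.5])
print(indexed)
```

Comparing ranks instead of raw float positions sidesteps the float-equality problem entirely.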
Chapter 5
Contributing
5.1 Coding style
CEED does not follow the PEP8 style recommendation when it comes to method and variable naming. The reason I chose to use camelCase for methods and variables is that PySide and CEGUI both use that, and CEED calls a lot of methods from these two APIs. The code looked much better with camelCase naming.
Use the following rules for all contributed code to CEED:
- use 4 spaces for indentation
- use CamelCase for class naming
- do not use wildcard imports
- use camelCase for method and variable naming
- document methods and classes with the triple quote docstyle syntax
- comment all other things with # prefix only
5.2 Communication channels
You can reach the CEGUI team using:
- IRC: #cegui on irc.freenode.net
- email: team@cegui.org.uk
5.3 DVCS - forking
Create a fork of https://bitbucket.org/cegui/ceed on http://bitbucket.org or elsewhere. Start each feature or substantial fix in a separate branch; this makes it easy to review and possibly reject some parts without rejecting everything. When you are finished with your branch, make sure you merge all upstream changes, if any. Having to deal with merge conflicts makes the reviewers more likely to postpone integration. After all of this is done, simply contact an upstream developer to merge your changes into the main repository. You can usually reach someone through IRC (freenode/#cegui), the mantis bug tracker or email (team@cegui.org.uk).
5.4 The old fashioned way - patches
You can alternatively just send unified diff patches by email if you so desire. Use the team@cegui.org.uk email address. Make sure you state what the patchset is based on.
---
1 from package import * cannot appear anywhere in the code.
2 See http://freenode.net for more information about the network.
Bibliography
|
A New Multiple-Pattern Matching Algorithm for the Network Intrusion Detection System
Nguyen Le Dang, Dac-Nhuong Le, and Vinh Trong Le
Abstract—String matching algorithms are essential for network application devices that filter packets and flows based on their payload. Applications like intrusion detection/prevention, web filtering, anti-virus, and anti-spam all raise the demand for efficient algorithms dealing with string matching. In this paper, we present a new algorithm for multiple-pattern exact matching. Our approach reduces character comparisons and memory space based on graph transition structure and search technique using dynamic linked list. Theoretical analysis and experimental results, when compared with previously known pattern-matching algorithms, show that our algorithm is highly efficient in both space and time.
Index Terms—Pattern matching, multi-pattern matching, network intrusion detection system.
I. INTRODUCTION
String matching algorithms in software applications like virus scanners (anti-virus) or intrusion detection systems are commonly used for improving data security over the internet [1]. String-matching techniques are used for sequence analysis, gene finding, evolutionary biology studies and analysis of protein expression. Other fields, such as music technology, computational linguistics, artificial intelligence and artificial vision, have been using string matching algorithms as an integral part of their theoretical and practical tools. Various string matching problems have appeared as a result of such continuous, exhaustive use, and they have in turn been addressed by computer scientists.
Many good solutions have been presented for exact string matching of multiple patterns, such as the Aho-Corasick, Commentz-Walter, Navarro-Raffinot, Rabin-Karp, and Muth-Manber algorithms and their variations [2]. However, most of the earlier algorithms were designed for pattern sets of moderate size, i.e. a few dozen patterns, and unfortunately they do not scale very well to larger pattern sets. The multi-pattern matching problem has many applications. It is used in data filtering (data mining) to find selected patterns, for example in anti-virus scanning, intrusion detection, content scanning and filtering, and specific data mining problems.
A. Multiple Pattern Matching Problem
String matching is a technique to find occurrences of patterns in a given text. Let $P = \{p_1, p_2, ..., p_m\}$ be a set of patterns, which are strings of characters from a fixed alphabet. Let $T = t_1 t_2 ... t_n$ be a large text, again consisting of characters from the same alphabet. The problem is to find all occurrences of all the patterns of $P$ in $T$. The text $T$ is a string of $n$ characters drawn from the alphabet $\Sigma$ (of size $\sigma$). The pattern set $P$ consists of $m$ patterns, each of which is a string of characters over the alphabet $\Sigma$. For simplicity we assume that all patterns have the same length. We are especially interested in searching for large pattern sets. For example, the UNIX fgrep and egrep programs support multi-pattern matching through the -f option [3]-[6].
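As a baseline for the definitions above, the problem can be solved naively by testing every pattern at every text position. The following is an illustrative Python sketch (not from the paper; the names are ours):

```python
def naive_multi_match(text, patterns):
    """Report (position, pattern) for every occurrence of any pattern in text."""
    matches = []
    for i in range(len(text)):
        for p in patterns:
            if text.startswith(p, i):  # compare p against text[i : i + len(p)]
                matches.append((i, p))
    return matches
```

This baseline re-scans every pattern at every text position, which is exactly the cost the algorithms surveyed below are designed to avoid.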
Pattern matching algorithms have two main objectives: reduce the number of character comparisons and reduce the time requirement in the worst and average case analysis. Most of the algorithms operate in two stages. The first stage is a preprocessing of the set of patterns. Applications that use a fixed set of patterns for many searches may benefit from saving the preprocessing results in a file (or even in memory). This step is quite efficient and in most cases it can be done on the fly. The second stage is searching phase to find the pattern by the information collected in the pre-processing stage.
B. Single and Multiple Pattern Matching
In the standard problem, we are required to find all occurrences of a single pattern in a given input text; this is known as single pattern matching. If more than one pattern is matched against the input text simultaneously, the problem is known as multiple pattern matching. While single pattern matching is widely used in network security environments, multiple pattern matching algorithms can search for several patterns in a text at the same time; they have high performance and good practicability, and are more useful than single pattern matching algorithms.
C. Exact and Inexact Pattern Matching
Exact pattern matching algorithms lead to either a successful or an unsuccessful search. The problem can be stated as: given a pattern $P$ of length $m$ and a string/text $T$ of length $n$ ($m \leq n$), find all the occurrences of $P$ in $T$. The matching is exact, which means that the exact word or pattern needs to be found. Some exact matching algorithms are Naïve Brute-force, Boyer-Moore, and KMP [1]. Inexact pattern matching is sometimes referred to as approximate pattern matching or matching with $k$ mismatches/differences. This problem in general can be stated as: given a pattern $P$ of length $m$ and a string/text $T$ of length $n$ ($m \leq n$), find all the occurrences of substrings $X$ in $T$ that are similar to $P$, allowing a limited number, say $k$, of different characters in similar matches. The edit/transformation operations are insertion, deletion and substitution. Approximate string matching algorithms are classified into dynamic programming, automata, bit-parallelism, and filtering approaches. Inexact sequence data arises in various fields and applications such as computational biology, signal processing and text processing.
D. Pattern Matching for NIDS
Matching patterns in a NIDS (Network Intrusion Detection System) is a problem more specialized than the general patterns matching problem. In the context of signature matching in a NIDS the signature database corresponds to the pattern set and the network packets, which the system scans, correspond to the text input for a pattern matching algorithm. Pattern-matching problems in NIDS have several different forms as follows:
1) Searching for Large Sets of Patterns: the number of known intrusions is growing and is almost surely to continue to do so. This growth was observed in the past in the rapid expansion of the size of the signature database for the Snort NIDS [16].
2) Searching With a Large Alphabet Size: NIDSs’ input and signatures have no restrictions on the alphabet. In short, any byte of input can contain any of the 256 possible values, and hence we are dealing with an alphabet of size 256. With respect to most string matching literature this is a large alphabet. Typical alphabet sizes considered in string matching literature are: 4, for DNA/RNA sequences; 52, for the English dictionary; or 128 for ASCII. However, it may be used to search for binary patterns in network packets resulting in requiring them to work on a larger alphabet of size 256.
3) Searching With a Wide Range of Pattern Lengths: The lengths of individual keywords within a keyword set can have great consequences on the performance and memory requirements of an algorithm used for matching. A requirement of a NIDS signature matching is that the algorithm must be capable of handling patterns of various lengths.
In this paper, we present a new algorithm for multiple-pattern exact matching. The paper is organized as follows. Section II surveys on the most significant algorithms for multiple pattern matching algorithms like Aho-Corasick, Commentz Walter, and Wu Manber. Section III presents our proposed algorithm. In Section IV, a comparative study of various algorithms is described. Finally, Section V is for conclusion and our further works.
II. RELATED WORKS
A. Aho-Corasick Algorithm (AC)
The Aho-Corasick [2] algorithm was proposed in 1975 at Bell Labs by Alfred Aho and Margaret Corasick. It is an extension of the KMP algorithm and remains, to this day, one of the most effective pattern matching algorithms for matching pattern sets. The idea of the AC algorithm is that a finite automaton is constructed from the set of keywords during the pre-computation phase, and the matching involves the automaton scanning the input text string, reading every character of the input exactly once and taking constant time for each character read.
Initially, the AC algorithm combines all the patterns in a set into a syntax tree which is then converted into a non-deterministic automaton (NFA) and, finally, into a deterministic automaton (DFA). The resulting finite state machine is then used to process the text one character at a time, performing one state transition for every text character. A pattern in patterns set $P$ has matched whenever the finite state machine reaches designated "final" states. The pseudo-code for the matching phase of the AC algorithm is given by Algorithm 1.
Building the AC automaton takes running time linear in the sum of the lengths of all keywords. This involves constructing a keyword tree for the set of keywords and then converting the tree to an automaton by defining the functions $g$ and $f$ and labeling states in $A$ with the keyword(s) matched. The space or memory requirements of the AC algorithm can be taken directly from the automaton built during the pre-computation because it is the only structure used in the matching. Unfortunately the space can be quite large depending on the alphabet and keyword set. In the worst case it would be $O(M|\Sigma|)$ where $|\Sigma|$ is the size of the alphabet $\Sigma$.
\begin{algorithm}
1. procedure AC($y$, $n$, $g_0$)
   Input:
     $y$ --- array of $n$ bytes representing the text input
     $n$ --- integer representing the text length
     $g_0$ --- initial state
2. $\text{state} \leftarrow g_0$
3. for $i = 1 \rightarrow n$ do
4.   while $g(\text{state}, y[i]) = \text{fail}$ do
5.     $\text{state} \leftarrow f(\text{state})$
6.   end while
7.   $\text{state} \leftarrow g(\text{state}, y[i])$
8.   if $\text{out}(\text{state}) \neq \emptyset$ then
9.     output $i$
10.  end if
11. end for
12. end procedure
\end{algorithm}
Once the automaton is built, the matching is straightforward and simply involves stepping through the input characters one at a time and changing the state of the automaton, which happens in constant time. At every step we check if there is a match by observing whether the current state is an accepting state. Using this simple functionality the AC matcher always operates in $O(n)$ running time, where $n$ is the length of the text, regardless of the number of patterns or their length. The AC algorithm has the significant advantage that every text character is examined only once. A major disadvantage of the AC algorithm is the high memory cost required to store the transition rules of the underlying DFA.
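To make the matching phase concrete, here is a hedged Python sketch of a simplified Aho-Corasick matcher, including the goto/failure construction described above; all names are ours, not the paper's:

```python
from collections import deque

def build_ac(patterns):
    """Build goto (trie), failure links and output sets for Aho-Corasick."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                       # trie (goto function) construction
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())              # BFS to set failure links
    while q:
        s = q.popleft()
        for c, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and c not in goto[f]:    # walk failure chain of the parent
                f = fail[f]
            fail[t] = goto[f].get(c, 0)
            out[t] |= out[fail[t]]           # inherit matches ending at the fail state
    return goto, fail, out

def ac_search(text, goto, fail, out):
    """Scan the text once, reporting (end_position, pattern) pairs."""
    s, matches = 0, []
    for i, c in enumerate(text):
        while s and c not in goto[s]:
            s = fail[s]                      # follow the failure function on mismatch
        s = goto[s].get(c, 0)
        for p in out[s]:
            matches.append((i, p))
    return matches
```

`ac_search` visits each text character exactly once, matching the $O(n)$ behavior described above.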
B. Commentz Walter Algorithm (CW)
The popular GNU fgrep utility uses the CW [3] algorithm for multiple string search. CW algorithm combines the Boyer-Moore technique with the AC algorithm. In preprocessing stage, differing from AC algorithm, CW algorithm constructs a converse state machine from the patterns to be matched. Each pattern to be matched adds states to the machine, starting from the right side and going to
the first character of the pattern, combining identical nodes. In the searching stage, the CW algorithm uses the idea of the Boyer-Moore algorithm. The length of the matching window is the minimum pattern length. Within the matching window, CW scans the characters of the pattern from right to left beginning with the rightmost one. In case of a mismatch (or a complete match of the whole pattern) it uses a precomputed shift table to shift the window to the right. The algorithm thus compares from the end of the pattern, like Boyer-Moore, while using a finite state machine, like AC. The CW algorithm is a string searching algorithm invented by Beate Commentz-Walter. Like the AC string matching algorithm, it can search for multiple patterns at once. The pseudo-code for the CW algorithm is given below:
```
Algorithm 2 Commentz-Walter Algorithm
1:  procedure CW(y, n, m, p, root)
    Input:
      y    --- array of n bytes representing the text input
      n    --- integer representing the text length
      m    --- array of keyword lengths m[0..p-1]
      p    --- number of keywords
      root --- root node of the trie of reversed keywords
2:  i ← min{m[0], m[1], ..., m[p − 1]}
3:  while i ≤ n do
4:    v ← root; j ← 0
5:    while v has a child v′ labeled y[i − j] do
6:      v ← v′
7:      j ← j + 1
8:      if out(v) ≠ ∅ then
9:        output i − j + 1
10:     end if
11:   end while
12:   i ← i + shift(v, j)
13: end while
14: end procedure
```
CW also noted that the quadratic \((O(n \cdot m))\) worst-case running time of the Boyer-Moore algorithm could be improved to be linear in \(n\). CW thus derived two different algorithms, called B and B1, which have quadratic \((O(n \cdot \max\{m[0], m[1], ..., m[p-1]\}))\) and linear \((O(n))\) worst-case running times respectively. Algorithm B is the simpler of the two and has a simpler pre-computation phase than B1. Furthermore, B1 uses more memory during pre-computation and search than B, by remembering the input text bytes that were already scanned. Both algorithms have a pre-computation phase that is linear in the total length of all keywords, \(O(M)\), and both achieve sublinear running times on average, which can be as good as \(O(n/\min\{m[0], m[1], ..., m[p-1]\})\) in the best case. In CW's algorithm B [3] some substrings of the input text \(y\) are scanned over and over in the worst case, which leads to the quadratic running time of the matching phase. Algorithm B1 [4] uses the exact same trie as algorithm B; however, in order to reduce the worst-case matching time to linear in \(n\), it uses a stack that remembers the characters of the input that have just been scanned. The size of the stack could, in theory, grow large, but fortunately only the last \(w_{max}\) entries (where \(w_{max}\) is the length of the longest pattern) are needed. This means the memory requirement during matching is still proportional to the pattern set size \(M\).
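The right-to-left scan inside a trie of reversed keywords can be sketched as follows. This simplified Python version always shifts the window by one (CW's shift tables are omitted for brevity), so it illustrates the scan order rather than CW's sublinear skipping; names are ours:

```python
END = object()  # sentinel key marking "a keyword ends at this trie node"

def build_reversed_trie(patterns):
    """Trie over the reversed keywords, as in CW's pre-computation."""
    root = {}
    for p in patterns:
        node = root
        for c in reversed(p):
            node = node.setdefault(c, {})
        node[END] = p                        # path root..node spells p backwards
    return root

def cw_like_search(text, patterns):
    """Scan each window right-to-left through the reversed-keyword trie."""
    root = build_reversed_trie(patterns)
    minlen = min(len(p) for p in patterns)   # window length = minimum pattern length
    matches = []
    i = minlen - 1                           # index of the window's rightmost character
    while i < len(text):
        node, j = root, 0
        while i - j >= 0 and text[i - j] in node:
            node = node[text[i - j]]
            j += 1
            if END in node:                  # a full (reversed) keyword was traversed
                matches.append((i - j + 1, node[END]))
        i += 1                               # conservative shift (real CW shifts further)
    return matches
```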
C. Wu-Manber Algorithm (WM)
Wu and Manber created the UNIX tool agrep [5] to search for many patterns in files. The Wu-Manber algorithm extends BM to search multiple strings concurrently. Instead of using the bad character heuristic to compute the shift value, WM uses a character block of 2 or 3 characters. WM stores the shift values of these blocks in a SHIFT table and builds a HASH table to link the blocks to the related patterns. The SHIFT table and the HASH table are both hash tables, which enables efficient search. Moreover, in order to further speed up the algorithm, WM also builds another hash table, the PREFIX table, with the two-byte prefixes of the patterns. This algorithm has excellent average time performance in practical usage. However, its performance is limited by the minimum pattern length \(m\), since the maximum shift value in the SHIFT table equals \(m - 1\) [6]. Furthermore, when the pattern set is comparatively large, the average shift value in the WM algorithm decreases and the searching performance is thus compromised.
The pseudo-code for the matching phase of the WM algorithm is given below:
```
Algorithm 3 Wu and Manber Algorithm
1:  procedure WM(y, n, B, SHIFT, HASH, PREFIX, PATPOINT)
    Input:
      y        --- array of n bytes representing the text input
      n        --- integer representing the text length
      B        --- integer representing the block length
      SHIFT    --- shift table (see description above)
      HASH     --- hash table (see description above)
      PREFIX   --- prefix table (see description above)
      PATPOINT --- table of pointers to the keywords
2:  m ← min{length of all keywords}
3:  i ← m
4:  while i ≤ n do
5:    shift ← SHIFT[h1(y[i − B + 1 .. i])]
6:    if shift = 0 then                       ▷ suffix block matches
7:      for each keyword x in HASH[h1(y[i − B + 1 .. i])] do
8:        if PREFIX of x matches y and x = y[i − m + 1 .. i − m + |x|] then
9:          output i − m + 1
10:       end if
11:     end for
12:     shift ← 1
13:   end if
14:   i ← i + shift
15: end while
16: end procedure
```
This algorithm uses three tables built during the pre-computation phase: a SHIFT table, a HASH table, and a PREFIX table. The SHIFT table is similar to the Boyer-Moore bad character skip table, and the other two tables are only used when the SHIFT table indicates not to shift-with a shift value of zero because there’s a potential match at the current position under examination in the input. As with the Boyer-Moore shifting, the size of the shift is
limited to the length of the pattern and in this case, the length of the minimum length pattern (call it minlen). Therefore, short patterns in the keyword set inherently make this algorithm less efficient [6]. The analysis of the expected running-time complexity of the main matching phase is shown by WM to be slightly less than linear in n, the length of the input text. This analysis assumes both an input text and pattern that are random byte strings with uniform distribution.
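A hedged Python sketch of the SHIFT/HASH mechanism with B = 2; the PREFIX table is omitted and candidate patterns are verified by direct comparison, so this illustrates the idea rather than the full WM algorithm (names are ours):

```python
def build_wm(patterns, B=2):
    """SHIFT and HASH tables; only the first m characters of each pattern
    are indexed, where m is the minimum pattern length (WM's truncation)."""
    m = min(len(p) for p in patterns)
    shift, hash_tbl = {}, {}
    for p in patterns:
        for q in range(B, m + 1):            # every B-block ending at offset q
            block = p[q - B:q]
            shift[block] = min(shift.get(block, m - B + 1), m - q)
        hash_tbl.setdefault(p[m - B:m], []).append(p)  # blocks with shift 0
    return m, shift, hash_tbl

def wm_search(text, patterns, B=2):
    m, shift, hash_tbl = build_wm(patterns, B)
    default = m - B + 1                      # shift when the block occurs in no pattern
    matches, i = [], m - 1                   # i = 0-based index of the window's end
    while i < len(text):
        block = text[i - B + 1:i + 1]
        s = shift.get(block, default)
        if s == 0:                           # potential match: verify candidates
            for p in hash_tbl.get(block, []):
                start = i - m + 1
                if text.startswith(p, start):
                    matches.append((start, p))
            s = 1
        i += s
    return matches
```

Short patterns shrink m and therefore the maximum shift, which is the weakness noted above.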
D. Other Algorithms
In [7], Michael O. Rabin and Richard M. Karp proposed the Rabin-Karp string searching algorithm (1987), which uses hashing to find any one of a set of pattern strings in a text. The Rabin-Karp algorithm calculates a hash value for the pattern and for each M-character subsequence of the text to be compared. If the hash values are unequal, the algorithm calculates the hash value for the next M-character sequence. If the hash values are equal, the algorithm performs a Brute Force comparison between the pattern and the M-character sequence. In this way, there is only one comparison per text subsequence, and Brute Force comparison is only needed when hash values match. In [8], J. Kytojoki, L. Salmela, and J. Tarhio presented a q-grams based Boyer-Moore-Horspool algorithm (BMH). This algorithm cuts a pattern into several q-length blocks and builds q-gram tables to calculate the shift value of the text window. It shows excellent performance on moderately sized pattern sets. However, for large-scale pattern sets it is not good enough in either searching time or memory requirement. There are also some other popular backward algorithms which combine the BM heuristic idea and the AC automaton idea. In [9], C. Coit, S. Staniford, and J. McAlerney proposed the AC_BM algorithm. This algorithm constructs a prefix tree of all patterns in the preprocessing stage, and then uses both the BM bad character and good suffix heuristics in the shift value computation. A similar algorithm called Setwise Boyer-Moore-Horspool (SBMH) [10] was proposed by M. Fisk and G. Varghese. It builds a trie structure over the suffixes of all patterns and computes the shift value using only the bad character heuristic. However, these two algorithms are also limited by memory consumption when the pattern set is large. In [11], C. Allauzen and M. Raffinot introduced the Set Backward Oracle Matching algorithm (SBOM).
Its basic idea is to construct a more lightweight data structure called factor oracle, which is built only on all reverse suffixes of minimum pattern length m window in every pattern. It consumes reasonable memory when pattern set is comparatively large.
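The Rabin-Karp scheme described above extends naturally to multiple equal-length patterns by keeping all pattern hashes in one lookup table; a hedged Python sketch (names and the modulus choice are ours):

```python
def rabin_karp_multi(text, patterns, base=256, mod=(1 << 61) - 1):
    """All patterns must share one length m; each m-window of text is hashed once."""
    m = len(patterns[0])
    assert all(len(p) == m for p in patterns)

    def h(s):
        v = 0
        for c in s:
            v = (v * base + ord(c)) % mod
        return v

    targets = {h(p): p for p in patterns}  # hash -> pattern (collisions overwrite)
    pow_top = pow(base, m - 1, mod)        # weight of the outgoing character
    matches = []
    v = h(text[:m])
    for i in range(len(text) - m + 1):
        p = targets.get(v)
        if p is not None and text[i:i + m] == p:  # Brute Force confirm on a hash hit
            matches.append((i, p))
        if i + m < len(text):                     # roll the hash one character forward
            v = ((v - ord(text[i]) * pow_top) * base + ord(text[i + m])) % mod
    return matches
```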
In [12], B. Xu and J. Li proposed the Recursive Shift Indexing (RSI) algorithm for this problem. RSI employs a heuristic based on a combination of the two neighboring suffix character blocks in the window. It also uses bitmaps and recursive tables to enhance matching efficiency. These ideas are enlightening for large-scale string matching algorithms. In [13], Zhou proposed the MDH algorithm, which optimizes the WM algorithm with multi-phase hash and dynamic-cut heuristics. According to Zhou's experiments, the performance of MDH is superior to WM and some other algorithms. Baeza-Yates and Gonnet introduced the bit-parallelism technique [14], which takes advantage of the intrinsic parallelism of the bit operations inside a computer word, allowing the number of operations that an algorithm performs to be cut down by a factor of up to w, where w is the number of bits in the computer word. Bit-parallelism is particularly suitable for the efficient simulation of nondeterministic (suffix) automata. In 2013, Zhenlong Yuan et al. proposed a multi-pattern matching algorithm named TFD for large-scale and high-speed URL filtering [15]. TFD employs two-phase hash, a finite state machine and double-array storage to eliminate the performance bottleneck of the blacklist filter.
III. OUR PROPOSED ALGORITHM
Our work differs from these previous efforts in that it focuses on building a graph transition structure and a dynamic linked list search technique for multi-pattern matching that can handle a large number of patterns, and can easily be combined with any existing multi-pattern matching application.
To illustrate the process of the algorithm, we consider the following example:
Patterns set $P=\{"search", "ear", "arch", "chart"\}$
$T=\{"strrmatecadnsearchof"\}$
A. Preprocessing stage
Following the AC algorithm approach, we build the automaton shown in Fig. 1.
[Fig. 1. The Aho-Corasick automaton for pattern set P]
The CW algorithm creates a basic trie data structure using the reversed keywords. Each node $v$, except the root node, is
labeled with a character (byte) from a pattern. A basic CW-style trie for pattern set \( P \) is shown in Fig. 2. The out function receives a trie node \( v \) and returns whether or not the path from node \( v \) to the root represents a keyword. If so, out returns the keyword; otherwise it returns nothing (the empty set, denoted \( \emptyset \)), and the path from \( v \) to the root is simply a proper suffix of one or more patterns in \( P \).
In preprocessing stage, the WM algorithm builds three tables, a SHIFT table, a HASH table, and a PREFIX table. The HASH and PREFIX tables are used when the shift value is 0. Fig. 3 shows the SHIFT table and HASH table for \( B=2 \).
In the preprocessing stage, we create a graph transition structure representing the pattern set \( P \). Our graph \( G \) has \( n \) levels (\( n \) is the maximum length of the patterns in \( P \)); at every level we only have to keep the distinct characters of the patterns. Our graph structure for the pattern set \( P = \{ "search", "ear", "arch", "chart" \} \) is shown in Fig. 4 below:
First, following our approach, the storage space is reduced by storing only the distinct characters at each level. This memory requirement is equal to or less than the storage space of the DFA matching structures and the goto functions of the AC, WM and CW algorithms. Second, as our algorithm does not use the SHIFT and HASH tables, it avoids their construction time and storage space.
B. Searching Stage
To analyze the search process, we assume the input string is \( T= \) “srmcdadsearchof”. The searching stage of Aho-Corasick walks through the automaton checking for a transition; if one exists, the transition takes place, otherwise the failure function is consulted. The AC algorithm uses 15 steps to detect three patterns, output = \{ear, arch, search\}, as seen in Fig. 5, while the CW algorithm uses 9 steps to detect two patterns, output = \{ear, search\}.
In the searching stage, we use a list of pointers for searching to minimize the memory space. The maximum number of element in the pointer is equal the number of patterns. We initialize the pointer value by the length of each pattern. The structure of pointer is shown below:
While scanning the input string, at every step we initialize a pointer \( P_i \) (\( i \) corresponds to the current character position). If the current character matches the next expected character of a pattern tracked by \( P_i \), the corresponding pointer value is reduced by 1; otherwise that value is removed. If the current character does not match in graph \( G \) at all, \( P_i \) itself is removed; otherwise we continue to maintain \( P_i \). The operation of the searching stage of our algorithm is illustrated in Fig. 6.
The number of steps in our algorithm is the length of the input string \( T \). For a text of length \( n \), a maximum pattern length \( L \), and \( m \) patterns, the worst-case running time is \( O(n+m+L) \), though the average case is often much better because we do not always maintain all \( m \) pointers \( P_i \) at the same time.
C. Our Proposed Algorithm
The pseudo-code for our algorithm is given below:
**Algorithm 4. Our Algorithm**
1. procedure DNL(T, n, m, Pt, G)
   Input:
   - T --- array of n bytes representing the text input
   - n --- integer representing the text length
   - P[k] --- array of patterns (k = 1, ..., m)
   - Pt[k] --- array of keyword lengths (k = 1, ..., m)
   - m --- number of patterns
   - G --- graph of pattern set P
   - S --- set of active pointers
2. S ← ∅
3. for i = 1 to n do
4.   initialize pointer P_i ← Pt
5.   S ← S ∪ {P_i}
6.   for each pointer P_j in S do
7.     if T[i] matches in G for some pattern k tracked by P_j then
8.       P_j[k] ← P_j[k] − 1
9.       if P_j[k] = 0 then output "pattern P[k] detected"; remove k from P_j
10.      end if
11.    else remove P_j from S
12.    end if
13.  end for
14. end for
15. end procedure
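As an executable illustration of the search described above, the following Python sketch implements our reading of the pseudocode: the graph G is simplified to direct per-pattern character checks, and a "pointer" is a map from pattern index to the number of characters still to match (names are ours):

```python
def dll_search(text, patterns):
    """For each text position spawn a candidate anchored there; at each step a
    candidate keeps only the patterns whose next expected character matches,
    and a counter reaching zero signals a complete match."""
    matches = []
    active = []  # list of (start, {pattern_index: chars_remaining})
    for i, c in enumerate(text):
        # new pointer P_i anchored at position i, tracking every pattern
        active.append((i, {k: len(p) for k, p in enumerate(patterns)}))
        survivors = []
        for start, remaining in active:
            keep = {}
            for k, rem in remaining.items():
                p = patterns[k]
                if p[len(p) - rem] == c:          # c extends pattern k here
                    if rem == 1:
                        matches.append((start, p))  # pattern fully consumed
                    else:
                        keep[k] = rem - 1
            if keep:
                survivors.append((start, keep))   # pointer unchanged-or-dead are dropped
        active = survivors
    return matches
```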
---
IV. EXPERIMENT AND RESULT
Experiments are designed to verify the performance of our proposed algorithm, both in searching time and in space occupation, and to compare it with the AC, CW and WM algorithms. We used the pure C implementations of AC, CW and WM excerpted from Snort version 2.8.3.1 [16]. All the experimental results reported were obtained on a PC with an Intel Pentium Dual Core 3 GHz CPU and 2 GB of memory.
Comparisons are done from two complementary aspects: one with a fixed number of patterns and varying pattern lengths; the other with a fixed pattern length and varying numbers of patterns. All patterns are generated randomly with equal length, but the text is self-correlated, which means part of the text is generated randomly and the rest is generated according to that part. For instance, if we need a text of length 10,000, we generate the first 500 characters randomly, and the rest of the text is generated as follows: each time we pick out several characters from the first 500 characters and append them to the end of the text until the length reaches 10,000; the position and the length of each pick are both random. The aim of deriving such a text is to guarantee that there exist a number of matches, as random patterns and text would result in few matches. Thus we can better simulate the patterns and traffic in a real network. Though these are only meaningless characters, they provide a meaningful reference for the performance. Fig. 7 shows the total searching time for a text with 50,000 characters and 500 patterns of varying length.
The time in Fig. 8 is the cumulative time of 1,000 repeated lookups.
---
V. CONCLUSION
In this paper, we have presented a new algorithm for multiple-pattern exact matching. Our approach reduces character comparisons and memory space based on a graph transition structure and a search technique using a dynamic linked list. Theoretical analysis and experimental results, when compared with previously known pattern-matching algorithms, show that our algorithm is highly efficient in both space and time.
ACKNOWLEDGMENT
This research is partly supported by the QG.12.21 project of Vietnam National University, Hanoi.
REFERENCES
Nguyen Dang Le received the BSc degree in computer science and the MSc degree in information technology from College of technology, Vietnam National University in Hanoi, Vietnam, in 1996 and 2005, respectively. He currently works in Haiphong University, Vietnam. His research interests include algorithm theory, network and wireless security.
Vinh Trong Le received the MSc degree in information technology from Faculty of Mathematics, Mechanics and Informatics, Hanoi University of Science, Vietnam National University in 1997, PhD degree in computer science from Japan Advanced Institute of Science and Technology in 2006, respectively. He is currently an associate professor at the Faculty of Mathematics, Mechanics and Informatics, Hanoi University of Science, Vietnam National University. His research interests include algorithm theory, network and wireless security.
Dac-Nhuong Le received the BSc degree in computer science and the MSc degree in information technology from College of Technology, Vietnam National University, Vietnam, in 2005 and 2009, respectively. He is currently a lecture at the Faculty of information technology in Haiphong University, Vietnam. His research interests include algorithm theory, computer network and networks security.
Toward Formal Data Set Verification for Building Effective Machine Learning Models
Jorge López, Maxime Labonne and Claude Poletti
Airbus Defence and Space, Issy-Les-Moulineaux, France
{jorge.lopez-c, maxime.labonne, claude.poletti}@airbus.com
Keywords: Machine Learning, Data Set Collection, Formal Verification, Trusted Artificial Intelligence
Abstract: In order to properly train a machine learning model, data must be properly collected. To guarantee proper data collection, verifying that the collected data set holds certain properties is a possible solution; for example, guaranteeing that the data set contains samples across the whole input space, or that the data set is balanced w.r.t. the different classes. We present a formal approach for verifying a set of arbitrarily stated properties over a data set. The proposed approach relies on the transformation of the data set into a first order logic formula, which can later be verified w.r.t. the different properties stated in the same logic. A prototype tool, which uses the z3 solver, has been developed; the prototype takes as input a set of properties stated in a formal language and formally verifies a given data set w.r.t. that set of properties. Preliminary experimental results show the feasibility and performance of the proposed approach, as well as its flexibility for expressing properties of interest.
1 INTRODUCTION
In the past few decades, Machine Learning (ML) has gained a lot of attention, partially due to the creation of software libraries (e.g., (Pedregosa et al., 2011)) that ease the usage of complex algorithms. In this context, the volume of stored data has dramatically increased over the last few years. However, an often overlooked task is the data extraction and collection to create proper data sets to train efficient machine learning models.
When retrieving information for the data set collection, there are key points to take into consideration, because ML models generalize their outputs based on the training (seen) data. A commonly encountered problem is that a model is expected to generalize well over unseen regions of the input space, while such regions do not behave in accordance with the provided training data. Another problem that often occurs is that a class in the data set is underrepresented (e.g., for an anomaly detection data set, 99% of the examples are normal events). In general, many data biases can occur in a collected data set. A simple strategy while collecting data sets is to collect a large number of entries, conjecturing that important data are likely to be found if more data are available. However, this strategy can yield incorrect results; moreover, large data sets can cause ML models to be trained for longer than necessary, which in turn can make certain algorithms that would yield accurate results unusable for such cases. Additionally, with the proliferation of machine-generated data sets, for example via Generative Adversarial Networks, assuring that the generated data set holds some properties of interest is of utmost importance.
In order to guide the collection of a proper data set to effectively train a ML model, verifying that a partially collected data set holds certain properties of interest is a possible solution. This verification can be done through the use of formal methods, such as deductive verification (Barrett and Tinelli, 2018). Considering a formal specification of a data set, a formal proof that the data set holds certain properties can be provided. Whenever this specification is violated (certain properties do not hold), identifying the properties that do not hold may help to diagnose the missing or incorrect information. This paper is devoted to the formal verification of machine learning data sets through the use of Satisfiability Modulo Theories (SMT) (Barrett and Tinelli, 2013) (for preliminary concepts on ML and SMT, see Section 2). The approach is based on the encoding of a data set into a Many-Sorted First Order Logic (MSFOL) formula which is later verified together with the desired set of properties (see Section 3).
A tool for the verification of data sets has been
developed. The tool relies on the use of the widely-known z3 (De Moura and Bjørner, 2008) solver. Preliminary experimental results show that in spite of the high computational complexity of SMT procedures, for the verification of data sets, these properties can be verified in a reasonable amount of time (see Section 4).
It is important to note that verifying certain properties over a data set is a task which is consistently considered necessary, and a norm for many practitioners. However, in the literature very few researchers focus on the automatic validation of data sets (see, for example, Carvalho et al., 2017). Furthermore, to the best of our knowledge, there is no work which aims at providing means for the verification of arbitrarily stated properties, and moreover, in a formal manner. In this light, this paper aims at exploring this direction.
2 PRELIMINARIES
In order to make our paper as self-contained as possible, we have included a brief description of some preliminary concepts required in our work.
2.1 Machine learning and structured data sets
We consider that a structured machine learning data set contains training examples alongside their expected outputs. Given the inputs and expected outputs, the final goal of a supervised ML algorithm is to learn how to map a training example to its expected output. For an unsupervised ML algorithm the goal is to learn patterns from the data, and the expected outputs do not exist. In our work, we consider that the expected outputs are always present; a data set for unsupervised machine learning (where there are no expected outputs) simply carries identical outputs for all examples. Formally, for a data set of cardinality \( m \), the matrix of feature vectors is denoted as \( D \) and the vector of associated expected outputs as \( O \).
Likewise, the \( j \)-th feature (column vector) is denoted as \( (D^T)_j \), where \( D^T \) denotes the transpose of the matrix \( D \). Finally, the \( j \)-th parameter of the \( i \)-th training example is denoted by the matrix element \( d_{i,j} \).
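As a small illustration (toy values, not the paper's example data), this notation maps directly onto nested lists in Python:

```python
# Hypothetical illustration of the data set notation: D is an m x n matrix of
# training examples, O the vector of expected outputs, D^T its transpose.
D = [
    [0.05, 0.70],   # training example d_1 (row vector)
    [-0.09, 0.68],  # training example d_2
    [-0.21, 0.69],  # training example d_3
]
O = [1, 0, -1]      # expected output o_i for each training example

m, n = len(D), len(D[0])              # cardinality m, number of features n
D_T = [list(col) for col in zip(*D)]  # transpose: D_T[j] is the j-th feature column

print(D_T[1])   # the second feature (column vector)
print(D[2][0])  # d_{3,1}: first feature of the third training example
```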
2.2 Satisfiability Modulo Theories (SMT)
SMT is a decision problem that, for a given first order logic formula \( \phi \), determines whether \( \phi \) is satisfiable w.r.t. a set of background theories. For example, w.r.t. integer linear arithmetic, the following formula is satisfiable: \( \Phi = (x \in \mathbb{Z}) \land (y \in \mathbb{Z}) \land (x < y) \land (x < 0) \land (y > 0) \land (x + y > 0) \). The formula can be satisfied, for instance, by the interpretation \( x = -1, y = 2 \). The importance of restricting the interpretation of certain function and predicate symbols in a first-order logic formula (according to a background theory \( T \)) is that specialized decision procedures have been proposed for such theories, making the problem of checking the satisfiability of such formulas decidable.
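As a sanity check of this example, the formula can be evaluated concretely; the sketch below (illustration only, not an SMT procedure) brute-forces a small box of the infinite integer domain:

```python
# Brute-force check (illustration only) that the example formula
# Phi = (x < y) and (x < 0) and (y > 0) and (x + y > 0) over the integers
# is satisfiable, with x = -1, y = 2 as one satisfying interpretation.
def phi(x: int, y: int) -> bool:
    return x < y and x < 0 and y > 0 and x + y > 0

assert phi(-1, 2)  # the interpretation given in the text

# An SMT solver searches for such interpretations symbolically; here we
# simply enumerate a small box of the integer domain instead.
models = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if phi(x, y)]
print(models)
```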
It is important to note that many of the applications that use SMT involve different data types (Barrett and Tinelli, 2018). Therefore, SMT usually works with a sorted (typed) version of first order logic (Manzano, 1993). Essentially, in SMT there exists a finite set of sort symbols (types) \( S \) and an infinite set of variables \( X \) for the (sorted) formulas, where each variable has a unique associated sort in \( S \). This is an oversimplification of a many-sorted first order logic (MSFOL). As MSFOL is useful to express our formulas of interest, in the next subsection we provide a formal definition of its syntax (Pinkseiner and Zarba, 2006; Barrett and Tinelli, 2018; Barrett et al., 2009).
2.2.1 Many-sorted First-order Logic Syntax
A signature is a tuple \( \Sigma = (S,C,F,P) \), where \( S \) is a non-empty and finite set of sorts, \( C \) is a countable set of constant symbols whose sorts belong to \( S \), and \( F \) and \( P \) are countable sets of function and predicate symbols, respectively, whose arities are constructed using sorts that belong to \( S \). Predicates and functions have an associated arity of the form \( \sigma_1 \times \sigma_2 \times \ldots \times \sigma_n \rightarrow \sigma \), where \( n \geq 1 \) and \( \sigma_1, \sigma_2, \ldots, \sigma_n, \sigma \in S \).
A \( \Sigma \)-term of sort \( \sigma \) is either: (i) a variable \( x \) of sort (type) \( \sigma \), where \( \sigma \in S \); (ii) a constant \( c \) of sort (type) \( \sigma \), where \( \sigma \in S \); or (iii) an application \( f(t_1, \ldots, t_n) \), where \( f \in F \) has arity \( \sigma_1 \times \sigma_2 \times \ldots \times \sigma_n \rightarrow \sigma \) and each \( t_i \) (for \( i \in \{1, \ldots, n\} \)) is a \( \Sigma \)-term of sort \( \sigma_i \).
A $\Sigma$-atom ($\Sigma$-atomic formula) is an expression in the form $s = t$ or $p(t_1, t_2, \ldots, t_n)$, where $=$ denotes the equality symbol, $s$ and $t$ are $\Sigma$-terms of the same sort, $t_1, t_2, \ldots, t_n$ are $\Sigma$-terms of sort $\sigma_1, \sigma_2, \ldots, \sigma_n \in S$, respectively, and $p$ is a predicate of arity $\sigma_1 \times \sigma_2 \times \ldots \times \sigma_n$.
A $\Sigma$-formula is either: (i) a $\Sigma$-atom; (ii) $\neg \phi$, where $\phi$ is a $\Sigma$-formula and $\neg$ denotes negation; (iii) $\phi \land \psi$ or $\phi \lor \psi$, where both $\phi, \psi$ are $\Sigma$-formulas (likewise, the short notations $\phi \rightarrow \psi$ and $\phi \leftrightarrow \psi$ for $\neg \phi \lor \psi$ and $(\phi \land \psi) \lor (\neg \phi \land \neg \psi)$, respectively); finally, (iv) $\exists x \in \sigma \, \phi$, where $\phi$ is a $\Sigma$-formula and $x$ is a variable of sort $\sigma$ ($x \in \sigma$ indicates that $x$ has the sort $\sigma$); likewise, the short notation $\forall x \in \sigma \, \phi$ for $\neg \exists x \in \sigma \, \neg \phi$, where $\exists$ denotes the existential quantifier and $\forall$ the universal quantifier, as usual.
We leave out the formal semantics of MSFOL formulas, their interpretations, and satisfiability, as we feel it would unnecessarily load the paper with unused formalism. However, we briefly discuss some aspects of MSFOL formula satisfiability. As previously mentioned, for some signatures there exist decision procedures which help to determine whether a given formula is satisfiable. For example, consider the signature with a single sort \( \mathbb{R} \), all rational number constants, the functions \( +, -, \times \) and the predicate symbol \( \leq \); SMT will interpret the constants, functions and predicates in the usual real arithmetic sense. The satisfiability of \( \Sigma \)-formulas for this theory (real arithmetic) is decidable, even for formulas with quantifiers (decision procedures often seek to “eliminate” the quantifiers and obtain an equivalent quantifier-free formula).
3 DATA SET ENCODING AND FORMAL VERIFICATION
As previously mentioned (see Section 2), a ML data set is composed of a matrix \( D_{m \times n} \) and a vector \( O_m \), where \( m \) is the number of training examples, \( n \) the number of features, \( D \) contains the training examples, and \( O \) the expected outputs. However, note that our definition of this matrix never mentions the type of each feature in the data set. In general, there is no theoretical limitation on the type of these features; nonetheless, for practical reasons, we consider that all features are real valued. The main reason is that otherwise additional information would be required for each of the features. Moreover, in practice, well-known libraries work with real-valued features. As usual, for those features which are not naturally real, an encoding must be found (for example, one-hot encoding for categorical features). Thus, we consider that \( d_{i,j}, o_i \in \mathbb{R} \) for \( i \in \{1, \ldots, m\}, j \in \{1, \ldots, n\} \). Additionally, we assume that \( O \) is always present in the data sets, independently of whether the data set is meant for supervised or unsupervised machine learning. If a data set is not labeled, then \( \forall i, k \in \{1, \ldots, m\} \; o_i = o_k \).
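For instance, a minimal one-hot encoder (an assumed helper, not part of the paper's tool) turns a categorical feature into real-valued columns, and the unlabeled-data-set convention sets all outputs equal:

```python
# Sketch (assumed helper, not from the paper): encode a categorical feature
# as real-valued indicator columns via one-hot encoding, as the text
# suggests for features that are not naturally real.
def one_hot(values):
    """Map each categorical value to a real-valued indicator vector."""
    categories = sorted(set(values))
    return [[1.0 if v == c else 0.0 for c in categories] for v in values]

colors = ["red", "green", "red", "blue"]
print(one_hot(colors))

# For an unlabeled data set the convention above makes all outputs equal:
O = [0.0] * len(colors)
```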
Encoding a ML data set as a MSFOL formula. Having a convenient formal description for a data set eases the encoding of this data set as a MSFOL formula. To encode the data as a formula, we make use of the theory of arrays (which considers basic read and write axioms). We denote that an object \( a \) is of sort array, with indices of type (sort) \( \mathbb{T}_1 \) and holding objects of type \( \mathbb{T}_2 \), as \( a \in \mathbb{A}_{\mathbb{T}_1,\mathbb{T}_2} \). Indeed, a data set can be encoded using Algorithm 1.
3.1 Formal verification of data sets
Indeed, a data set can be formally described by an MSFOL formula \( \phi_{ds} \) with the following structure: \( \phi_{ds} \) is a conjunction of five main parts, that is, i) the assertion that an integer variable \( m \) equals the number of training examples, a variable \( n \) the number of features, and a variable \( l \) the number of distinct labels; ii) the assertion that \( D \) is a two-dimensional (integer indexed) real-valued array (of size \( m \times n \)) and \( O, L \) are integer indexed real-valued arrays (of size \( m \) and \( l \), respectively); iii) \( D[i][j] \) contains the \( j \)-th feature value for the \( i \)-th training example; iv) \( O[i] \) contains the expected output for the \( i \)-th training example; and, v) \( L[i] \) contains the \( i \)-th (distinct) label.
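Algorithm 1 itself is not reproduced in this excerpt; a plausible sketch of the encoding it describes, emitting SMT-LIB assertions binding \( m, n, l, D, O \) and \( L \), might look like the following (note a real encoder would also render negative literals as `(- x)` per SMT-LIB syntax):

```python
def encode_data_set(D, O):
    """Sketch of the data set encoding described in the text (Algorithm 1
    is not shown in this excerpt): emit SMT-LIB declarations and
    assertions that pin down m, n, l, D, O and L for the given data."""
    m, n = len(D), len(D[0])
    labels = sorted(set(O))  # distinct labels, in some fixed order
    lines = [
        "(declare-const m Int)", "(declare-const n Int)", "(declare-const l Int)",
        "(declare-const D (Array Int (Array Int Real)))",
        "(declare-const O (Array Int Real))",
        "(declare-const L (Array Int Real))",
        f"(assert (= m {m}))", f"(assert (= n {n}))",
        f"(assert (= l {len(labels)}))",
    ]
    for i, row in enumerate(D):
        for j, v in enumerate(row):
            lines.append(f"(assert (= (select (select D {i}) {j}) {v}))")
        lines.append(f"(assert (= (select O {i}) {O[i]}))")
    for i, lab in enumerate(labels):
        lines.append(f"(assert (= (select L {i}) {lab}))")
    return "\n".join(lines)

print(encode_data_set([[0.5, 1.0]], [1]))
```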
We assume that we want to verify $k$ properties over the data set, and furthermore, that these properties are expressed also in MSFOL. Indeed, MSFOL allows to express many properties of interest (in Section 3.2 we showcase its expressiveness). Therefore, we assume that we are given $\pi_1, \ldots, \pi_k$ MSFOL formulas to verify. These properties involve the variables in $\phi_{ds}$. Additionally, we assume that these formulas should all hold independently over the data set, and
their conjunction is satisfiable. Thus, we impose the restriction that \( \pi_x \land \pi_y \) is satisfiable for \( x, y \in \{1, \ldots, k\} \); we call this set of properties the data set specification \( \sigma \). This means that two properties may not contradict each other. For example, it cannot be required that the data set has more than 30 training examples and at the same time that it has at most 20 (\( (\pi_1 \leftrightarrow (m > 30)) \land (\pi_2 \leftrightarrow (m \leq 20)) \)). Additionally, that the conjunction of properties must be satisfiable means that there is an interpretation that makes this formula (the conjunction) evaluate to TRUE, i.e., there exists a data set which can satisfy the specification. Otherwise, the verification of any data set is useless, as no data set can hold such a set of properties.
The formal data set verification problem can be reduced to the following: given a data set formula \( \phi_{ds} \) (created using Algorithm 1 from \( D \) and \( O \)) and a data set specification \( \sigma = \bigwedge_{i=1}^{k} \pi_i \), is \( \phi_{ds} \land \sigma \) satisfiable? If the conjunction of these formulas is satisfiable then, each of the properties must hold for the data set as the conjunction of all properties is satisfiable by itself; if the conjunction is satisfiable we say that the data set holds the properties \( \pi_1, \ldots, \pi_k \) or that the data set conforms to the specification \( \sigma \). Perhaps this is quite an abstract view of the problem. For that reason, in the following subsection we provide concrete examples that should help the reader to better understand.
3.2 Example data set and properties
First, let us consider a very small data set:
\[
D = \begin{pmatrix}
0.051267 & 0.69956 \\
-0.092742 & 0.68494 \\
-0.21371 & 0.69225 \\
-0.375 & 0.50219 \\
-0.51325 & 0.46564 \\
-0.52477 & 0.2098 \\
-0.39804 & 0.034357 \\
-0.30588 & -0.19225 \\
0.016705 & -0.40424 \\
0.13191 & -0.51389
\end{pmatrix},
\]
\[
O = \begin{pmatrix}
1 \\
0 \\
-1 \\
-1 \\
-1 \\
-1 \\
-1 \\
-1 \\
-1 \\
-1
\end{pmatrix}.
\]
After applying Algorithm 1 to \( D \) and \( O \) as shown before, the output \( \phi_{ds} \) is:
\[
(m, n, l \in \mathbb{Z}) \land (m = 10) \land (n = 2) \land (l = 3) \land (D \in \mathbb{A}_{\mathbb{Z},\mathbb{A}_{\mathbb{Z},\mathbb{R}}}) \land (O \in \mathbb{A}_{\mathbb{Z},\mathbb{R}}) \land (L \in \mathbb{A}_{\mathbb{Z},\mathbb{R}})
\]
\[
\land\ (D[0][0]=0.051267) \land (D[0][1]=0.69956) \land (O[0]=1) \land (L[0]=1) \land (D[1][0]=-0.092742) \land (D[1][1]=0.68494) \land (O[1]=0) \land (L[1]=0) \land (D[2][0]=-0.21371) \land (D[2][1]=0.69225) \land (O[2]=-1) \land (L[2]=-1) \land (D[3][0]=-0.375) \land (D[3][1]=0.50219) \land (O[3]=-1) \land (D[4][0]=-0.51325) \land (D[4][1]=0.46564) \land (O[4]=-1) \land (D[5][0]=-0.52477) \land (D[5][1]=0.2098) \land (O[5]=-1) \land (D[6][0]=-0.39804) \land (D[6][1]=0.034357) \land (O[6]=-1) \land (D[7][0]=-0.30588) \land (D[7][1]=-0.19225) \land (O[7]=-1) \land (D[8][0]=0.016705) \land (D[8][1]=-0.40424) \land (O[8]=-1) \land (D[9][0]=0.13191) \land (D[9][1]=-0.51389) \land (O[9]=-1)
\]
Let us start by showcasing very simple properties and how their formal verification works. Suppose the specification consists of a single property: “the data set must contain at least 100 training examples,” this property can be expressed in MSFOL simply as \( \pi_\# \leftrightarrow (m \geq 100) \). Notice how \( \phi_{ds} \land \pi_\# \) is not satisfiable
as there does not exist an interpretation that makes it evaluate to TRUE; in particular, if \( m \) is greater than 99, then the clause (in \( \phi_{ds} \)) \( m = 10 \) cannot evaluate to TRUE, and since this is a conjunction, \( \phi_{ds} \land \pi_\# \) evaluates to FALSE. Similarly, if \( m \) is 10, then \( \pi_\# \) makes the conjunction evaluate to FALSE. Thus, we say that the data set does not hold the property \( \pi_\# \).
Let us now examine more complex properties that can be formally verified over the data set. A slightly more complex property is: “the data set must be min-max normalized,” which can be expressed in MSFOL as \( \pi_\pm \triangleq \neg \exists (i, j \in \mathbb{Z}) ((i \geq 0) \land (i < n) \land (j \geq 0) \land (j < m) \land ((D[i][j] < \min) \lor (D[i][j] > \max))) \). Certainly, \( \min \) and \( \max \) are defined constants (e.g., -1 and 1), and either these variables must be defined or their values must be replaced in the formula. There are many particularities that must be considered; for example, since there is no notion of loops in first order logic, we require a defined function to count the number of instances where a given label appears. To overcome this particular problem a recursive function can be defined. In order to keep the paper readable, we avoid these definitions and simply denote defined functions in mathematical bold font. The interested reader can refer to the prototype implementation section (Section 4) and correspondingly to the tool’s repository to check the full property implementations. As an example, the property “the data set must be balanced w.r.t. its labels” can be stated as: \( \pi_\equiv \triangleq \neg \exists i (i \in \mathbb{Z}) ((i \geq 0) \land (i < l) \land (\mathbf{S}(O, L[i], m) < \frac{m}{p})) \), where \( \mathbf{S}(A, v, s) \) is a function that returns the number of times the value \( v \) is found in an array \( A \) up to index \( s \); that is, \( \mathbf{S}(O, L[i], m) \) is how many times the \( i \)-th label is found in the output array, and \( p \) is a defined constant.
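Since first order logic has no loops, \( \mathbf{S} \) is defined recursively; the same recursion, and the balancedness check it supports, can be mirrored in Python (the threshold constant `p` below is an assumption for illustration):

```python
# The counting function S(A, v, s) -- number of times value v occurs in
# array A up to index s -- defined by the same recursion the text uses to
# work around the absence of loops in first order logic.
def S(A, v, s):
    if s <= 0:
        return 0
    return S(A, v, s - 1) + (1 if A[s - 1] == v else 0)

O = [1, 0, -1, -1, -1]
assert S(O, -1, len(O)) == 3

# Balancedness check mirroring pi_equiv for an assumed threshold m/p:
def balanced(O, labels, p):
    m = len(O)
    return all(S(O, lab, m) >= m / p for lab in labels)

print(balanced(O, [1, 0, -1], p=3))
```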
We have exemplified different properties that can be formally verified over data sets. We do not focus on an extensive list of properties but rather on providing means for formally verifying any property over a given data set. Many more properties could be stated; for example, that there are no contradicting training examples in the data set, i.e., there do not exist two equal elements in \( D \) with different indices for which the corresponding elements in \( O \) differ. We conclude this section with these examples, noting that, as shown above, the formalism is quite flexible for expressing real properties of interest.
4 TOOL DEVELOPMENT AND EXPERIMENTAL RESULTS
In order to assess the feasibility and efficiency of the proposed approach, a prototype tool has been developed in Julia (Bezanson et al., 2017). Generally speaking, the tool takes as input: a Comma Separated Values (CSV) file as the data set, where the last column of each row is the expected output for the training example (the remaining columns being its features); and
a directory, where the properties to be checked are stored, one per file in the SMT-LIB language.
SMT-LIB is a language that many SMT solvers can take as input, and its syntax is quite intuitive. For example, the property $\pi_\pm \triangleq \neg \exists (i, j \in \mathbb{Z})((i \geq 0) \land (i < n) \land (j \geq 0) \land (j < m) \land ((D[i][j] < \min) \lor (D[i][j] > \max)))$ can be expressed in SMT-LIB as shown in Listing 1.
```
Listing 1: $\pi_\pm$ in SMT-LIB
(assert (not (exists ((i Int) (j Int))
  (and
    (>= i 0)
    (< i n)
    (>= j 0)
    (< j m)
    (or
      (< (select (select D i) j) min)
      (> (select (select D i) j) max))))))
```
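A plain-Python mirror of the property in Listing 1 (illustration only; the actual tool discharges it with z3) makes the intended semantics concrete:

```python
# Plain-Python mirror of Listing 1's min-max property: no entry of D may
# lie outside [min_v, max_v]. The SMT-LIB version asserts the negation of
# the existence of a violating (i, j) pair; `not any(...)` says the same.
def holds_min_max(D, min_v=-1.0, max_v=1.0):
    return not any(x < min_v or x > max_v
                   for row in D for x in row)

D = [[0.051267, 0.69956], [-0.092742, 0.68494]]
print(holds_min_max(D))        # all entries within [-1, 1]
print(holds_min_max([[1.5]]))  # 1.5 violates the upper bound
```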
The tool works as described in Algorithm 2. Note that SMT(·) denotes a call to an SMT decision procedure that determines whether the given formula is satisfiable. In our tool, we use the z3 (De Moura and Bjørner, 2008) solver (which takes the SMT-LIB format as input). The interested reader can check the properties stated in SMT-LIB and more information about our tool in the tool’s repository (López, 2021).
4.1 Preliminary experimental results
All experiments were executed on commodity hardware with the intention of showcasing the performance of the proposed approach. The experiments were performed on Ubuntu 20.04 LTS with an Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz (4 logical CPUs) and 8GB of RAM.
In order to evaluate the feasibility of our proposed solution, the properties $\pi_\#, \pi_\pm, \pi_\ast, \pi_\equiv$ have been encoded in SMT-LIB, and a data set was tested incrementally. We present the performance results and the satisfiability of the properties w.r.t. the data sets in Figures 1 and 2, respectively. As can be seen, the performance of the proposed approach is acceptable: as in any formal verification approach, the decision procedures are often exponential in the worst case. For formally guaranteeing that the data set holds certain properties of interest, this procedure can be executed once, in which case the running time is not much of a constraint. Our preliminary experimental evaluation shows that properties are solved fast (milliseconds per hundreds of training examples), especially simple properties (e.g., $\pi_\#$).
Algorithm 2: Data Set Verification
```
Input: A CSV data set file $f$ (with $n \geq 1$ features, and $m \geq 1$ training examples), and a directory $d$ containing property files
Output: Verdicts for each property $\pi \in d$
Step 0: Read $f$ and store it into the arrays $D$ and $O$, and set $m$ and $n$, correspondingly;
Step 1: Use Algorithm 1 to obtain $\phi_{ds}$ from $D, O, m,$ and $n$;
Step 2: foreach $p \in d$ do
Read the contents of $p$ into the formula $\pi$;
if SMT($\phi_{ds} \land \pi$) is satisfiable then
display($\pi$ holds in the data set $f$)
else
display($\pi$ does not hold for the data set $f$)
```
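The driver loop of Algorithm 2 can be sketched as follows; the SMT call is abstracted behind a callable so the sketch runs without a solver (the real tool invokes z3 on the SMT-LIB text):

```python
import os

def verify_data_set(phi_ds: str, prop_dir: str, smt_check) -> dict:
    """Sketch of Algorithm 2's loop: read each property file pi from
    prop_dir and report whether phi_ds AND pi is satisfiable (SAT means
    the property holds). `smt_check` stands in for the SMT procedure so
    this sketch runs without a solver installed."""
    verdicts = {}
    for name in sorted(os.listdir(prop_dir)):
        with open(os.path.join(prop_dir, name)) as f:
            pi = f.read()
        verdicts[name] = smt_check(phi_ds + "\n" + pi)
    return verdicts
```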
Figure 1: Performance of formal data set verification
It is interesting to observe the satisfiability of the properties. It is normal that when adding more training examples the data set may become balanced or unbalanced ($\pi_\equiv$); it is also normal that all data sets which have fewer than 100 training examples fail the property $\pi_\#$. One can conclude that the example data set is well min/max normalized, as $\pi_\pm$ is always satisfiable. Finally, note that even if the language allows it and the solver can read the property $\pi_\ast$, the property is very complicated as it is quantified over an array; the solver cannot process such a complex formulation, and so the property always returns an unknown status.

A future direction is to consider the formal verification of unstructured data for machine learning.
REFERENCES
Midterm Exam: Introduction to Database Systems: Solutions
1. Entity-Relationship Model [16 points]
Below is the preferred solution:
The following variants were also accepted:
a. [12 points] Complete the diagram above to be a valid E-R diagram reflecting the following constraints. (Be sure to make your bold lines very bold!)
- A student, uniquely identified by her SID, takes an exam (for example, Midterm #1) on exactly one date. A student may take any number of exams (for example, Midterm #1, Midterm #2, and a final exam), and every exam is taken by at least one student. An exam is uniquely identified by the combination of a course and a semester.
- Every exam has at least one overseer (for example, Prof. Hellerstein, Derrick, and Bill). An overseer is uniquely identified by an exam and the overseer’s name.
- There is at least one question on every exam, and a question appears on at most one exam. A question on an exam may be answered by any number of students, and a student may answer multiple questions on an exam.
**Points for question 1(a) were assigned according to the following rubric:**
<table>
<thead>
<tr>
<th>Points</th>
<th>Criteria</th>
</tr>
</thead>
<tbody>
<tr>
<td>+1</td>
<td>“name” is underlined with a dotted line and connected to “Overseer” with a regular line.</td>
</tr>
<tr>
<td>+1</td>
<td>“Overseer” and “has” are bolded, and there is a bold arrow from “Overseer” to “has”.</td>
</tr>
<tr>
<td>+1</td>
<td>“Exam” and “has” are connected with a bold line</td>
</tr>
<tr>
<td>+1</td>
<td>“Exam” and “takes” are connected with a bold line</td>
</tr>
<tr>
<td>+1</td>
<td>“Exam” and “on” are connected with a bold line</td>
</tr>
<tr>
<td>+1</td>
<td>“course” and “semester” are underlined with solid lines. “Exam” and “course” are connected with a solid line, and “Exam” and “semester” are connected with a solid line.</td>
</tr>
<tr>
<td>+1</td>
<td>“Question” is connected to “on” with a regular arrow</td>
</tr>
<tr>
<td>+1</td>
<td>“date” is connected to “takes” (preferred) or to “Exam” with a regular line, and is not underlined.</td>
</tr>
<tr>
<td>+1</td>
<td>“takes” is connected to “Student” with a regular line</td>
</tr>
<tr>
<td>+1</td>
<td>“sid” is underlined and connected to “Student” with a regular line</td>
</tr>
<tr>
<td>+1</td>
<td>“Student” is connected to “answers” with either a regular or a bold line</td>
</tr>
<tr>
<td>+1</td>
<td>“answers” is either connected to “Exam”, “Question”, or to an aggregate surrounding “Exam”, “on”, and “Question”, with a regular line.</td>
</tr>
<tr>
<td>-1</td>
<td>Extraneous markings were included, such as bolding a relation other than Overseer or underlining a relation.</td>
</tr>
</tbody>
</table>
• [2 points] Consider the following E-R diagram, which is a fragment of a simple board game schema that captures the legal moves available in each position on a board:

We want to translate “Moves” into an SQL table and maintain the constraints in the ER diagram. Correctly complete the SQL statement below (note that “--” begins a comment in SQL):
```sql
CREATE TABLE Moves
(Direction CHAR(2),  -- one of N/S/E/W/NE/NW/SE/SW
 LeftRight CHAR,     -- one of A-P
 UpDown INTEGER,     -- one of 1-16
 PRIMARY KEY (Direction, LeftRight, UpDown),          -- (1 pt)
 FOREIGN KEY (LeftRight, UpDown) REFERENCES Position  -- (1/2 pt)
   ON DELETE CASCADE                                  -- (1/2 pt)
);
```
• [2 points] The figure with ovals and circles below illustrates a relationship set between two entity sets. Each circle is an entity or relationship, and the edges connect them. The ER diagram right below that figure needs to be completed with constraints that are consistent with the edges in the diagram. For each of the following, circle the correct answer: (1/2 pt each)
- A bold edge between E1 and R is:
- A) needed
- B) optional
- C) disallowed
- An arrowhead between E1 and R is:
- A) needed
- B) optional
- C) disallowed
- A bold edge between E2 and R is:
- A) needed
- B) optional
- C) disallowed
- An arrowhead between E2 and R is:
- A) needed
- B) optional
- C) disallowed
2. **Query Processing [15 points]**
Consider the following relations:
```sql
CREATE TABLE Employee (SSN integer, DepartmentID integer,
  PRIMARY KEY (SSN),
  FOREIGN KEY (DepartmentID) REFERENCES Department);
-- 100,000 tuples; 1100 pages
```
```sql
CREATE TABLE Department (DepartmentID integer, Name char(40),
  PRIMARY KEY (DepartmentID));
-- 1000 tuples; 50 pages
```
And consider the following join query:
```sql
SELECT SSN, DepartmentID, Name
FROM Employee, Department
WHERE Employee.DepartmentID = Department.DepartmentID
```
Assume there are no indexes available, and both relations are in arbitrary order on disk. Assume that we use the refinement for sort-merge join that joins during the final merge phase. However, assume that our implementation of hash join is simple: it cannot perform recursive partitioning, and does not perform hybrid hash join. The optimizer will not choose hash join if it would require recursive partitioning.
For each of these questions, 2 points was given for the correct algorithm and 3 points for the correct cost. If the algorithm was incorrect but the cost was correct for the given algorithm, 1 point was given. One point was deducted if the name of an algorithm wasn’t quite right. If the cost calculation contained a few minor errors, or was incomplete, 1 or 2 points were deducted.
- **[5 points]** Assume you have B=3 memory buffers, enough to hold 3 disk pages in memory at once. (Remember that one buffer must be used to buffer the output of a join.) What is the best join algorithm to compute the result of this query, and what is its cost, measured in the number of I/Os (i.e., pages requested)? Do not include the cost of writing the final output, and you may ignore buffer pool effects in this question.
- **Sort-Merge Join:** Supposing number of passes to sort Employee is \( P_E \) and number of passes to sort Department is \( P_D \), cost is \( (2P_E - 1)|Employee| + (2P_D - 1)|Department| \) with the refinement and \( (2P_E + 1)|Employee| + (2P_D + 1)|Department| \) without the refinement. Either answer was accepted, since the problem asked you to use the refinement but it’s not technically possible in this case (not enough buffers).
In this case \( P_E = 10 \) (at the beginning of each pass there are 1100, 367, 184, 92, 46, 23, 12, 6, 3, 2 sorted runs). Similarly \( P_D = 6 \) (runs are 50, 17, 9, 5, 3, 2). Credit was given if you were off by one in either case. Consequently the following are all valid answers for cost in this problem:
\[ 19150, 19250, 19350, 21350, 21450, 21550, 21650, 23550, 23650, 23750, 23850, 25850, 25950, 26050 \]
(The actual answer is 23750)
- **Hash Join:** Not applicable, since \( B^2 = 9 < 50 = \min(|Employee|, |Department|) \). Could only be done with recursive partitioning, which was excluded.
- **Doubly-nested loop join:** cost is \( \text{NumTuples(Department)} \times |Employee| + |Department| = (1000)(1100) + 50 = 1100050 > 23750 \)
- **Page-oriented doubly-nested loop join:** cost is \( |Department| \times |Employee| + |Department| = (50)(1100) + 50 = 55050 > 23750 \)
- **Block nested loops join:** \( B - 2 = 1 \), so same as page-oriented doubly-nested loop join.
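The run counts and the 23750 total above follow the usual external-sort recurrence: pass 0 produces ceil(N/B) sorted runs, and each later pass merges B-1 runs at a time. A short sketch (function name is ours, not the exam's) reproduces the arithmetic:

```python
import math

def sort_passes(pages, B):
    """Passes for a B-buffer external merge sort: pass 0 makes
    ceil(pages/B) runs; each later pass merges B-1 runs at a time."""
    runs = math.ceil(pages / B)
    passes = 1
    while runs > 1:
        runs = math.ceil(runs / (B - 1))
        passes += 1
    return passes

p_e = sort_passes(1100, 3)  # passes to sort Employee -> 10
p_d = sort_passes(50, 3)    # passes to sort Department -> 6
# sort-merge join cost without the final-merge refinement:
print((2 * p_e + 1) * 1100 + (2 * p_d + 1) * 50)  # 23750
```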
• [5 points] Suppose that instead of having 3 memory buffers we have B=11 memory buffers. What is the best join algorithm now, and what is its cost (again, without writing final output or considering buffer hits)?
• **Hash Join:** Since $B^2 = 121 > 50 = \min(|Employee|, |Department|)$, we can use hash join in this problem. We partition Employee into 10 partitions, then partition Department into 10 partitions (average size 5), then load each Department partition into memory while streaming through the corresponding larger Employee partition. Since there is no recursive partitioning, total cost is $3(|Department| + |Employee|) = 3(1100 + 50) = 3450$.
• **Sort-Merge Join:** Supposing number of passes to sort Employee is $P_E$ and number of passes to sort Department is $P_D$, cost is $(2P_E - 1)|Employee| + (2P_D - 1)|Department|$ with the refinement and $(2P_E + 1)|Employee| + (2P_D + 1)|Department|$ without the refinement. Either answer was accepted, since the problem asked you to use the refinement but it’s not technically possible in this case (not enough buffers). Another option was to use the refinement on the Department relation but not the Employee relation, for a cost of $(2P_E + 1)|Employee| + (2P_D - 1)|Department|$. In this case $P_E = 3$ (at the beginning of each pass there are 1100, 100, 10 sorted runs) and $P_D = 2$ (runs are 50, 5). The following are valid answers for cost: 5650, 7850, 7950. These are more than the cost of hash join, but were still accepted because that solution was not known at the time of grading.
• **Block nested loops join:** Cost is $(\text{ceiling}(|Department|/(B-2)) * |Employee|) + |Department| = (6)(1100) + 50 = 6650$. This is faster than Sort-Merge Join (since the refinement giving 5650 cost can’t actually be used), but slower than Hash Join.
• **Doubly-nested loop join, page-oriented doubly-nested loop join:** costs same as in first part above, both far more than the above options.
• [5 points] Suppose we raise the number of memory buffers to B=52, and increase the size of the Departments relation to 500 pages. What is the best join algorithm now, and what is its cost (no writing final output, ignoring buffer hits)?
• **Hash Join:** Since $B^2 = 2704 > 500 = \min(|Employee|, |Department|)$, we can use hash join in this problem. Since there is no recursive partitioning, total cost is $3(|Department| + |Employee|) = 3(1100 + 500) = 4800$.
• **Sort-Merge Join:** Number of buffers is now large enough to sort each relation in two passes, with enough room to do the refinement for both relations $(1100/52 + 500/52 = 32 < 52)$. So cost is $3(1100 + 500) = 4800$, same as Hash Join.
• **Doubly-nested loop join:** cost is $\text{NumTuples}(Department) \times |Employee| + |Department|$, or about $(10000)(1100) + 500 = 11000500 > 4800$
• **Page-oriented doubly-nested loop join:** cost is $|Department| \times |Employee| + |Department| = (500)(1100) + 500 = 550500 > 4800$
• **Block nested loops join:** Cost is $\lceil |Department|/(B-2) \rceil \times |Employee| + |Department| = \lceil 500/(52-2) \rceil (1100) + 500 = 11500 > 4800$.
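The per-algorithm formulas used in this part can be collected into a tiny cost calculator (a sketch; the `3(R+S)` hash-join cost assumes one partitioning pass plus the probe, as in the question, and the function names are ours):

```python
import math

def hash_join_cost(R, S):
    """Simple hash join, no recursive partitioning: read+write to
    partition both inputs, then read both back to probe: 3(R + S)."""
    return 3 * (R + S)

def bnl_cost(outer, inner, B):
    """Block nested loops: outer scanned in chunks of B-2 pages."""
    return math.ceil(outer / (B - 2)) * inner + outer

# B = 52, Department = 500 pages, Employee = 1100 pages
print(hash_join_cost(1100, 500))  # 4800
print(bnl_cost(500, 1100, 52))    # 11500
```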
3. **Files and Buffer Management [10 points]**
- [3 points] Consider a buffer pool of 3 frames, and a heap file of 100 sequential pages with pageIDs from 1 to 100. Assume we scan the heap file **twice** from start to finish. Starting with an empty buffer pool, using an MRU replacement strategy, which pages will be in the buffer pool after the second scan?
1, 99, 100. *Initially the pool fills with 1, 2, 3. Then the 3 position is overwritten until the end of the first pass, at which point it is 1, 2, 100. During the second pass, 1 and 2 are hits, then the 2 position is overwritten until 99 is reached. Finally 100 is a hit.*
- [1 point] What is the hit rate (#hits/#requests) in the scenario of part (a)?
3/200 or 1.5%. *Only 1, 2, and 100 are hit, once each.*
- [3 points] To save a random I/O, your friend suggests that we scan the file once from pageID 1 to pageID 100, and then switch into reverse and scan from pageID 100 back down to 1. Again starting with an empty buffer pool and using MRU, what pages will be in memory at the end of this scan?
1, 2, 3. *Initially the pool fills with 1, 2, 3. Then the 3 position is overwritten until the end of the first pass, at which point it is 1, 2, 100. During the second pass, 100 is a hit, then the 100 position is overwritten until 3 is reached. Finally 1 and 2 are hits.*
- [1 point] What is the hit rate (#hits/#requests) in the scenario of part (c)?
3/200 or 1.5%. *Only 100, 2, and 1 are hit, once each.*
- [1 point] Consider a sorted file organization as we studied in class, with N tightly packed pages of records. Write an expression for expected (average case) number of I/O requests for an equality lookup in the file.
$\log_2 N$. *This can be achieved with binary search on the pages, followed by searching the page containing the record in memory.*
- [1 point] Again using MRU and starting with an empty buffer pool, what is the expected hit rate in the buffer pool for the scenario of part (e)?
0%. *No page is read more than once.*
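The MRU answers in parts (a) through (d) can be checked with a small simulation (a sketch; `mru_scan` is our name, and the pool keeps the most recently used page at the end of a list):

```python
def mru_scan(requests, frames=3):
    """Simulate an MRU buffer pool; returns (#hits, final pool contents)."""
    pool = []  # most recently used page kept at the end
    hits = 0
    for p in requests:
        if p in pool:
            hits += 1
            pool.remove(p)
            pool.append(p)  # a hit makes the page most recently used
        else:
            if len(pool) == frames:
                pool.pop()  # evict the most recently used page
            pool.append(p)
    return hits, sorted(pool)

two_scans = list(range(1, 101)) * 2
up_then_down = list(range(1, 101)) + list(range(100, 0, -1))
print(mru_scan(two_scans))     # (3, [1, 99, 100])
print(mru_scan(up_then_down))  # (3, [1, 2, 3])
```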
4. **B+-Trees [15 points]**
- [5 points] The B+-tree drawn below has order 3 (i.e. max 6 pointers per internal node), and contains a number of errors. Circle the errors and draw a new correct B+-tree over the same data entries (the entries in the leaves).
There are two errors above and one non-error:
- (1 pt) The internal node containing only “4” is underfull. It has 2 pointers, less than the minimum of 3 (which is half the maximum of 6).
- (1 pt) The value “7” is in the left subtree of the root, but should be in the right subtree, since the key in the root node is 6 and 7 > 6.
- (1 pt) The internal node containing “11 15” is not underfull, because it contains 3 pointers to child nodes, which is exactly half the maximum (6 pointers). Additionally, the presence of the key “11” which is not present in any leaf is not an error (internal nodes may contain values which were once present in leaves but have since been deleted). This point was deducted if this internal node was marked erroneous.
Additionally, the root node is not underfull because the root node is specially permitted to be less than half full. Following is a correct new B+-tree over the same data entries:
This is the unique correct B+-tree with the exact same leaf nodes. It was also valid to compact the data values into a smaller number of leaf nodes, if the values in the root are adjusted accordingly. It is not possible to use more than one internal node.
• [5 points] Consider the B+-tree with order 2 (max 4 pointers per node) drawn below. Draw what the tree would look like after inserting 9*
When 9 is inserted, it goes into the node containing “5 6 7 8”. However, this node is now overfull and must be split into two nodes “5 6 7” and “8 9”. The value 7 is added to the parent internal node, which is now in turn overfull, so it is split into “5 7” and “10 12”. Finally, a new root node is created and “7” is moved to the root node.
Alternatively, we can make all our splits with the smaller side on the left. In this case, the overfull leaf is split as “5 6” and “7 8 9”, 8 is added to the parent node, and “10” is moved to the new root node instead of the new value. Both trees are shown below.
Points were given as follows:
| Points | Criterion |
|---|---|
| +1 | Result is valid B+-tree. No node is underfull and no keys in internal nodes are incorrect. |
| +1 | Leaf split was performed correctly. |
| +1 | Internal node split and creation of new root node was performed correctly. |
| +1 | Always split with either the smaller side on the left or the smaller side on the right; do not mix both strategies. |
• [5 points] Assume we have a B+-tree with room for 30 bytes of data on each internal page: so half-full at 15 bytes. Assume that a pageID (a “pointer” in the tree) takes up 4 bytes, and a character takes 1 byte. Consider the 1st B+-tree below. Draw a correct B+-tree for this data that uses the leaf level provided in the 2nd picture below, but employs prefix key compression above.
New root node uses a total of $4(4) + 2 + 3 + 2 = 23 < 30$ bytes (assuming string prefixes are null-terminated or length-prefixed with a single byte), so it all fits in one node.
Points were given as follows:
| Points | Criterion |
|---|---|
| +1 | Result is valid B+-tree. No node is underfull and no keys in internal nodes are incorrect. |
| +1 | Some keys were compressed (shortened) by at least one character |
| +1 | All keys in internal nodes were compressed (shortened) as short as possible |
| +2 | Final tree has only two levels, root and leaves |
Deductive Databases and XSB
Advanced Database Project for IT4BI Masters 2013-2015
12/29/2013
Kofi Manful
# Table of Contents
- Introduction
- Background
  - Building Blocks
- Advantages
- Disadvantages
- Uses
- XSB
  - Intro
  - Sample XSB Programs
    - Fibonacci
    - Shortest Path
    - Connection to Oracle DB
    - Puzzle/Problem Solving
- Conclusion
- Sources
**Introduction**
Relational databases dominate the database market today. This can be attributed to their simple model and their efficient performance of everyday tasks. However, deductive databases offer an interesting alternative to relational databases in areas such as recursion and information retrieval. In this paper, we will give a brief background and description of deductive databases and some of their main functionality. We will then illustrate these with an implementation called XSB Prolog, an open-source deductive database being developed by several institutions (Stony Brook University, XSB, Inc., Universidade Nova de Lisboa, among others).
**Background**
What we now call deductive databases started from research in two fields, namely Artificial Intelligence and Databases. While the former concentrated more on topics such as knowledge representation and language processing and dealt mainly with main memory, the latter focused more on the efficient storage and retrieval of data on secondary memory and the associated issues of data security and concurrent access. In the 1970s, deductive databases became a topic of their own, and research in the following years has produced some foundational theories (Grant & Minker, 1989) (Zaniolo, 1990), some of which will be mentioned in this paper.
**Building Blocks**
Deductive databases are the combination of logic programming and database systems. This means that not only can they store data like traditional relational databases, but they make logical connections between the data provided to generate results for other queries. Information in a deductive database is specified through rules, predicates, facts and goals such as the ones below:
1. Employee(Name, Position, Salary, ReportsTo)
2. Employee(‘Kwasi Mensah’, ‘Manager’, 150000, ‘Ama Owusu’)
3. TopEarners(Name) ← Employee(Name, Position, Salary, ReportsTo), Salary>100000
4. Superior(Boss, Subordinate) ← Employee(Subordinate, Position, Salary, Boss).
Superior(Boss, Subordinate) ← Employee(EmpName, Position, Salary, Boss),
Superior(EmpName, Subordinate)
5. ?TopEarners(‘Kwasi Mensah’)
6. ?TopEarners(Name)
The example we see in number 1 is a specification called a predicate. It specifies an entity and its attributes. If we were to draw a parallel with relational databases, we would call this the schema of the entity. The number of arguments/attributes the predicate takes is referred to as the arity; in this example, the arity of Employee is 4.
When we have a predicate in which all the attributes have taken on actual values, such as in example 2, we call it a fact. A fact specifies an instance of an entity and the values it contains. The equivalent in a relational database would be a tuple in a table. Therefore we can think of an Employee table as being made up of many facts.
Example 3 illustrates what is called a rule. A rule is made up of a head and a body separated by the ← symbol. One can think of the head of a rule as one would think of a view in a relational database. The body of the rule is made up of goals. These goals are separated by commas (,) which can be interpreted as the AND operator. The goals specify the criteria used to define the rule. In this case, the rule states that TopEarners can be defined by every tuple in the Employee “table” which has a salary greater than 100,000.
Example 4 is also a rule but in this case the head of the rule is defined twice. Because the two definitions are connected by a period (.), this means any tuples that meet either the first set of goals or the second will be in the head of the rule. This kind of definition allows us to express transitive relationships more easily than a relational database would. In this example, we are specifying that an employee is a superior to another if the first employee is either the direct boss of the second employee, or if the first employee is the direct boss of someone who is a superior to the second employee.
Examples 5 and 6 show how queries are written when interacting with deductive databases. In example 5, because a constant has been specified as a parameter, the query will return true or false to indicate if that constant meets the requirements specified by that predicate (i.e. if there is a Kwasi Mensah who is an employee who earns more than 100,000). Example 6 has a variable as an argument so the result of the query will be all the employees who meet the requirements specified by the TopEarner rule.
**Advantages**
Deductive databases provide a few advantages over relational databases in a few areas. One of the major ones is the ability to do recursion and the ease with which such recursion can be defined. Although some commercial databases do allow you to define recursive relationships, this has only been a recent development. The SQL standard only added this in 1999 and it took a few years for some of the major relational databases to implement it (MSSQL, PostgreSQL and a few others as of 2011).
Secondly the syntax and structure for doing this with deductive databases is so much more intuitive and simple than the SQL equivalent. Compare example 4 above which defines the Superior relationship with the SQL equivalent below:
```sql
CREATE RECURSIVE VIEW Subordinates(Superior, Subordinate) AS
SELECT ReportsTo, Name
FROM Employee
UNION
SELECT sub.Superior, emp.Name
FROM Subordinates sub, Employee emp
WHERE sub.Subordinate = emp.ReportsTo;
```
The deductive database syntax is clearly simpler to construct and much easier to maintain.
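The recursive Superior rule is essentially a transitive closure, and its semantics can be illustrated with a minimal naive-fixpoint sketch in Python (the boss/subordinate names here are hypothetical, not from the paper):

```python
def superior(direct):
    """Transitive closure of (boss, subordinate) pairs, mirroring the
    two-clause Superior rule: direct boss, or boss of a superior."""
    closure = set(direct)
    while True:
        derived = {(boss, sub2)
                   for (boss, sub1) in closure
                   for (mid, sub2) in direct
                   if mid == sub1}
        if derived <= closure:  # fixpoint reached: nothing new derived
            return closure
        closure |= derived

# hypothetical chain of command: ama is kofi's boss, kofi is yaw's boss
chain = {("ama", "kofi"), ("kofi", "yaw")}
print(("ama", "yaw") in superior(chain))  # True
```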
**Disadvantages**
One big disadvantage of deductive database technology is the lack of widespread development and support. There are very few operational deductive databases on the market and most of them are still experimental. Because there isn’t as large a community around deductive databases as there is around relational databases, there really isn’t any system that has been extensively used. Therefore it would be difficult for any company/individual to switch to a deductive database from a relational one. Even if one was able to do this, the significant lack of extensive documentation is also another obstacle.
Another weakness is the inability to deal with unstructured data. As the data in a deductive database is specified using predicates and rules and goals, there does have to be a clear relationship between the information being stored in the database. Without this it is very difficult to store data in a deductive database.
**Uses**
Deductive databases and their concepts are used in artificial intelligence systems and expert systems. Their ability to make connections between pieces of data and the manner in which information is specified using rules makes deductive databases ideal for this. Here are some real world applications below:
The Cyc System:- An attempt to create a program with enough understanding to learn from books
PACADE:- Protein Atomic Coordinate Analyzer with Deductive Engine, a system for searching for and analyzing proteins
XSB:- a deductive database which is used in other projects such as Semantic Inferencing on Large Knowledge (SILK), OpenSHORE (a hypertext repository that stores data about and described by documents) among others.
**XSB**
**Intro**
XSB is a logic programming and deductive database system developed by research teams from the Computer Science Department of Stony Brook University, Universidade Nova de Lisboa, XSB, Inc, Katholieke Universiteit Leuven, and Uppsala Universitet. It runs on UNIX and Windows, and the code is open source, allowing anyone to contribute to its development. The engine is written in C, so one needs a C compiler to build it before being able to run it. The system itself is rather lightweight and able to run on a computer of average specifications.
The system is operated through a command-line interface. Documentation on the commands needed to set up and run XSB is included in the package that can be downloaded from the source site. The package also includes some example databases and programs. We shall highlight some of them here to demonstrate some of the features of XSB.
**Sample XSB Programs**
**Fibonacci**
We start off with a simple program that calculates the \( n \)th term in the Fibonacci series. Here is the XSB code below:
```prolog
demo :-
    write('Enter N : '), read(N), nl,
    fib(N, Fib),
    write('Fib of '), write(N), write(' is '), writeln(Fib).

fib(N, X) :- fib0(N, X, _).

/* fib0(+Arg, -Result, -MinusOneRes) */
fib0(0, 1, 0).
fib0(1, 1, 1).
fib0(N, X, X1) :-
    N > 1,
    N2 is N - 2,
    fib0(N2, X2, X3),
    X1 is X2 + X3,
    X is X1 + X2.
```
In this example we see the rule “demo” being defined as a small script that reads in a value and uses it to calculate the value of the Fibonacci number at that position before outputting it to the user. The Fibonacci function itself is defined recursively beneath that. Shown below is the output of running this program. For convenience sake, the XSB code was loaded as a file.
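The fib0 helper threads the previous Fibonacci value through the recursion, which keeps the computation linear rather than exponential. The same pair-passing scheme can be sketched in Python (with fib(0) = fib(1) = 1, as in the XSB program):

```python
def fib0(n):
    """Return (fib(n), fib(n-1)), mirroring fib0(N, X, X1) above."""
    if n == 0:
        return (1, 0)        # fib0(0, 1, 0)
    if n == 1:
        return (1, 1)        # fib0(1, 1, 1)
    x2, x3 = fib0(n - 2)     # fib(n-2), fib(n-3)
    x1 = x2 + x3             # fib(n-1)
    return (x1 + x2, x1)     # fib(n)

print(fib0(17)[0])  # 2584, matching the session transcript
```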
**Shortest Path**
The well-known shortest path problem can also be solved using XSB. We show below the graph structure on which the code is based.

The following code will find the shortest path from “a” to every other node using the ability of deductive databases to discover transitive relationships between elements. Once again a recursive definition is used for the shortest path rule. Below that, the facts which make up the database are stated. Further below is the result of querying this database.
```prolog
demo :-
    sp(a,X,C),
    write('Best cost so far from a to '), write(X), write(' is '), writeln(C),
    fail.
```
The session transcript below shows XSB being started, the Fibonacci program being loaded, and its demo being run:

```
C:\Users\Jazz\Dropbox\IT4BI\Advanced Databases\XSB\examples>C:\Users\Jazz\Dropbox\IT4BI\Advanced Databases\XSB\config\x64-pc-windows\bin\xsb.exe
[xsb_configuration loaded]
[sysinitrc loaded]
XSB Version 3.4.0 (Soy Milk) of May 1, 2013
[x64-pc-windows; mode: debug; engine: slg-wam; scheduling: local]
[Patch date: 2013/05/02 17:42:32]
?- [fib].
[fib loaded]
yes
?- demo.
Enter N : 17.
Fib of 17 is 2584
yes
?-
```
```prolog
demo.

sp(X,Y,C) :-
    dist(X,Y,C),
    none_better(X,Y,C).
sp(X,Y,C) :-
    sp(X,Z,C1),
    none_better(X,Z,C1),
    dist(Z,Y,C2),
    C is C1+C2,
    none_better(X,Y,C).

none_better(X,Y,C) :-
    get_calls_for_table(sp(X,_,_),Call),
    subsumes_chk(Call,sp(X,Y,_)),
    !,
    \+ ( get_returns_for_call(Call,Ret),
         Ret = sp(X,Y,C1),
         C1 < C
       ).

dist(a,d,2).
dist(a,b,5).
dist(a,c,3).
dist(c,b,1).
dist(b,e,3).
dist(b,d,4).
dist(e,d,2).
dist(d,b,1).
```
```
?- demo.
Best cost so far from a to e is 6
Best cost so far from a to c is 3
Best cost so far from a to b is 3
Best cost so far from a to d is 5
Best cost so far from a to d is 2
```
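The final costs reported by sp/3 can be cross-checked with an ordinary shortest-path computation over the same dist/3 facts (a sketch using Dijkstra's algorithm; the `edges` dict simply transcribes the facts):

```python
import heapq

# the dist/3 facts as a weighted adjacency list
edges = {
    "a": [("d", 2), ("b", 5), ("c", 3)],
    "c": [("b", 1)],
    "b": [("e", 3), ("d", 4)],
    "e": [("d", 2)],
    "d": [("b", 1)],
}

def dijkstra(src):
    """Cheapest cost from src to every reachable node."""
    best = {src: 0}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in edges.get(node, []):
            if cost + w < best.get(nxt, float("inf")):
                best[nxt] = cost + w
                heapq.heappush(heap, (cost + w, nxt))
    return best

print(sorted(dijkstra("a").items()))
# [('a', 0), ('b', 3), ('c', 3), ('d', 2), ('e', 6)]
```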
**Connection to Oracle DB**
XSB is also capable of connecting to an Oracle database and performing transactions to retrieve or store data. Some of the commands used to do that are listed below:
- `db_open()` - uses the values given as parameters to connect to the Oracle database.
- `db_sql()` - executes whatever SQL code is passed as a parameter.
- `db_create_table()` - a command for creating a table in the Oracle database. Instead of writing the whole SQL create table syntax using the `db_sql()` command, one can just specify the fields and data types and this command will create the table.
- `db_transaction()` - used to start, commit or abort Oracle transactions depending on the parameter that is passed.
- `db_show_schema()` - used to display the schema of a table in the Oracle database.
- `db_import()` - a command for loading content from an Oracle database into a set of facts in XSB.
**Puzzle/Problem Solving**
Since XSB is a deductive database, it can be used to deduce information when presented with facts. In the example below, by cleverly setting up goals and rules, we use XSB to find the solution to the following puzzle:
In a street, there are five houses. Each is painted with a different colour. In each house lives somebody coming from a different country. Each has a favorite pet, a favorite drink and a favorite cigarette brand. We know the following facts:
- The English lives in the red house. The dog belongs to the Spanish. Coffee is drunk in the green house. The Ukrainian drinks tea. The green house is on the right of the white one. The Old Gold smoker breeds snails. The occupant of the yellow house smokes Kool. The occupant of the middle house drinks milk. The Norwegian lives in the first house on the left. The Chesterfield smoker lives beside the owner of the fox. The Kool smoker lives beside the owner of the horse. The Gitanes smoker drinks wine. The Japanese smokes Craven. The Norwegian lives beside the blue house.
**Question:** Who breeds the zebra? Who drinks water?
Shown below is the XSB code that states the problem in terms of rules and goals. As can be seen, it does require some ingenuity and experience to be able to structure the problem statement properly. However, once this is done, the solution is easy to deduce, as seen in the subsequent image.
```prolog
demo :- bagof( X, go3( X ), L ), write(L), nl.

houses_iter(0) :- !.
houses_iter(_) :- bagof( X, go3( X ), _ ), fail.
houses_iter(N) :- M is N-1, houses_iter(M).

%% memb/2
%%-------
memb( X, [_|Y] ) :- memb( X, Y ).
memb( X, [X|_] ).

%% testp/4 -- X and Y occur at the same position in the two lists
%%--------
testp( X, Y, [X|_], [Y|_] ).
testp( X, Y, [_|R], [_|S] ) :- testp( X, Y, R, S ).

testp_seq( X, Y, [X|_], [Y|_] ).
testp_seq( X, Y, [_|R], [_|S] ) :- testp_seq( X, Y, R, S ).

%% on_right/4 -- Y occurs immediately to the right of X
%%----------
on_right( X, Y, [X|_], [_,Y|_] ).
on_right( X, Y, [_|R], [_|S] ) :- on_right( X, Y, R, S ).

on_right_seq( X, Y, [X|_], [_,Y|_] ).
on_right_seq( X, Y, [_|R], [_|S] ) :- on_right_seq( X, Y, R, S ).

%% on_left/4 -- Y occurs immediately to the left of X
%%----------
on_left( X, Y, [_,X|_], [Y|_] ).
on_left( X, Y, [_|R], [_|S] ) :- on_left( X, Y, R, S ).

on_left_seq( X, Y, [_,X|_], [Y|_] ).
on_left_seq( X, Y, [_|R], [_|S] ) :- on_left_seq( X, Y, R, S ).

%% beside/4
%%---------
beside( X, Y, Xs, Ys ) :- on_right( X, Y, Xs, Ys ).
beside( X, Y, Xs, Ys ) :- on_left( X, Y, Xs, Ys ).
beside_seq( X, Y, Xs, Ys ) :- on_right_seq( X, Y, Xs, Ys ).
beside_seq( X, Y, Xs, Ys ) :- on_left_seq( X, Y, Xs, Ys ).

%% go3/1
%%------
go3( config( Countries, Colours, Animals, Drinks, Cigarettes ) ) :-
    Countries = [norway, _, _, _, _],                   % const 9
    Colours = [_, _, _, _, _],
    Animals = [_, _, _, _, _],
    Drinks = [_, _, milk, _, _],                        % const 8
    Cigarettes = [_, _, _, _, _],
    testp(england, red, Countries, Colours),            % const 1
    testp(japan, craven, Countries, Cigarettes),        % const 13
    beside(norway, blue, Countries, Colours),           % const 14
    on_right(white, green, Colours, Colours),           % const 5
    testp_seq(yellow, kool, Colours, Cigarettes),       % const 7
    testp_seq(span, dog, Countries, Animals),           % const 2
    testp_seq(old_gold, snail, Cigarettes, Animals),    % const 6
    testp_seq(gitane, wine, Cigarettes, Drinks),        % const 12
    testp_seq(ukr, tea, Countries, Drinks),             % const 4
    testp_seq(green, coffe, Colours, Drinks),           % const 3
    beside_seq(chesterfield, fox, Cigarettes, Animals), % const 10
    beside_seq(kool, horse, Cigarettes, Animals),       % const 11
    memb(zebra, Animals),                               % query 1
    memb(water, Drinks).                                % query 2
```
From the results it is clear that the Japanese breeds the zebra and the Norwegian drinks water.
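The same fourteen constraints can be cross-checked outside the deductive system. The following Python sketch (an illustrative brute-force search, not part of the original XSB program) enumerates assignments of attributes to the five houses and arrives at the same answer:

```python
from itertools import permutations

HOUSES = range(5)  # houses numbered 0..4, left to right

def right_of(x, y):
    """x is immediately to the right of y."""
    return x == y + 1

def beside(x, y):
    return abs(x - y) == 1

def solve():
    # Each tuple assigns a house position to each attribute value.
    for (eng, span, ukr, nor, jap) in permutations(HOUSES):
        if nor != 0:                                  # Norwegian in the first house
            continue
        for (red, green, white, yellow, blue) in permutations(HOUSES):
            if eng != red:                            # Englishman in the red house
                continue
            if not right_of(green, white):            # green right of white
                continue
            if not beside(nor, blue):                 # Norwegian beside the blue house
                continue
            for (coffee, tea, milk, wine, water) in permutations(HOUSES):
                if green != coffee or ukr != tea or milk != 2:
                    continue
                for (oldgold, kool, chest, gitane, craven) in permutations(HOUSES):
                    if yellow != kool or gitane != wine or jap != craven:
                        continue
                    for (dog, snail, fox, horse, zebra) in permutations(HOUSES):
                        if span != dog or oldgold != snail:
                            continue
                        if not beside(chest, fox) or not beside(kool, horse):
                            continue
                        owner = {eng: "english", span: "spanish", ukr: "ukrainian",
                                 nor: "norwegian", jap: "japanese"}
                        return owner[zebra], owner[water]

print(solve())  # ('japanese', 'norwegian')
```

The early `continue` filters prune the search aggressively, so despite five nested permutation loops the program finishes almost instantly.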
**Conclusion**
Deductive databases are an interesting alternative to relational databases for data storage and processing. Their structure and mode of data representation make them inherently better at expressing certain features (e.g. transitive closures) and calculating things like generalized aggregates.
There are a few challenges with deductive databases, though. First, most implementations are still experimental and not widely used enough to have built up a broad base of support resources. Second, setting up and managing a deductive database is generally a very technical task: for most deductive databases, some knowledge of a programming language is required.
In conclusion, for everyday relational database tasks, a deductive database might not be worth the effort. However for more specialized applications such as knowledge/expert systems and artificial intelligence systems, deductive databases make an excellent backbone on which to build.
Using the Internet of Things to Teach Good Software Engineering Practice to High School Students
Christine Julien
The Department of Electrical and Computer Engineering
The University of Texas at Austin
Austin, TX USA
E-mail: c.julien@utexas.edu
Abstract
This paper describes a course to introduce high school students to software engineering in practice using the Internet Of Things (IoT). IoT devices allow students to get quick, visible results without watering down technical aspects of programming and networking. The course has three broad goals: (1) to make software engineering fun and applicable, with the aim of recruiting traditionally underrepresented groups into computing; (2) to make young students begin to approach problems with a design mindset; and (3) to show students that computer science, generally, and software engineering, specifically, is about much more than programming. The course unfolds in three segments. The first is a whirlwind introduction to a subset of IoT technologies. Students complete a specific task (or set of tasks) using each technology. This segment culminates in a “do-it-yourself” project, in which the students implement a simple IoT application using their basic knowledge of the technologies. The course’s second segment introduces software engineering practices, again primarily via hands-on practical tutorials. In the third segment of the course, the students conceive of, design, and implement a project that uses the technologies introduced in the first segment, all while being attentive to the good software engineering practices acquired in the second segment. In addition to presenting the course curriculum, the paper also discusses a first offering of the course in a three-week summer intensive program in 2017, including assessments done to evaluate the curriculum.
1. Introduction
In recent years, computer science education has been pushed earlier and earlier; now high schoolers (even middle and elementary schoolers) are routinely exposed to programming (e.g., through Google’s Hour of Code or other activities) and engineering (e.g., through robotics competitions or maker events). However, the application of good software engineering principles remains the stuff of undergraduate and graduate education. Even academic research on software engineering education remains focused on these much later stages of a student’s education.
Lack of student interest in computer science education generally and software engineering specifically has received a significant amount of attention, with a particular focus on the paucity of students from traditionally underrepresented groups (e.g., women and minorities) in the field [15]. This research has demonstrated that lack of interest from students who otherwise excel in STEM (science, technology, engineering, and math) domains is due to a sense that the activities have a limited relevance [20] and a culturally inculcated “fear” that programming is inherently (too) difficult to learn [18]. However, research has also shown that exposure to hands-on computer science in the K-12 years can positively impact students’ perceptions of computer science in general [11].
In this paper, we report on our experiences in taking these lessons learned about teaching computer science and applying them to teaching software engineering principles to high school students. In particular, we investigate coupling a rigorous introduction to the fundamentals of software engineering with hands-on activities utilizing the Internet of Things (IoT). Software engineering has a reputation among students as uninteresting, dry, or even “soft”. The IoT, on the other hand, is tactile, hands-on by nature, seen as “hard” engineering, and engaging to today’s students because they can immediately relate to the applications they create.
Our concern is that early introduction of computer programming (i.e., in the K-12 years) without good software engineering practice (including a focus on requirements, design, testing, etc.) risks developing a generation of nearly capable students who are familiar with the general area of computer science but will easily become frustrated when faced with the task of building any system of real size and scale. Our hypothesis is that we can successfully couple the introduction of good software engineering practice with engaging and meaningful IoT application development activities that achieve the best of all worlds: capturing young students’ interest in computing, teaching fundamental programming and engineering concepts, and introducing the importance of good software engineering practice. This paper reports on our early efforts in developing such a course.
The course is a “flipped” one [8]. Almost all the content is delivered through online modules that students consume at their own pace. Class time is devoted entirely to hands-on team activities that demonstrate software engineering principles as applied to creating IoT applications. Prior work in software engineering education has promoted the use of such flipped
classrooms to deliver the Software Engineering Curriculum Model [13]. These efforts include the demonstration that a course on the fundamental elements of the software engineering process can be delivered using a flipped classroom approach [10]. However, this prior work promotes instructor-led discussions of the various phases of the software engineering process; our course instead focuses on intentional trial and error on the part of the students, followed by reflection.
The course has three major explicit goals: (1) to make basic software engineering fun and applicable, with the aim of engaging traditionally underrepresented groups in computing concepts; (2) to make young students approach problems with a design mindset, i.e., to start thinking about high-level designs before or as they start tinkering with things like breadboards and Raspberry Pis; and (3) to show students that computer science, generally, and software engineering, specifically, is about much more than programming (though programming is a substantial component).
The course encourages students to learn by (controlled) failure; learning from our failures is something of a mantra in the software engineering world [1]. The use of failure as a learning mechanism for software engineering was found to be an important element in game-based learning [22]. In our course, for instance, students are set up with tasks that are more prone to failure when good software engineering practices are not followed. Students will not be discouraged from just jumping in and trying things out; by allowing the students to fail (in a controlled way), the class then intentionally guides students in recovering from the failure in a way that is woven into the entire learning process.
2. The Course
The course unfolds in three segments. The first segment is a whirlwind introduction to a variety of IoT technologies. It is designed to allow the students to just “hack” at the different technologies. This segment does nothing to introduce any software engineering principles. Each technology is introduced as an isolated silo, with students given a specific task (or set of tasks) to complete using the technology. This segment culminates in a “do-it-yourself” project, in which, with little guidance, the students are asked to implement a simple IoT application using their basic knowledge of the related technologies. The second segment steps back from the IoT technologies to introduce principles and tools of software engineering. Periodically within this segment, these tools are explicitly related back to the do-it-yourself project or other tasks already performed. In the final segment of the course, the students are asked to conceive of, design, and implement a course project that utilizes at least three of the four technology components introduced in the first segment, all while being attentive to the good software engineering practices acquired in the second segment. In the remainder of this section, we briefly overview the curriculum from each segment. In the next section, we present some preliminary assessments used during the first offering of the course in a three-week summer intensive program offered in 2017. We also capture some of our initial insights.
2.1 Segment One: Technology Introduction
Android (Introduction to Java). Starting on the first day of the course, students are given a crash course in Java programming and asked to implement and deploy a basic Android application using a simple tutorial assignment. While the assignment launches directly into the Android framework (which is arguably unintuitive even for seasoned Java programmers [3]), the exercise is sprinkled with sidebars related to some fundamentals of object oriented programming. However, instead of coming away with a canonical “Hello World” application or a simple drawing canvas, the students have built an actual mobile application from scratch, which they deploy to an actual Android device. This task is very accessible to today’s young students, meeting them where they live while demystifying that little black box in their pockets.
Philips Hue (RESTful Programming in the Web). The second technology element of the course starts with a mini-lecture on the nature of RESTful programming for the web [9], with a brief introduction to web programming more generally (e.g., HTML and HTTP). The students then perform a two-step task with a set of smart light bulbs1. First, the students directly issue RESTful commands to an actual light using a web interface. Second, the students augment an existing Android application that connects to and controls the lights to add random colors to the lights. For extra credit, students also add slider bars to control hue, saturation, and brightness, or they write code to detect a shake of the phone to randomly change the light.
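To give a flavor of what this RESTful task looks like in code, here is a sketch in Python rather than the course's Java. The bridge address and API key below are placeholders, and the payload fields follow the Hue bridge's documented state attributes:

```python
import json
import random
import urllib.request

BRIDGE_IP = "192.168.1.2"   # placeholder: the Hue bridge's LAN address
API_KEY = "newdeveloper"    # placeholder: a key registered on the bridge

def random_color_state():
    """Build a 'state' payload with a random hue (the bridge accepts
    hue 0-65535, saturation and brightness 0-254)."""
    return {"on": True, "hue": random.randint(0, 65535), "sat": 254, "bri": 254}

def set_light(light_id, state):
    """PUT the new state to one bulb via the bridge's RESTful interface."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{light_id}/state"
    req = urllib.request.Request(
        url, data=json.dumps(state).encode("utf-8"), method="PUT",
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # requires a reachable bridge
        return resp.read()

# e.g. set_light(1, random_color_state()) would randomize light 1.
```

The point the exercise drives home is that a smart bulb is just a resource addressed by a URL: changing the light is nothing more than an HTTP PUT with a small JSON body.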
Introduction to Sensing (Breadboarding and the Raspberry Pi). The third technology component takes a significant step away from the traditional high-level abstractions of software engineering and delves deep into low-level sensing. Students learn about breadboarding, some simple circuits, and connecting this all to a Raspberry Pi through general purpose input/output pins. At this point, the course delves into some fundamentals of electrical engineering, with some brief lessons about circuits, resistors, capacitors, etc. The students start to see what goes into making new “things” that can be connected to their high-level application. Students create a temperature sensor, a motion sensor, and a light sensor. They experience firsthand the intricate debugging required for these hardware components. On the Raspberry Pi, students are also introduced to a second programming language (Python), where the course explicitly delivers the lesson in choosing an appropriate programming language for a given task.
Communication (Bluetooth Sockets). The final technology component introduces the students to communication via low-level sockets, specifically utilizing the Bluetooth technology available on both the Raspberry Pi and the Android device to enable information to flow between the two in both directions.
---
1 https://www2.meethue.com/en-us
Proceedings of the 2018 ASEE Gulf-Southwest Section Annual Conference
The University of Texas at Austin
April 4-6, 2018
That is, information flows both from the Raspberry Pi to the Android device and from the Android device to the Raspberry Pi. In addition to introducing Bluetooth as a technology, students also see, for the first time, Threads and Exceptions in the Java programming language. Both are key high-level programming features. Threads are widely employed computing abstractions that allow multiple executing entities to co-exist. In this segment of the course, the students use a separate thread within an Android application to handle each communication request. Exceptions allow programs to reactively respond to abnormal conditions. In this course component, the students must specify their program’s behavior in response to a communication channel breaking (e.g., because of a Bluetooth error). Again, the course tutorial for this component uses sidebars to introduce the fundamentals before the concepts are employed directly for an Android app to request and receive sensor data to be displayed on the screen. Interestingly, this technology element is by far the most daunting from an expert’s perspective, but the students had no preconceptions about how hard it should be and had no more trouble with it than with the others.
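The thread-per-request structure can be sketched in a few lines. This Python sketch substitutes an in-process socket pair for the Bluetooth channel (the course's actual code is Java on Android), but the shape is the same: one worker thread per request, with exception handling covering a broken channel:

```python
import socket
import threading

def handle(conn):
    """Serve one request on its own thread; the except clause is the
    'channel broke' path the students must handle explicitly."""
    try:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)
    except OSError:
        pass               # broken channel: recover instead of crashing
    finally:
        conn.close()

def serve_in_thread(conn):
    t = threading.Thread(target=handle, args=(conn,))
    t.start()
    return t

# Demo: an in-process socket pair stands in for the Bluetooth link.
app_side, pi_side = socket.socketpair()
worker = serve_in_thread(pi_side)
app_side.sendall(b"temp?")
reply = app_side.recv(1024)
worker.join()
app_side.close()
print(reply)  # b'echo:temp?'
```

Running the receive loop on a separate thread is exactly why the Android UI stays responsive while a sensor request is in flight.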
Do-It-Yourself Project. Finally, to demonstrate to the students how these technologies connect, the final task in the first segment asks the students to create a circuit with an LED that is controlled by the Raspberry Pi. However, the Raspberry Pi should toggle the LED only when a user presses a button on the Android device. While all the previous segments were delivered in tutorial format (where specific code and tasks were mostly given to the students), in this mini project, the students are expected to figure out how to put the entire thing together end-to-end on their own. In addition to demonstrating that the final product “works”, the students must submit a “Design document” in response to the following prompt:
We haven't learned (yet) about design documents and how they communicate the details of a design. However, let's give it our best shot anyway. Create a single page document describing the design underlying your assignment. Think carefully about the following points:
Audience: assume one of your classmates is your audience. You can assume a basic working knowledge of Android programming, Python programming, and connecting to the RPi through the GPIO pins.
Functional blocks: what are the major functional blocks and how are they connected to each other? I want you to start trying to think in abstractions instead of in individual lines of code.
Testing: what tests did you perform and how do they ensure correctness of your project?
Stumbling points: if someone were to replicate your design, what things would you recommend they watch out for?
Resources: were there any key resources that were really helpful to completing the assignment?
This assignment serves as a sort of pre-test for the second segment, which, among other things, introduces good design practices. This assignment has multiple goals. First, the idea is to encourage students to think of design documentation as natural and intuitive. By asking the students to communicate their design without introducing particular techniques or diagram styles, this assignment makes the point that the goal of documenting design is to communicate on a natural level. Second, in the course of completing this design document after having implemented the project, the students become keenly aware of how a good, clear design can better guide the implementation phase. For instance, by thinking about the functional components of their project, students often quickly visualize alternative designs that would improve their project. By having to write down the tests they performed, students inevitably identify other tests they should have included. In summary, this assignment is an intentional segue into the second segment of the course.
2.2 Segment Two: Software Engineering Tools
The course’s first segment takes a “get it done” kind of mentality. Students engage in practices that are deemed to be abhorrent in the software engineering community (e.g., sharing code with a collaborator via email or chat). The course’s second segment highlights three of these practices and provides easy-entry tools and techniques to change them.
Version Control. The first software engineering tool the course introduces is version control. Version control systems allow software engineers to maintain a repository of artifacts related to the project, including the source code, documentation, tests, etc. The repository can be shared among collaborating developers, and it also serves as a backup of the project. Further, the history of the repository can be examined, and the current working version of the project can be “rolled back” to a previous version (e.g., to a previous version in which a major error did not occur). The course’s version control module starts with a mini lecture on the importance of version control generally (both from a “backup” and history perspective and in support of collaborative projects) then uses a simple tutorial to introduce the students to both git\(^2\) and github\(^3\). The course uses git because it is the most widely used version control system today and because it has a low barrier to entry. To ensure that the lesson sinks in, the students are asked to commit all their prior work (their Android projects and their sensor projects) to a git repository. To commit work on Android, the students use the version control tools built into the Android Studio Integrated Development Environment (IDE); to commit the sensor projects from the Raspberry Pi, the students instead use the command line from Linux. From this point forward in the class, all exchange of code between students and the instructors requires using version control (specifically, git), including submission of the remaining assignments.
2 https://git-scm.com/
3 https://github.com/
Software Design. This is the only course component that does not require interacting with any of the software or hardware introduced in the first segment. Instead, students are introduced to good design conceptually. The module introduces canonical design elements from the software engineering domain (e.g., abstraction, encapsulation, coupling, cohesion, etc.) [17]. We also introduce elements of common software engineering methodologies like Agile Software Development [14] and eXtreme Programming [5] that emphasize simplicity and flexibility in design specifications. We introduce particularly useful and expressive elements of the Unified Modeling Language (UML) [19], which is widely used in practice as the means to represent and communicate about software designs. As an exercise, students are asked to revisit their do-it-yourself exercise and document how it should have been designed, with modules of distinct functionality isolated from one another and interacting only through well-defined interfaces. The prompt for this assignment is:
Let’s revisit the Do-It-Yourself assignment where we took a crack at documenting a design without really knowing what we were doing. Briefly redo this assignment. Restructure your document (and your code!) to do (at least) the following:
- Explicitly provide the requirements, architecture, technical, and user documentation.
- Refactor your code to adhere to the Google Java style guide4 (for the Android code) and Python PEP 8 style guide5 (for the Python code).
- Provide either (1) a UML-style sequence diagram showing the sequence of behaviors upon the user clicking one of the app’s buttons or (2) a UML-style activity diagram showing the overall user interaction with the entire system.
Here, the students put into practice the skills they have learned conceptually, in the context of a project they implemented themselves. This post-hoc design documentation is better than none at all, but the goal is to prepare for the course’s third segment, in which the students must start with the design documentation.
Testing. The last module in the second segment of the course introduces software testing. We start by motivating the need for rigorous testing of software through the discussion of several colossal software failures [12], and then discuss the fundamentals of testing (from black-box testing [7] to white-box testing [16], and why both are important; from unit testing to regression testing) and important related concepts (e.g., test suites and coverage). To make these concepts more concrete, we then walk through specific tools for testing Android applications (the Android Testing Support Library6, a wrapper around JUnit [21] and Mockito [2]), tied to the Android applications the students have written and integrated with the Android Studio IDE. Finally, the module introduces the notion of test-driven development [6], in which a programmer writes the tests before the actual implementation.
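In the test-driven spirit this module teaches, tests are written first and pin down the behavior before any implementation exists. A small Python illustration: the temperature-to-hue helper below is hypothetical, loosely modeled on the student projects that map sensed temperature to a smart-light color, and the test case is the artifact that would be written first:

```python
import unittest

def hue_for_temperature(celsius):
    """Hypothetical helper: map a sensed temperature (0-40 C) to a
    smart-light hue value, blue-ish (46920) when cold, red (0) when hot."""
    clamped = max(0, min(40, celsius))
    return int(46920 * (40 - clamped) / 40)

class HueForTemperatureTest(unittest.TestCase):
    # TDD-style: these assertions were fixed before the function body existed.
    def test_cold_is_blue(self):
        self.assertEqual(hue_for_temperature(0), 46920)

    def test_hot_is_red(self):
        self.assertEqual(hue_for_temperature(40), 0)

    def test_out_of_range_is_clamped(self):
        self.assertEqual(hue_for_temperature(-10), hue_for_temperature(0))
        self.assertEqual(hue_for_temperature(95), hue_for_temperature(40))

# Run the suite programmatically (what the IDE's test runner does for JUnit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(HueForTemperatureTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Writing the clamping test first is what forces the `max`/`min` guard into the implementation, which is precisely the lesson the module is after.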
2.3 Segment Three: Course Project
Given all this preparation, and directly following the segment on good software engineering practices, the students are asked to conceive of, design, and execute a project of their own making in teams of two or three. At the outset of the course, we preview this project to get the students thinking about what their projects might end up being. The project should make use of the technology blocks covered in the class; students are encouraged to use as many as possible.
Brainstorming. As the first step, students are encouraged to identify a real problem whose solution would make a difference to them or someone they know. The prompt for this phase of the project is:
Work with your partner to come up with some ideas. What kinds of things would you like to be able to do with sensors, smart things, etc.? If you have an idea that might use some devices that we don’t have or haven’t used yet, ask. We might have them, or we might be able to find a work-around. Come up with a few ideas. Start to draw some diagrams about the components the projects would have, and how they would fit together. Will you use Bluetooth connectivity, or would interaction through the web work? Make lists of the things that you KNOW how to do already and the unknowns.
While the students are strongly encouraged to come up with their own project ideas, we provide a small set of examples (e.g., a remote grilling meat thermometer that gradually changes the color of an interior light as the meat is more “done”; an ingress/egress sensor that counts the number of people that enter/exit a room, controlling the lights based on assumed occupancy; and a light control system that automatically adjusts a smart light based on the amount of detected ambient light). In the first iteration of the course, student projects included:
- An Android application, coupled with an LED connected to a Raspberry Pi that converted text entered in the Android application into Morse code pulses on the LED.
- A smart light application (controlled via Android) where the hue of the light reflects the sensed temperature; and another project where the hue of the light changes in response to detected motion.
- An Android application that pulled data from a weather website and adjusted the hue of a smart light based on keywords like “sunny”, “cloudy”, or “rainy”.
4 https://google.github.io/styleguide/javaguide.html
5 https://www.python.org/dev/peps/pep-0008/
- An Android application that used a speech recognition library to allow the user to set the smart light hue by simply saying the desired color.
- An extension of our Bluetooth socket program to use the Android device as a game controller for a Simon video game that runs on the Raspberry Pi.
Design. With their problem definition in hand, the students are asked to create a design for their project. They sketch a component diagram containing the major project components and indicating how these components are connected. The design also requires determining which technologies to employ and how, based on the specific project and its requirements (e.g., should a Raspberry Pi connect directly to an Android device via Bluetooth as in the communication tutorial, or should it use a RESTful web programming API as in the web programming tutorial?). As part of the design, the students also provide at least one “user story” [17] that details how a user will interact with the completed system and at least one behavior diagram (e.g., an activity diagram or a sequence diagram). As part of this step, we also encourage the students to take a test-driven development mindset and begin conceiving of, documenting, and implementing the test cases for the project.
Implementation. The next (and most fun!) step is for the students to take this design and build the system. The team is required to use github for source control and collaboration. This forces the students to think about how to structure the source code to make it easy to maintain. While they build and code their system, students must also appropriately document what they do to ensure that they avoid entering the same pitfalls more than once. Building on the testing design in the previous phase, the students also write a testing plan, build a test suite, and ensure that their project passes the tests.
Finalizing. Finally, with the project completed, the students write their own tutorial, in the format of the tutorials used throughout the course. The goal of this tutorial is, on one hand, to document what the students did for their project. On the other hand, it should be written in such a way that everything entailed in their project can be replicated, another essential element of good software engineering practice.
3. Assessments
From the perspective of assessing student performance in the course, the tutorial-based modules are graded primarily based on completion. The mini-project (the LED control), the design tutorial submission, and the final project make up each student’s course grade. More important, however, are the assessments done within the class to evaluate how effectively the approach instills software engineering principles and practices. In this section, we describe the assessments currently in place; in Section 5, we discuss our plans for future assessments.
3.1 Pre-Course Survey
Before the course begins, students complete a survey to gather details about their demographics (e.g., gender, age, etc.), preparation (e.g., type of high school, classes taken, programming experience, etc.), other factors relevant to computing as a career trajectory (e.g., interest in the course material, encouragement by others), and their preconceptions about computer science and programming (e.g., their perspective on their own abilities, their understanding of what a software engineer does, etc.). The survey is based heavily upon the Engineering and Computer Science STEM assessment tools made available by the Assessing Women and Men in Engineering Project (AWE) [1]. This survey is used in preparing the course materials to be pitched to the ability level of the class participants but also as a baseline to compare later surveys against.
#### 3.2 Module Surveys
Upon completing each module of the course, the students are asked to complete a survey specific to that module. Each survey begins with a question to gauge the student’s prior familiarity with the topic. Then each survey asks, in some form, how successful the student was at completing the assigned task(s) and how well he or she believes he or she has mastered the material. Each survey then asks (1) what was the single most difficult thing about the module for the student; (2) what was the student’s single favorite aspect of the module; and (3) what was the student’s least favorite aspect of the module.
These frequent mid-course surveys achieve multiple goals. In the simplest sense, they serve as checkpoints for completion grades for the course. However, they also serve as an important mechanism for continuous improvement of the course. Finally, and perhaps essentially, they force the students to reflect about their performance and interest in the module in question. This ties directly into the course’s goal of ensuring that students learn from their challenges and failures.
#### 3.3 Post-Course Survey
For the first offering of the course, the students did a post-course survey using a generic course feedback form, which asks questions about the students’ satisfaction with the course (and instructor), the amount that they learned, how interested they were in the material, and whether they would recommend the course to a friend. The students are also asked for generic free-form feedback on the course. As described in Section 5, this survey will be replaced with a more in-depth assessment that matches the pre-course survey.
### 4. Insights from a First Offering
The course requires a certain type of student: one with a good deal of initiative and curiosity. Further, students should expect the answers to be non-obvious and, in fact, a bit messy. In the first offering of the course, a subset of the students expected a lecture-based course instead of this challenging and entirely hands-on course. In the future, more clearly setting expectations early may help alleviate some of these challenges. However, the students must be ready to fail and to channel that failure in a positive way; not all high school students are prepared for that.
---
[1] https://www.engr.psu.edu/awe/
Students moved through the course material at drastically different rates. Not all students completed the entire course, though almost all students (88% of the students in Summer 2017) completed the course project to its full specification. Because of the different rates of progress, the mechanism of self-paced tutorials was, in hindsight, essential. However, check-pointing these self-paced modules with more synchronous lectures could pull the students in the course together as a cohort better.
Finally, by the final week of the course, the students were acting as teaching assistants and tutors for one another. Some of this was instructor driven (e.g., having just shown one team how to solve a problem that another team has encountered, we would ask the first team to assist the second) and some was instead student driven (e.g., students would overhear similar problems and ask/offer help directly). In this way, the student growth in the course was significant and noticeable.
### 5. Looking Forward
The first offering of the course was, by many measures, very successful. The students grew tremendously not only in their capabilities related to programming and software engineering, but, more importantly, in their engagement and excitement about the field. Even so, as is always true, experience and reflection suggest several potential improvements.
During this first offering, the class met for three weeks, for two hours per week. We did not assign homework for the students to complete outside of class. Therefore, there was simply too much to learn and do during the course time. We are restructuring the course so that portions of the self-paced modules (that do not require access to the hardware in the lab) can be done outside of class, in a more truly “flipped” nature.
The relatively generic post-course survey is insufficiently matched against the tailored pre-course survey to draw end-to-end conclusions about the progression of students through this (short) course. In the future, the course will use a tailored post-course survey, also based on the post-program surveys from the AWE project, to see whether the needle has moved on any of the survey items and to get at the real goals of the course. (Has the course changed students’ perceptions of computer science and software engineering? Do students better understand design and the importance of design? Etc.)
Finally, we will also explore additional assessment techniques. For instance, to see if the course is successfully engaging students by making the material relatable, we will add Application Card [4] activities to some of the end-of-module surveys that ask the students to relate what they have just accomplished to some aspect of their everyday lives.
### References
Interactive Applications (CANDA) Development with SAS for Windows
Frank S. Palmieri, Merck Research Laboratories
Rob Rosen, Merck Research Laboratories
Introduction
SAS for Windows (6.08 and 6.10) can be used to develop sophisticated interactive applications on a PC. However, there are many details that must be addressed when constructing a successful application in this environment. These issues span hardware, operating systems, and software.
In developing a Computer Assisted New Drug Application (CANDA) for the pharmaceutical industry, our team has made extensive use of SAS for Windows. Many of its features are new to the SAS System and are specific to the Microsoft (MS) Windows environment. The focus of this paper is on the advantages and drawbacks of development with SAS for Windows.
Hardware topics include selection of processor, hard drive, memory, and BUS as related to application response time. Operating System (OS) issues include MS-DOS and MS Windows with regard to application performance and reliability. Software elements cover Screen Control Language (SCL), the FRAME entry, and object oriented programming (OOP). Other aspects include Microsoft "look and feel", dynamic data exchange (DDE), and SAS for Windows limitations as a client/server application.
Background
A CANDA is an interactive document and/or data review tool which can include robust functions for accessing and analyzing drug related information. Pharmaceutical companies provide these applications to regulatory agencies (i.e., the Food and Drug Administration) to expedite approval of their newest drugs.
SAS for Windows provides the ability to create an interactive Graphic User Interface (GUI) for an application. It also supplies powerful features for accessing and analyzing data. This makes SAS for Windows a well-suited tool for developing interactive applications like a CANDA.
Hardware Issues
Microprocessor selection is an important choice when developing resource intensive applications. When considering 486 Central Processing Units (CPU), a faster clock speed will improve response time. However, a 66 MHz 486 DX2 chip will not necessarily halve the program execution time of a 33 MHz 486 DX chip.
A Pentium (586) processor may or may not be faster than a 486. Pentiums are 64-bit based and can process more information with each CPU cycle than a 32-bit 486. However, software must be specifically written to utilize 64-bit technology. At present there are no 64-bit PC operating systems. Therefore, a PC cannot exploit the full potential of a Pentium processor.
A platform specific issue that can substantially increase system performance is the selection of internal BUS architecture. The four choices are ISA, EISA, VL BUS, and PCI and are 8, 16, 32, and 64-bit architectures, respectively. PCI is presently the fastest of these standards. Although applications and operating systems may not be written to use a 64-bit environment, the internal architecture will affect the overall response time.
Memory is an extremely important component in the design of a PC platform. When an application runs out of memory, it will use the hard drive as "virtual memory" and begin paging/swapping. Information exchange is much slower between CPU and hard drive than between CPU and memory. The amount of memory to use is application specific. (Note: SAS for Windows functions best with a minimum of 16 MB.) For example, a dataset that is 15 MB can completely sort in memory if you have more than 15 MB of memory in your machine. (Note: The SORTSIZE option in the CONFIG.SAS file must be set to 15MB or larger.)
Selection of a hard drive can increase system response time. The important performance characteristics of hard drives are seek and scan rates, throughput, and burst rate. Lower seek and scan rates and higher throughput and burst rates are better. The two most common types of hard drives are IDE and SCSI. Seek and scan rates on these drives are similar, however, SCSI drives have higher throughput and burst rates.
SCSI drives are typically easier to setup and have larger capacities than IDE (up to 4 GB or more), but are more expensive and require a SCSI controller. Some SCSI controllers allow for hardware cache which can greatly increase performance in an I/O intensive application.
Another feature of hard drives is 32-bit disk access (as supported by MS Windows for Workgroups). This will improve I/O between disk and CPU. Drives must be Western Digital Standard compliant to use the 32-bit disk access driver (WDCTRL). There are currently no SCSI hard drives that allow 32-bit disk access.
OS Issues
Choosing an operating system/GUI interface is an important decision as well. MS-DOS with Windows is currently the most predominant PC configuration in the user community. This does not necessarily mean it is the best configuration.
MS-DOS with Windows is known for yielding General Protection Faults (GPFs). Applications running under MS-DOS use an unprotected memory area. If a program was written improperly (i.e., contains a hardcoded memory address), it may occasionally access the same memory area as another program. This will result in the application freezing up, a GPF, program termination, or any combination of the above. GPFs are intercepted and handled better, but will still occur in the newer versions of MS-DOS and Windows. GPFs are not nearly as common when using an operating system like OS/2 or Windows NT. Upgrading to Windows 3.11 or Windows 3.11 for Workgroups (WFW) results in improved memory management and provides 32-bit disk and file access (which improves I/O between disk cache and CPU).
Both Windows 4.0 ("Windows 95" -- not in production at the time of this publication) and Windows NT replace Windows and MS-DOS. They are self-contained operating systems with GUIs, offer a protected memory area similar to OS/2, have easier memory management, and no RAM limit (such as the 64 MB memory limit in MS-DOS). Only software specifically written for Windows 95 and Windows NT will use protected memory, and so will be less prone to GPFs. The Windows 95 GUI will bear little resemblance to the current Windows look and feel. This version of Windows will be similar to the Apple Macintosh front end with folders instead of groups and icons.
Applications written for a 32-bit environment can run on Windows or WFW (both 16-bit), but must have a 32-bit emulator (WIN32S) installed. Applications will run in a 16-bit mode. Windows NT, a 32-bit environment, will run these applications in a true 32-bit mode.
SAS for Windows
In the MS Windows environment, careful consideration must be given to the "look and feel" of your application. "Look and feel" is the way objects (i.e., pushbuttons, listboxes, etc.) appear and function on the screen. Most Windows products follow MS standards for "look and feel". Packages that do not adhere to these standards can be clumsy for an experienced Windows user.
SAS for Windows 6.08 is not totally compliant to MS standards. For example: not all FRAME objects are three dimensional, mouse/keystroke combinations do not function properly within FRAME objects, screen colors cannot be changed using "Control Panel", printing bypasses "Print Manager", and help does not use the Help Engine. Most of these items have been addressed in SAS for Windows 6.10. However, applications developed using FRAME still use non-standard objects (i.e., control arrows and icons) and mouse/keystroke combinations do not function to MS standards.
When using SAS for Windows, interaction with other software (i.e., MS Excel) can be important. Dynamic Data Exchange (DDE) is one form of communication between packages. It is a method of sending and receiving data/commands. "Dynamic" implies an "active" DDE link is set up between two applications. When information is changed in one application window, updates are automatically reflected in the other application window.
There are two types of DDE applications. A DDE "client" can read and send data to a DDE "server". However, it cannot receive commands. In other words, it can use another application, but the other application cannot use it. A DDE "server" can read and send data as well as receive commands. It can use another application, and the other application can use it.
SAS for Windows 6.08 is strictly a DDE client application. Other Windows applications can be invoked from the SAS System, the contents of SAS datasets can be passed, and commands can be sent to perform operations on this data in the other product. However, there is no way to notify the SAS System to read the data back, or to perform any "dynamic" changes to this SAS dataset based on what has occurred in the other application.
With SAS for Windows 6.10, there is limited DDE server capabilities. This is implemented through the Microsoft Windows Open Database Connectivity (ODBC) standard. The SAS System supplies a "database" server engine which allows other Windows products to access and operate upon SAS datasets. However, specific SAS commands still cannot be invoked from other Windows applications. Therefore, you cannot easily take advantage of SAS System features from other Windows packages.
Design
SAS for Windows provides the ability to create a GUI for your customized applications through an extension to SAS/AF called FRAME technology. FRAME allows for the creation of screens with Windows objects (i.e., pushbuttons, listboxes, checkboxes, radiobuttons, etc.). Through popup menus and "drag and drop" mouse movements, one can create a screen (frame) by choosing and positioning objects. Once an object has been placed, it may easily be relocated by clicking, dragging, and dropping with the mouse.
Also, with the advent of FRAME technology, Object Oriented Design (OOD) is now possible with SAS for Windows. OOD is a methodology for building applications where code revolves around "objects". Objects can be screen items like pushbuttons, listboxes, or entire screens themselves. Objects can also be data structures like an array, a linked list, a database table, or even an entire database.
The underlying principle of OOD is any code that manipulates an object is bundled with the object "class". An object class is a generalization for a type of object. For instance, a pushbutton is an object class. An actual object (object "instance") is an OK button or a Cancel button. Separate code modules for an object class are called "methods", and all methods reside together. Code that operates on an object cannot be written in external modules. Furthermore, anyone writing external modules cannot access the code behind the object classes' methods. They may only call these methods to manipulate an object. (Note: This is called encapsulation.)
FRAME technology provides an environment for developing and encapsulating methods with screen objects. However, it does not provide this facility for data structures. Therefore, FRAME is not a complete object oriented development tool.
Consequently, OOD with SAS for Windows is possible when designing the interface portion of an application. This may be achieved by first "mocking up" your application screens based on the object classes which are available with FRAME. Next, find common objects across screens. These will become application specific object classes. Also, examine functionality that may be required to manipulate common screen objects. These will become methods that can be tied to each object class.
FRAME comes with a screen object class resource list. This is the list of all object classes that are available when creating a screen. New object classes (subclasses) may be created from these pre-existing classes. For instance, if your application uses an OK button on almost every screen, you may create an OK button object subclass from the pushbutton object class, and place it in the resource list. The OK button is then immediately available for use on every new screen.
Each object class in the resource list has a number of methods which manipulate it. Methods may be added to an object class by referencing new code in the object classes' method list. For example, if your application needs to save screen settings whenever you press the OK button, you may design and add a "save screen settings" method to the OK button class.
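The subclass-plus-method pattern described above can be sketched in ordinary class-based code. This is shown in Python rather than SCL, and all class and method names here are invented for illustration:

```python
class PushButton:
    """Generic object class, analogous to FRAME's pushbutton class."""
    def __init__(self, label):
        self.label = label

    def press(self):
        return self.label + " pressed"


class OKButton(PushButton):
    """Subclass kept in the 'resource list', reusable on every screen."""
    def __init__(self):
        super().__init__("OK")
        self.saved = None

    def save_screen_settings(self, settings):
        # The added method travels with the class (encapsulation):
        # callers invoke it but never touch its internals directly.
        self.saved = dict(settings)
        return self.saved


ok = OKButton()
```

Once the subclass exists, every screen reuses it unchanged, which is the payoff of placing it in the resource list.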
Prototyping
Another advantage of FRAME technology is the ability to perform Rapid Application Development (RAD). RAD is a methodology for quickly developing systems with dynamic user interaction. Effective implementation of RAD is dependent on quick changes to your interface and functionality.
FRAME's ability to create screen objects, relocate them, and change their attributes through drag and drop, point and click mouse manipulation makes RAD practical during screen development.
Screen navigation functions may be implemented with little if any code. However, quick changes to back-end processing are not as simple. Therefore, RAD may be applied more effectively during prototyping where the interface and system requirements are being defined. Full scale functionality may be built in later during development.
Development
SAS for Windows presents many options to be considered for application development. An initial decision must be made between using an OOP or procedural approach. Other issues include choice of screen objects, data structures, and coding languages. All of these decisions can affect the performance and the maintainability of your application.
An OOP approach to development requires more "up front" design time than the standard procedural method. An application must be viewed in its entirety to determine which objects, both screen and data, are being used throughout the system. Methods must then be designed to manipulate these objects. Once this has been completed, development can begin.
The advantage of the OOP approach is the development of neatly packaged, reusable object classes. The disadvantage is the investment of time before actual coding begins. In the procedural approach, many lines of code may be written in the time that it takes to plan your object classes and associated methods. This is clearly a decision which must be made based on the life expectancy and reusability of your application.
An SCL list resides in memory and may be created, populated, and deleted during execution. It dynamically utilizes as much memory as it needs at run time and relinquishes it when no longer necessary. (Note: This is an advantage over an array which allocates a predefined amount of memory.) An SCL list may be stored as SLIST catalog entries. Also, an SCL list can be used to populate a listbox through the object's attribute screen.
An SCL list that contains one column of data is easy to manipulate. However, SCL lists can be composed of sublists that contain multiple "columns" of information. Manipulating this type of structure is not as intuitive and requires careful design to avoid confusion. Managing multi-column data is more straightforward in a SAS dataset structure. The contents of an SCL list must be examined and manipulated through code whereas a SAS dataset may be browsed and edited through the FSVIEW procedure. Finally, searching through a large SCL list can take considerable time. This is because, by the nature of its structure, an SCL list must be read sequentially. A large indexed dataset is faster to search through than an SCL list of equal size.
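The search-cost contrast can be illustrated generically. Python stands in for SCL here; the point is sequential scan versus keyed lookup, not the exact data structures:

```python
# 10,000 (key, value) pairs standing in for a large SCL list.
records = [(k, "row%d" % k) for k in range(10_000)]

def sequential_find(key):
    # SCL-list style: examine elements one by one until the key matches.
    for k, v in records:
        if k == key:
            return v
    return None

# Indexed-dataset style: build the index once, then look up by key directly.
index = dict(records)

assert sequential_find(9_999) == index[9_999] == "row9999"
```

The sequential scan touches every element in the worst case, while the indexed lookup goes straight to the matching row.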
Although methods or programs associated with FRAMES are written in SCL, Base SAS language and procedures may be used through an SCL submit block. This feature is important in terms of performance as dataset processing is generally quicker when using Base SAS instead of SCL. An explanation for this is the extra time required to transfer data back and forth between the Data Set Data Vector (DDV) and the SCL Data Vector (SDV).
Dataset processing may be performed using either SAS data steps or the SQL procedure. In most cases the use of these "languages" is interchangeable. There are instances, however, where only one of these languages will provide the necessary functionality. A SAS data step, for example, can be used to create multiple output datasets in one pass while SQL cannot. In turn, SQL can be used to perform a many-to-many merge, or join, between datasets to form a cartesian product while a SAS data step merge cannot.
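The many-to-many join behavior can be demonstrated with any SQL engine; this sketch uses Python's sqlite3 rather than PROC SQL, but the matching-row arithmetic is the same:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE a (id INTEGER, x TEXT)")
cur.execute("CREATE TABLE b (id INTEGER, y TEXT)")
# Two rows per key on each side: a many-to-many relationship.
cur.executemany("INSERT INTO a VALUES (?, ?)", [(1, "a1"), (1, "a2")])
cur.executemany("INSERT INTO b VALUES (?, ?)", [(1, "b1"), (1, "b2")])
rows = cur.execute("SELECT a.x, b.y FROM a JOIN b ON a.id = b.id").fetchall()
# The join pairs every matching row with every other: 2 x 2 = 4 rows,
# the cartesian product of the matching groups.
assert len(rows) == 4
```

A one-to-one merge would have produced two rows; the join multiplies matching groups instead.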
Another difference between these languages is that SAS data steps are generally faster than SQL. One of the main reasons for this is that SQL attempts to sort input datasets before merging them, even if they are presorted. Specifying SORTEDBY variables for input datasets will remedy this problem. SQL code should then execute as fast as the equivalent SAS data step.
Summary
SAS for Windows provides the tools for creating sophisticated and robust interactive applications (like a CANDA) on the PC platform. However, the choice of hardware and operating systems is critical to the performance of any software that is built upon them. Also, to effectively employ SAS for Windows as a development tool, one must learn to utilize native Windows features and new programming techniques unique to this environment.
Selection of appropriate hardware components will improve application response time. When selecting a microprocessor, note that there is not always a linear relationship between clock speed and program execution time. Also, a Pentium processor may or may not be faster than a 486. Less paging/swapping will occur with more memory. The amount of memory to use is application specific. PCI is presently the fastest internal BUS architecture. With regard to hard drives, lower seek and scan rates and higher throughput and burst rates are better.
When considering Operating Systems, MS-DOS with Windows is currently the most predominant PC configuration in the user community. Applications running under MS-DOS use an unprotected memory area and can yield GPFs. Windows 95 and Windows NT, which replace Windows and MS-DOS, will use protected memory. Windows 3.11 and newer versions provide 32-bit disk and file access which improves performance. 32-bit applications will run in a 16-bit mode under Windows and WFW (by installing WIN32S) and will run in 32-bit mode under NT.
SAS for Windows can be utilized to create a graphical user-friendly front-end, to aid in quick prototyping, and to support OOD. Although version 6.10 generally follows Microsoft standards for "look and feel", FRAME AF applications developed with this product do not completely adhere to these standards. Also, version 6.10 is a DDE client application with limited DDE server capabilities. SAS for Windows provides the ability to create a GUI for FRAME applications. Furthermore, it facilitates object oriented design and development of the application interface. However, it does not promote this approach where design and development of background processing is concerned. SAS for Windows also facilitates RAD of an application's front end. However, it does not support the RAD methodology on the back end. Therefore, RAD with SAS for Windows should not be applied when creating a fully functional prototype.
FRAME technology coupled with SCL, Base SAS, and SQL presents many options during implementation. FRAME provides numerous predefined screen objects. However, when two types of objects perform the same function (i.e., listboxes and extended tables) it may be more advantageous to use one object versus the other. SCL supports linked list data structures which can be very useful in conjunction with FRAME technology. However, linked lists should not exclusively be used because they are easier to manipulate. Although methods or programs associated with FRAMES are written in SCL, Base SAS language and procedures should be utilized for dataset processing to enhance the performance of your application. Depending on the situation, datasets should be processed using a mixture of SAS data steps and SQL.
Acknowledgments
Special thanks to the following people for their input and support to this CANDA project:
Wanda Bidlack
Barry Cohen
Andrea Contino
Merck Research Laboratories
Planning Data Systems
Contino Associates, Inc.
Trademarks
Microsoft, Windows, Windows for Workgroups, Windows 95, Windows 4.0, Windows NT, WIN32S, MS-DOS, Word, and Excel are registered trademarks of Microsoft Corporation, Inc.
SAS, SAS for Windows, SAS/AF, FRAME Technology, SCL, Base SAS, and FSVIEW are registered trademarks of SAS Institute, Inc.
Pentium is a registered trademark of Intel Corporation, Inc.
Conclusion: resolution is a complete and effective deduction mechanism using:
Horn clauses (related to "Definite programs"),
Linear, Input strategy
Breadth-first exploration of the tree (or an equivalent approach)
(possibly ordered clauses, but not required – see Selection rule later)
• Very close to what is generally referred to as SLD-resolution (see later)
• This allows realizing, to some extent, Green’s dream (within the theoretical limits of the formal method), and efficiently!
Towards Logic Programming (Contd.)
- Given these results, why not use logic as a general purpose *programming language*? [Kowalski 74]
- A “logic program” would have two interpretations:
- *Declarative* (“LOGIC”): the logical reading (facts, statements, knowledge)
- *Procedural* (“CONTROL”): what resolution does with the program
- ALGORITHM = LOGIC + CONTROL
- Specify these components separately
- Often, worrying about control is not needed at all (thanks to resolution)
- Control can be effectively provided through the ordering of the literals in the clauses
Towards Logic Programming: Another (more compact) Clausal Form
- All formulas are transformed into a set of *Clauses*.
- A clause has the form: $conc_1, ..., conc_m \leftarrow cond_1, ..., cond_n$
where $conc_1, ..., conc_m$ and $cond_1, ..., cond_n$ are literals, and are the *conclusions* and *conditions* of a rule:
$conc_1, ..., conc_m \leftarrow cond_1, ..., cond_n$
“conclusions” “conditions”
- All variables are implicitly universally quantified: (if $X_1, ..., X_k$ are the variables)
$\forall X_1, ..., X_k \ conc_1 \lor \ldots \lor conc_m \leftarrow cond_1 \land \ldots \land cond_n$
- More compact than the traditional clausal form:
- no connectives, just commas
- no need to repeat negations: all negated atoms on one side, non-negated ones on the other
- A *Horn Clause* then has the form: $conc_1 \leftarrow cond_1, ..., cond_n$
where $n$ can be zero and possibly $conc_1$ empty.
Some Logic Programming Terminology – “Syntax” of Logic Programs
- **Definite Program**: a set of positive Horn clauses $head \leftarrow goal_1, ..., goal_n$
- The single conclusion is called the head.
- The conditions are called “goals” or “procedure calls”.
- $goal_1, ..., goal_n \ (n \geq 0)$ is called the “body”.
- if $n = 0$ the clause is called a “fact” (and the arrow is normally deleted)
- Otherwise it is called a “rule”
- **Query** (question): a negative Horn clause (a “headless” clause)
- A procedure is a set of rules and facts in which the heads have the same predicate symbol and arity.
- Terms in a goal are also called “arguments”.
Some Logic Programming Terminology (Contd.)
- Examples:
grandfather(X,Y) ← father(X,Z), mother(Z,Y).   (a rule)
grandfather(X,Y) ←.                            (a fact, with explicit arrow)
grandfather(X,Y).                              (the same fact, arrow deleted)
← grandfather(X,Y).                            (a query)
LOGIC: Declarative “Reading” (Informal Semantics)
- A rule (has head and body)
\[ \text{head} \leftarrow \text{goal}_1, \ldots, \text{goal}_n. \]
which contains variables \( X_1, \ldots, X_k \) can be read as
for all \( X_1, \ldots, X_k \):
“head” is true if “goal_1” and ... and “goal_n” are true
- A fact n=0 (has only head)
\[ \text{head}. \]
for all \( X_1, \ldots, X_k \): “head” is true (always)
- A query (the headless clause)
\[ \leftarrow \text{goal}_1, \ldots, \text{goal}_n \]
can be read as:
for which \( X_1, \ldots, X_k \) are “goal_1” and ... and “goal_n” true?
LOGIC: Declarative Semantics – Herbrand Base and Universe
- Given a first-order language \( L \), with a non-empty set of variables, constants, function symbols, relation symbols, connectives, quantifiers, etc. and given a syntactic object \( A \),
\[ \text{ground}(A) = \{ A\theta \mid \theta \in \text{Subst},\ \text{vars}(A\theta) = \emptyset \} \]
i.e. the set of all “ground instances” of \( A \).
- Given \( L, U_L \) (Herbrand universe) is the set of all ground terms of \( L \).
- \( B_L \) (Herbrand Base) is the set of all ground atoms of \( L \).
- Similarly, for the language \( L_P \) associated with a given program \( P \) we define \( U_P \), and \( B_P \).
- Example:
\[ P = \{ \quad p(f(X)) \leftarrow p(X). \quad p(a). \quad q(a). \quad q(b). \quad \} \]
\[ U_P = \{ a, b, f(a), f(b), f(f(a)), f(f(b)), \ldots \} \]
\[ B_P = \{ p(a), p(b), q(a), q(b), p(f(a)), p(f(b)), q(f(a)), \ldots \} \]
Herbrand Interpretations and Models
- A **Herbrand Interpretation** is a subset of $B_L$, i.e. the set of all Herbrand interpretations $I_L = \mathcal{P}(B_L)$.
(Note that $I_L$ forms a complete lattice under $\subseteq$ – important for fixpoint operations to be introduced later).
- Example: $P = \{ p(f(X)) \leftarrow p(X), \ p(a), \ q(a), \ q(b) \}$
$U_P = \{ a, b, f(a), f(b), f(f(a)), f(f(b)), \ldots \}$
$B_P = \{ p(a), p(b), q(a), q(b), p(f(a)), p(f(b)), q(f(a)), \ldots \}$
$I_P = \text{all subsets of } B_P$
- A **Herbrand Model** is a Herbrand interpretation which contains all logical consequences of the program.
- The **Minimal Herbrand Model** $H_P$ is the smallest Herbrand interpretation which contains all logical consequences of the program. (It is unique.)
- Example:
$H_P = \{ q(a), q(b), p(a), p(f(a)), p(f(f(a))), \ldots \}$
---
Declarative Semantics, Completeness, Correctness
- **Declarative semantics of a logic program $P$**: the set of ground facts which are logical consequences of the program (i.e., $H_P$).
(Also called the “least model” semantics of $P$).
- **Intended meaning of a logic program $P$**: the set $M$ of ground facts that the user expects to be logical consequences of the program.
- A logic program is **correct** if $H_P \subseteq M$.
- A logic program is **complete** if $M \subseteq H_P$.
- Example:
father(john,peter).
father(john,mary).
mother(mary,mike).
grandfather(X,Y) ← father(X,Z), father(Z,Y).
with the usual intended meaning is **correct** but **incomplete**.
We now turn to the operational semantics of logic programs, given by a concrete operational procedure: Linear (Input) Resolution.
- Complementary literals:
- in two different clauses
- on different sides of $\leftarrow$
- unifiable with unifier $\theta$
$\text{father}(\text{john}, \text{mary}) \leftarrow$
$\text{grandfather}(X, Y) \leftarrow \text{father}(X, Z), \text{mother}(Z, Y)$
$\theta = \{ X/\text{john}, Z/\text{mary} \}$
- Resolution step (linear, input, ...):
- given a clause and a resolvent, we can build a new resolvent which follows from them by:
- renaming apart the clause (the "standardizing apart" step)
- putting all the conclusions to the left of the $\leftarrow$
- putting all the conditions to the right of the $\leftarrow$
- if there are complementary literals (unifying literals at different sides of the arrow in the two clauses), eliminating them and applying $\theta$ to the new resolvent
- LD-Resolution: linear (and input) resolution, applied to definite programs
Note that then all resolvents are negative Horn clauses (like the query).
Example
- from
father(john,peter) ←
mother(mary,david) ←
we can infer
father(john,peter), mother(mary,david) ←
- from
father(john,mary) ←
grandfather(X,Y) ← father(X,Z), mother(Z,Y)
we can infer
grandfather(john,Y') ← mother(mary,Y')
CONTROL: A proof using LD-Resolution
- Prove "grandfather(john,david) ←" using the set of axioms:
1. father(john,peter) ←
2. father(john,mary) ←
3. father(peter,mike) ←
4. mother(mary,david) ←
5. grandfather(L,M) ← father (L,N), father(N,M)
6. grandfather(X,Y) ← father (X,Z), mother(Z,Y)
- We introduce the predicate to prove (negated!)
7. ← grandfather(john,david)
- We start resolution: e.g. 6 and 7
8. ← father(john,Z₁), mother(Z₁,david)   with X₁/john, Y₁/david
- using 2 and 8
9. ← mother(mary,david)   with Z₁/mary
- using 4 and 9
10. ←   (the empty clause: the proof is complete)
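The proof above can be replayed mechanically. The following is a minimal, hypothetical sketch (the representation and names are mine, not from the slides) of SLD-resolution with a left-to-right computation rule and a depth-first, textual-order search rule; variables are uppercase strings and the occurs check is omitted for brevity:

```python
# A tiny SLD-resolver (sketch): left-to-right computation rule,
# textual-order search rule, depth-first search.

def is_var(t):
    return t[0].isupper()

def walk(t, s):
    # follow variable bindings in substitution s
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    return None  # two distinct constants: no unifier

def unify_atoms(g, h, s):
    # atoms are tuples: (predicate, arg1, ..., argn)
    if g[0] != h[0] or len(g) != len(h):
        return None
    for x, y in zip(g[1:], h[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def rename(clause, n):
    # "standardizing apart": give the clause's variables fresh names
    head, body = clause
    r = lambda a: a[:1] + tuple(t + str(n) if is_var(t) else t for t in a[1:])
    return r(head), [r(a) for a in body]

def solve(goals, program, s, step=0):
    if not goals:
        yield s            # empty resolvent: the query is proved
        return
    first, rest = goals[0], goals[1:]
    for clause in program:
        head, body = rename(clause, step)
        s2 = unify_atoms(first, head, dict(s))
        if s2 is not None:
            yield from solve(body + rest, program, s2, step + 1)

program = [
    (("father", "john", "peter"), []),
    (("father", "john", "mary"), []),
    (("father", "peter", "mike"), []),
    (("mother", "mary", "david"), []),
    (("grandfather", "L", "M"), [("father", "L", "N"), ("father", "N", "M")]),
    (("grandfather", "X", "Y"), [("father", "X", "Z"), ("mother", "Z", "Y")]),
]

# the query  <- grandfather(john, david)
ok = any(True for _ in solve([("grandfather", "john", "david")], program, {}))
print(ok)  # True
```

Note how backtracking over the first grandfather clause (which fails) and succeeding with the second mirrors the role of the search rule discussed next.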
CONTROL: Rules and SLD-Resolution
- Two control-related issues are still left open in LD-resolution. Given a current resolvent $R$ and a set of clauses $K$:
- given a clause $C$ in $K$, several of the literals in $R$ may unify with a complementary literal in $C$
- given a literal $L$ in $R$, it may unify with complementary literals in several clauses in $K$
- A *Computation* (or *Selection* rule) is a function which, given a resolvent (and possibly the proof tree up to that point) returns (selects) a literal from it. This is the goal that will be used next in the resolution process.
- A *Search rule* is a function which, given a literal and a set of clauses (and possibly the proof tree up to that point), returns a clause from the set. This is the clause that will be used next in the resolution process.
CONTROL: Rules and SLD-Resolution (Contd.)
- SLD-resolution: Linear resolution for Definite programs with Selection rule.
- An SLD-resolution *method* is given by the combination of a *computation (or selection) rule* and a *search rule*.
- Independence of the computation rule: Completeness does not depend on the choice of the computation rule.
- Example: a “left-to-right” rule (as in ordered resolution) does not impair completeness – this coincides with the completeness result for ordered resolution.
- Fundamental result:
- “Declarative” semantics ($H_P$) ≡ “operational” semantics (SLD-resolution)
- I.e., all the facts in $H_P$ can be deduced using SLD-resolution.
CONTROL: Procedural reading of a logic program
- Given a rule
\[
\text{head} \leftarrow \text{goal}_1, \ldots, \text{goal}_n.
\]
it can be seen as a description of the goals the solver (resolution method) has to execute in order to solve “head”
- Possible, given computation and search rules.
- In general, “In order to solve ‘head’, solve ‘goal\textsubscript{1}’ and ... and solve ‘goal\textsubscript{n}’.”
- If ordered resolution is used (left-to-right computation rule), then read “In order to solve ‘head’, first solve ‘goal\textsubscript{1}’ and then ‘goal\textsubscript{2}’ and then ... and finally solve ‘goal\textsubscript{n}’.”
- Thus the “control” part corresponding to the computation rule is often associated with the order of the goals in the body of a clause
- Another part (corresponding to the search rule) is often associated with the order of clauses
CONTROL: Procedural reading of a logic program (Contd.)
- Example – read “procedurally”:
father(john,peter).
father(john,mary).
father(peter,mike).
father(X,Y) ← mother(Z,Y), married(X,Z).
Towards a Fixpoint Semantics for LP – Fixpoint Basics
- A fixpoint for an operator \( T : X \rightarrow X \) is an element \( x \in X \) such that \( x = T(x) \).
- If \( X \) is a poset, \( T \) is monotonic if \( \forall x, y \in X, \, x \leq y \Rightarrow T(x) \leq T(y) \)
- If \( X \) is a complete lattice and \( T \) is monotonic the set of fixpoints of \( T \) is also a complete lattice [Tarski]
- The least element of the lattice is the least fixpoint of \( T \), denoted \( \text{lfp}(T) \)
- Powers of a monotonic operator (successive applications):
\[
\begin{aligned}
T \uparrow 0(x) &= x \\
T \uparrow n(x) &= T(T \uparrow (n-1)(x)) && (n \text{ a successor ordinal}) \\
T \uparrow \omega(x) &= \cup\{\, T \uparrow n(x) \mid n < \omega \,\}
\end{aligned}
\]
We abbreviate \( T \uparrow \alpha(\bot) \) as \( T \uparrow \alpha \)
- There is some ordinal \( \alpha \) such that \( T \uparrow \alpha = \text{lfp}(T) \). The sequence \( T \uparrow 0, T \uparrow 1, ..., \text{lfp}(T) \) is the Kleene sequence for \( T \)
- In a finite lattice the Kleene sequence for a monotonic operator \( T \) is finite
Towards a Fixpoint Semantics for LP – Fixpoint Basics (Contd.)
- A subset \( Y \) of a poset \( X \) is an (ascending) chain iff \( \forall y, y' \in Y, \, y \leq y' \lor y' \leq y \)
- A complete lattice \( X \) is ascending chain finite (or Noetherian) if all ascending chains are finite
- In an ascending chain finite lattice the Kleene sequence for a monotonic operator \( T \) is finite
A Fixpoint Semantics for Logic Programs, and Equivalences
- The Immediate consequence operator $T_P$ is a mapping: $T_P : I_P \rightarrow I_P$ defined by:
$$T_P(I) = \{A \in B_P \mid \exists C \in \text{ground}(P), C = A \leftarrow L_1, \ldots, L_n \text{ and } L_1, \ldots, L_n \in I\}$$
(in particular, if $(A \leftarrow) \in P$, then every element of $\text{ground}(A)$ is in $T_P(I), \forall I$).
- $T_P$ is monotonic, so it has a least fixpoint $I^*$ so that $T_P(I^*) = I^*$, which can be obtained by applying $T_P$ iteratively starting from the bottom element of the lattice (the empty interpretation).
- (Characterization Theorem) [Van Emden and Kowalski]
A program $P$ has a Herbrand model $H_P$ such that:
- $H_P$ is the least Herbrand Model of $P$.
- $H_P$ is the least fixpoint of $T_P$ ($lfp T_P$).
- $H_P = T_P \uparrow \omega$.
I.e., $\text{least model semantics} (H_P) \equiv \text{fixpoint semantics} (lfp T_P)$
- Because it gives us some intuition on how to build $H_P$, the least fixpoint semantics can in some cases (e.g., finite models) also be an operational semantics (e.g., in deductive databases).
Example:
\[ P = \{ \text{p}(f(X)) \leftarrow \text{p}(X). \]
\[ \text{p}(a). \]
\[ \text{q}(a). \]
\[ \text{q}(b). \} \]
\[ U_P = \{ a, b, f(a), f(b), f(f(a)), f(f(b)), \ldots \} \]
\[ B_P = \{ p(a), p(b), q(a), q(b), p(f(a)), p(f(b)), q(f(a)), \ldots \} \]
\[ I_P \text{ is the set of all subsets of } B_P \]
\[ H_P = \{ q(a), q(b), p(a), p(f(a)), p(f(f(a))), \ldots \} \]
\[ T_P \uparrow 0 = \emptyset \]
\[ T_P \uparrow 1 = \{ p(a), q(a), q(b) \} \]
\[ T_P \uparrow 2 = \{ p(a), q(a), q(b), p(f(a)) \} \]
\[ T_P \uparrow 3 = \{ p(a), q(a), q(b), p(f(a)), p(f(f(a))) \} \]
\[ \ldots \]
\[ T_P \uparrow \omega = H_P \]
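The iteration above can be reproduced with a small sketch (my own encoding, not from the slides): since $B_P$ is infinite here, the Herbrand universe is cut off at a fixed nesting depth, which makes the Kleene iteration finite:

```python
# Iterating the immediate-consequence operator T_P for
#   P = { p(f(X)) <- p(X).  p(a).  q(a).  q(b). }
# over a depth-bounded fragment of the Herbrand universe.

DEPTH = 3

def universe(depth):
    # ground terms a, b, f(a), f(b), f(f(a)), ... up to the depth bound
    ts = ["a", "b"]
    for _ in range(depth):
        ts += [f"f({t})" for t in ts]
    return set(ts)

U = universe(DEPTH)

def T_P(I):
    out = {("p", "a"), ("q", "a"), ("q", "b")}   # the facts
    for t in U:                                   # rule p(f(X)) <- p(X)
        if ("p", t) in I:
            out.add(("p", f"f({t})"))
    return out

I = set()                 # T_P up 0 = the empty interpretation
while T_P(I) != I:        # iterate to the (bounded) least fixpoint
    I = T_P(I)

print(("p", "f(f(a))") in I)   # True:  p(f(f(a))) is in H_P
print(("p", "b") in I)         # False: p(b) is not a logical consequence
```

The successive values of `I` correspond exactly to the interpretations $T_P \uparrow 1, T_P \uparrow 2, \ldots$ listed above, truncated by the depth bound.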
Accessing Global Variables
- The bindings of global variables of an expression or a function are kept in a vector in the heap (Global Vector).
- They are addressed consecutively starting with 0.
- When an F-object or a C-object is constructed, the Global Vector for the function or the expression is determined and a reference to it is stored in the `gp`-component of the object.
- During the evaluation of an expression, the (new) register GP (Global Pointer) points to the actual Global Vector.
- In contrast, local variables should be administered on the stack ...
$\Rightarrow$ General form of the address environment:
$$
\rho : Vars \to \{L, G\} \times \mathbb{Z}
$$
Accessing Local Variables
Local variables are administered on the stack, in stack frames.
Let $e \equiv e'\ e_0\ \ldots\ e_{m-1}$ be the application of a function $e'$ to arguments $e_0, \ldots, e_{m-1}$.
Warning:
The arity of $e'$ need not be $m$ :-)
- If $e'$ evaluates to an $n$-ary function $f$, then $f$ may receive fewer than $n$ arguments (under-supply);
- $f$ may also receive more than $n$ arguments, if its result type $t$ is a functional type (over-supply).
Possible stack organisations:
Addressing of the arguments can be done relative to FP.
- The local variables of $e'$ cannot be addressed relative to FP.
- If $e'$ is an $n$-ary function with $n < m$, i.e., we have an over-supplied function application, the remaining $m - n$ arguments will have to be shifted.
If \( e' \) evaluates to a function, which has already been partially applied to the parameters \( a_0, \ldots, a_{k-1} \), these have to be sneaked in underneath \( e_0 \):
Alternative:
+ The further arguments $a_0, \ldots, a_{k-1}$ and the local variables can be allocated above the arguments.
Addressing of arguments and local variables relative to FP is no more possible. (Remember: $m$ is unknown when the function definition is translated.)
Way out:
- We address both, arguments and local variables, relative to the stack pointer $SP$ !!!
- However, the stack pointer changes during program execution...
• The difference between the current value of \( SP \) and its value \( sp_0 \) at the entry of the function body is called the stack distance, \( sd \).
• Fortunately, this stack distance can be determined at compile time for each program point, by simulating the movement of the \( SP \).
• The formal parameters \( x_0, x_1, x_2, \ldots \) successively receive the non-positive relative addresses \( 0, -1, -2, \ldots \), i.e., \( \rho x_i = (L, -i) \).
• The absolute address of the \( i \)-th formal parameter consequently is
\[
sp_0 - i = (SP - sd) - i
\]
• The local \texttt{let}-variables \( y_1, y_2, y_3, \ldots \) will be successively pushed onto the stack:
• The $y_i$ have positive relative addresses 1, 2, 3, ..., that is: $\rho y_i = (L, i)$.
• The absolute address of $y_i$ is then $sp_0 + i = (SP - sd) + i$
With CBN, we generate for the access to a variable:
\[
\text{code}_V \ x \ \rho \ sd \ = \ \text{getvar} \ x \ \rho \ sd \\
\text{eval}
\]
The instruction \text{eval} checks whether the value has already been computed or whether its evaluation has yet to be done (\(\implies\) will be treated later :-)
With CBV, we can just delete \text{eval} from the above code schema.
The (compile-time) macro \text{getvar} is defined by:
\[
\text{getvar} \ x \ \rho \ sd \ = \ \text{let} \ (t, i) = \rho \ x \ \text{in} \\
\text{match} \ t \ \text{with} \\
L \to \text{pushloc} \ (sd - i) \\
| \ G \to \text{pushglob} \ i \\
\text{end}
\]
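As a sketch, the compile-time macro getvar can be written down directly (Python stands in for the meta-language here; the instruction names are the ones used above):

```python
# Compile-time getvar: rho maps a variable to ('L', i) for locals or
# ('G', i) for globals; sd is the current stack distance.

def getvar(x, rho, sd):
    tag, i = rho[x]
    if tag == "L":
        return f"pushloc {sd - i}"   # local: addressed relative to SP via sd
    else:
        return f"pushglob {i}"       # global: index into the Global Vector

rho = {"b": ("L", 1), "c": ("G", 0)}
print(getvar("b", rho, 1))  # pushloc 0
print(getvar("c", rho, 2))  # pushglob 0
```

The two calls reproduce the example below: the same variable `b` compiles to different pushloc offsets as the stack distance changes, while the offset of the global `c` is independent of sd.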
The access to local variables is implemented by the instruction pushloc n:
\[ \text{pushloc } n: \quad S[SP+1] = S[SP - n]; \; SP++; \]
Correctness argument:
Let \( sp \) and \( sd \) be the values of the stack pointer resp. stack distance \textbf{before} the execution of the instruction. The value of the local variable with address \( i \) is loaded from \( S[a] \) with
\[
a = sp - (sd - i) = (sp - sd) + i = sp_0 + i
\]
... exactly as it should be \( :-) \)
The access to global variables is much simpler:
\[ \text{pushglob } i \]
\[
\begin{align*}
\text{SP} &= \text{SP} + 1; \\
\text{S[SP]} &= \text{GP} \to \text{v}[i];
\end{align*}
\]
Example:
Regard \( e \equiv (b + c) \) for \( \rho = \{ b \mapsto (L, 1), c \mapsto (G, 0) \} \) and \( \text{sd} = 1 \).
With CBN, we obtain:
\[
\begin{align*}
\text{code}_V\ e\ \rho\ 1 \;=\; & \ \text{getvar}\ b\ \rho\ 1 && = \text{pushloc } 0 \\
& \ \text{eval} \\
& \ \text{getbasic} \\
& \ \text{getvar}\ c\ \rho\ 2 && = \text{pushglob } 0 \\
& \ \text{eval} \\
& \ \text{getbasic} \\
& \ \text{add} \\
& \ \text{mkbasic}
\end{align*}
\]
15 let-Expressions
As a warm-up let us first consider the treatment of local variables :-)
Let \( e \equiv \textbf{let} \ y_1 = e_1 \ \textbf{in} \ \ldots \ \textbf{let} \ y_n = e_n \ \textbf{in} \ e_0 \) be a nested let-expression.
The translation of \( e \) must deliver an instruction sequence that
- allocates local variables \( y_1, \ldots, y_n \);
- in the case of \( \text{CBV} \) evaluates \( e_1, \ldots, e_n \) and binds the \( y_i \) to their values;
\( \text{CBN} \) constructs closures for the \( e_1, \ldots, e_n \) and binds the \( y_i \) to them;
- evaluates the expression \( e_0 \) and returns its value.
Here, we consider the non-recursive case only, i.e. where \( y_j \) only depends on \( y_1, \ldots, y_{j-1} \). We obtain for \( \text{CBN} \):
\[
\text{code}_V e \rho \text{ sd} = \text{code}_C e_1 \rho \text{ sd} \\
\text{code}_C e_2 \rho_1 (\text{sd} + 1) \\
\ldots \\
\text{code}_C e_n \rho_{n-1} (\text{sd} + n - 1) \\
\text{code}_V e_0 \rho_n (\text{sd} + n) \\
\text{slide n} \quad \text{// deallocates local variables}
\]
where \( \rho_j = \rho \oplus \{ y_i \mapsto (L, \text{sd} + i) \mid i = 1, \ldots, j \} \).
In the case of \text{CBV}, we use \text{code}_V for the expressions \( e_1, \ldots, e_n \).
Warning!
All the \( e_i \) must be associated with the same binding for the global variables!
Example:
Consider the expression
\[ e \equiv \text{let } a = 19 \text{ in let } b = a \times a \text{ in } a + b \]
for \( \rho = \emptyset \) and \( sd = 0 \). We obtain (for CBV):
0 loadc 19
1 mkbasic
1 pushloc 0
2 getbasic
2 pushloc 1
3 getbasic
3 mul
2 mkbasic
2 pushloc 1
3 getbasic
3 pushloc 1
4 getbasic
4 add
3 mkbasic
3 slide 2
The instruction \textit{slide} \( k \) deallocates again the space for the locals:
\begin{align*}
S[SP-k] &= S[SP]; \\
SP &= SP - k;
\end{align*}
16 Function Definitions
The definition of a function $f$ requires code that allocates a functional value for $f$ in the heap. This happens in the following steps:
- Creation of a Global Vector with the binding of the free variables;
- Creation of an (initially empty) argument vector;
- Creation of an F-Object, containing references to these vectors and the start address of the code for the body;
Separately, code for the body has to be generated.
Thus:
\[
\text{\texttt{code}}_V (\text{\texttt{fun}} \ x_0 \ldots x_{k-1} \to e) \ \rho \ \text{sd} = \begin{aligned}
&\text{getvar } z_0 \ \rho \ \text{sd} \\
&\text{getvar } z_1 \ \rho \ (\text{sd} + 1) \\
&\ldots \\
&\text{getvar } z_{g-1} \ \rho \ (\text{sd} + g - 1) \\
&\text{mkvec } g \\
&\text{mkfunval } A \\
&\text{jump } B \\
&A : \ \text{targ } k \\
&\text{\texttt{code}}_V e \ \rho' \ 0 \\
&\text{return } k \\
&B : \ \ldots
\end{aligned}
\]
where \[\{z_0, \ldots, z_{g-1}\} = \text{free}(\text{\texttt{fun}} \ x_0 \ldots x_{k-1} \to e)\]
and \[\rho' = \{x_i \mapsto (L, -i) \mid i = 0, \ldots, k - 1\} \cup \{z_j \mapsto (G, j) \mid j = 0, \ldots, g - 1\}\]
mkvec g:
    h = new (V, g);
    SP = SP - g + 1;
    for (i=0; i<g; i++)
        h→v[i] = S[SP + i];
    S[SP] = h;

mkfunval A:
    a = new (V, 0);
    S[SP] = new (F, A, a, S[SP]);
Example:
Regard \( f \equiv \text{fun} b \rightarrow a + b \) for \( \rho = \{a \mapsto (L, 1)\} \) and \( \text{sd} = 1 \).
code \( f \rho 1 \) produces:
1 pushloc 0
2 mkvec 1
2 mkfunval A
2 jump B
0 A: targ 1
0 pushglob 0
1 eval
1 getbasic
1 pushloc 1
2 eval
2 getbasic
2 add
1 mkbasic
1 return 1
2 B: ...
The secrets around \( \text{targ} \; k \) and \( \text{return} \; k \) will be revealed later :-)
Function applications correspond to function calls in C.
The necessary actions for the evaluation of $e' e_0 \ldots e_{m-1}$ are:
- Allocation of a stack frame;
- Transfer of the actual parameters, i.e. with:
- CBV: Evaluation of the actual parameters;
- CBN: Allocation of closures for the actual parameters;
- Evaluation of the expression $e'$ to an F-object;
- Application of the function.
Thus for CBN:
\[
\begin{align*}
\text{code}_V\,(e'\ e_0\ \ldots\ e_{m-1})\ \rho\ sd \;=\; & \ \text{mark } A \\
& \ \text{code}_C\ e_{m-1}\ \rho\ (sd+3) \\
& \ \ldots \\
& \ \text{code}_C\ e_0\ \rho\ (sd+m+2) \\
& \ \text{code}_V\ e'\ \rho\ (sd+m+3) \\
& \ \text{apply} \\
A: & \ \ldots
\end{align*}
\]
To implement CBV, we use code\textsubscript{V} instead of code\textsubscript{C} for the arguments \(e_i\).
**Example:** For \((f\ 42)\), \(\rho = \{f \mapsto (L, 2)\}\) and \(sd = 2\), we obtain with CBV:
2 mark A
5 loadc 42
6 mkbasic
6 pushloc 4
7 apply
3 A: ...
A Slightly Larger Example:
\[
\text{let } a = 17 \text{ in let } f = \text{fun } b \to a + b \text{ in } f \ 42
\]
For CBV and \( sd = 0 \) we obtain:
0 loadc 17
1 mkbasic
1 pushloc 0
2 mkvec 1
2 mkfunval A
2 jump B
0 A: targ 1
0 pushglob 0
1 getbasic
1 pushloc 1
2 getbasic
2 add
1 mkbasic
1 return 1
2 B: mark C
5 loadc 42
6 mkbasic
6 pushloc 4
7 apply
3 C: slide 2
For the implementation of the new instruction, we must fix the organization of a stack frame:
Different from the CMa, the instruction mark A already saves the return address:
\[
\begin{align*}
S[SP+1] &= GP; \\
S[SP+2] &= FP; \\
S[SP+3] &= A; \\
FP &= SP = SP + 3;
\end{align*}
\]
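On a Python-list stack this reads as follows (a sketch with FP kept as a 0-based index into the list; names mine):

```python
def mark(S, regs, A):
    # mark A: save GP, FP and the return address, then open a new frame
    S.append(regs["GP"])      # S[SP+1] = GP
    S.append(regs["FP"])      # S[SP+2] = FP
    S.append(A)               # S[SP+3] = A
    regs["FP"] = len(S) - 1   # FP = SP

regs = {"GP": "global_vec", "FP": -1}
S = []
mark(S, regs, A=99)
print(S, regs["FP"])  # ['global_vec', -1, 99] 2
```

Three stack cells per call explain why the stack distance jumps by 3 at each `mark` in the examples above (e.g. from 2 to 5).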
The instruction `apply` unpacks the F-object, a reference to which (hopefully) resides on top of the stack, and continues execution at the address given there:
h = S[SP];
if (H[h] != (F,_,_,_))
    Error "no fun";
else {
    GP = h→gp; PC = h→cp;
    for (i=0; i<h→ap→n; i++)
        S[SP+i] = h→ap→v[i];
    SP = SP + h→ap→n - 1;
}
Warning:
- The last element of the argument vector is the last to be put onto the stack. This must be the first argument reference.
- This should be kept in mind, when we treat the packing of arguments of an under-supplied function application into an F-object !!!
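Putting the pieces together, apply can be sketched on a Python-list stack (F-objects as a small class; all names are mine, not the machine's):

```python
from dataclasses import dataclass, field

@dataclass
class FObj:
    cp: int                                   # code pointer (body address)
    gp: list                                  # Global Vector
    ap: list = field(default_factory=list)    # already-applied arguments

def apply(S, regs):
    h = S.pop()
    if not isinstance(h, FObj):
        raise RuntimeError("no fun")
    regs["GP"], regs["PC"] = h.gp, h.cp
    # push the saved arguments; the last vector element ends up on top,
    # i.e. it must be the first argument reference (cf. the warning above)
    S.extend(h.ap)

regs = {"GP": None, "PC": 0}
S = ["arg1", FObj(cp=42, gp=["a"], ap=["arg0"])]
apply(S, regs)
print(S, regs["PC"])  # ['arg1', 'arg0'] 42
```

The F-object on top of the stack is replaced by its saved arguments, matching the SP update `SP = SP + n - 1` in the instruction's definition.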
Icarus’ predicament: Managing the pathologies of overspecification and overdesign
Alex Coman a, Boaz Ronen b, *
a Academic College of Tel Aviv-Jaffa, Israel
b Tel Aviv University, Faculty of Management, 69978 Tel Aviv, Israel
Received 30 December 2008; received in revised form 21 April 2009; accepted 5 May 2009
Abstract
The phenomenon of overspecification and overdesign is well known in all industries: developing features that are not needed by the customer causes excess development efforts, missed due dates, terminated projects and higher lifecycle costs. The paper defines the phenomena, exploring inherent causes and prescribes solutions for both business-to-business and business-to-customer industries. It presents illustrative cases of overspecification and overdesign, proposes a self-assessment to determine the severity of these phenomena in an organization and resolves the conflicts driving these phenomena. Solutions suggested include adapting Simon’s satisficer approach, resolving the marketing conflict by focusing on the 20% of features that account for 80% of the value, breaking the assumption that overspecification is beneficial for future growth potential, resolving the product manager’s conflict via a global system view, implementing the 25/25 principle, freezing and stabilizing the specifications, constraining developer time to eliminate spontaneous overdesign, and piecemeal feature launch.
Keywords: Overspecification; Overdesign; The theory-of-constraints (TOC); 25/25; Research and development; Project management; Product management; ARENA
1. Introduction
Ronen and Pass (2008) define the problems of overspecification and overdesign: “Overspecification is defining product or service specifications beyond the actual needs of the customer or the market. Overdesign is designing and developing products or services beyond what is required by the specifications and/or the requirements of the customer or the market”. The phenomenon of overspecification usually originates in the interaction between R&D and marketing staff. Marketing staff members face pressure to bring the product to the market as quickly as possible, while they may not be fully acquainted with the customer or market requirements. Therefore, marketing quite often deliberately defines excessive development requirements in order to leave all options open. There is also a hidden understanding that some of the requirements will eventually be downgraded, because the full set of marketing requirements is difficult (and sometimes impossible) to attain. There is sometimes also pressure from the customer, and a desire to increase the appeal of the product to a specific customer or to the market as a whole. This broad definition of overspecification includes tight tolerances, unnecessary features and overwhelming complexity. It also includes “artificial complexity” – a term used to describe the needless over-engineered complexity that exists in various systems, products and components.
Overdesign occurs when developers, especially inexperienced ones, design products that are expected to satisfy every imaginable whim of every potential customer. They also have a strong technology drive that leads them to try to develop products that are at the forefront of technology (state-of-the-art). Moreover, in other situations, the developers desire to create options for future extensions of the product (growth potential). As a result, there is a silent conspiracy (a shared interest) by marketing and R&D to introduce overspecification and overdesign into the development of new products. Contrary to the marketing-production conflict, where marketing pushes for more features and versions while production pushes in the opposite direction for standardized off-the-shelf products, there are no checks-and-balances in the marketing-R&D interface. Both marketing and R&D push for more features and performance. Moreover, in certain cases the interface between marketing and R&D is fuzzily defined, adding to the overspecification and overdesign pathology.
Tight tolerances of dimensions (e.g. length, width) and performance (e.g. power, gain) are yet another manifestation of the overspecification pathology in R&D. Tight tolerances do not guarantee the delivery of a better product or service. Time pressures on R&D and the difficulty in conforming to excessive tolerances often lead to delays and extra costs. In cases where the delay is excessive, the decision to waive tolerances is taken at a lower level. Thus overspecification may actually lead to underperformance.
The same phenomena of overspecification and overdesign described above for R&D departments also occur in the development of information technology (IT) applications.
Based on our work with dozens of R&D organizations and departments worldwide, more than 25% of development efforts are invested in issues and activities that do not add value, and may be considered garbage time (Ronen and Pass, 2008).
In the marketing literature, Surowiecki (2007) defines the phenomenon of “feature creep”, referring to the excessive addition of features resulting in products that are mind-boggling. Rust et al. (2006) found that when consumers were given a choice of three models of a digital device, of varying complexity, more than 60% chose the one with the most features. Then, when the subjects were given the chance to customize their product, choosing from 25 features, they chose twenty features on average. Rust et al. (2006) found that consumers prefer to purchase feature-loaded offerings. Once they start using their purchase, however, the feature overload prevents them from effectively operating the functions they really need. They then return the item, and take their business elsewhere. Overspecification and overdesign thus result in costly returns, lost repeat sales, and customer defection.
den Ouden (2006) found that Americans who returned a product that was too complicated for them had spent, on average, just 20 min with it before giving up. Lu et al. (2007) found that at least half of returned products had nothing wrong with them. Consumers just could not figure out how to use them.
Lu et al. (2007) assert that “for highly innovative products the actual product use is often very uncertain”. They emphasize that “Under the time-to-market pressure, it is increasingly important to take into consideration the significant factors that determine product use in the early product development process”.
Section 2 of the paper presents real-life cases where overspecification or overdesign destroyed value. Section 3 analyzes the causes for the overspecification and overdesign phenomena and presents the conflicts driving them. Section 4 presents solutions for the overspecification and overdesign problem. Section 5 presents a procedure to diagnose the extent to which an organization is afflicted with overspecification and overdesign. Section 6 concludes the paper and calls for further research.
2. Illustrative overspecification and overdesign cases
The following examples, encountered during our combined engineering, research, teaching, management and consulting experience of over 70 years, illustrate value destruction as a result of overspecification and overdesign:
Company A – a NASDAQ traded developer of WiMax telecom solutions, won a $42 million contract from a Japanese telephone service provider – to deploy Japan’s first WiMAX network across Tokyo. The deployment of the first-of-a-kind system which offered wireless telephone and internet connection to residential and business customers ran into difficulties due to low reliability. Post-mortem analysis revealed that the cause was a feature developed for future applications. This feature was not needed to complete the $42 million contract. The feature was not operational at the time but caused the system to crash frequently. Removal of the unneeded feature solved the problem. This phenomenon whereby a feature that is planted in the product for hypothetical future applications aborts short-term functionality, leading to product termination and in some cases to the company’s closure, was observed in several other cases.
Company E – a cellular handset manufacturer came up with a concept that was futuristic for its time: a cellular phone with a game platform and a multimedia console – music and video. The design was so innovative that the project kept slipping; the product was not delivered to customers, eventually leading to a crisis situation. The decision reached in order to salvage the project was to remove the multimedia features. The result was the critically delayed launch of a mediocre, me-too, product. To make things worse, the product’s platform was exorbitantly expensive since it was designed to support a highly demanding performance envelope. It had a powerful multimedia processor, large memory capacity and a beefed up power package to drive it. The resulting product was a market failure. This pathology of “too little; too late; too costly” is manifested in three stages: (1) ambitious overspecification and overdesign of a “killer” product; (2) development is acutely delayed, and the product reaches a crisis management stage; (3) the product’s features are mercilessly tapered, and the product is finally launched. The launched product’s features are unimpressive (much energy was spent on features that did not make it to the release) – too little; it is a “me-too” product – too late; and its platform (processor, power supply, etc.) is expensive since it was designed to support its ambitious (non-existent) features – too costly.
Microsoft – in its efforts to enter the software security market, the company acquired an innovative startup that specialized in data security. In the first stages of the product’s specification the company went through an intensive brainstorming process, the result of which was a trailblazing design. The design did not just protect from threats but, among a broad list of features, also assured that illegal software was not run on the computer. The product, which was aimed at penetrating the data security market, was significantly delayed. A crisis management session cut out many of the features, launching a mediocre, me-too product.
Microsoft Word and Excel provide yet another bizarre example of unneeded features. Microsoft Word has a hidden pinball application and Microsoft Excel has a flight simulator stealthily incorporated into the product. Not only do these applications contribute nothing to Microsoft’s value; they result in wasted developer effort, distraction, increased memory demand and a radical increase in the product’s complexity.
Company P – a cellular service provider initiated a strategic program to open “store-within-a-store” – distribution centers within drugstores. These were sales kiosks opened inside retailers such as drugstores, home products chains, etc. The legal department was in charge of closing the contract with the retailers. Time and again, it took longer to close the overspecified contract than it took to physically install the kiosk. The result was that the legal contracts were finalized after the business was already functional de-facto.
Bradley tank (Burton, 1993) – the Bradley tank was originally developed as a troop carrier to transport eleven soldiers to the battlefield. The development process exceeded 20 years and added to the Bradley a myriad of other functions such as: a missile carrier, amphibious features, weapons systems and more. As a result the Bradley turned out to be more than a tank.
Gutenberg (Britannica, 2009) – the inventor of movable metal type had a mission: to emulate the writing of contemporary scribes. In his effort to reach perfection – improving quality and adding extra features such as printing in several colours – Gutenberg’s project required more and more financing to complete. Gutenberg borrowed money from a lawyer, Johann Fust, making him a business partner. This perfectionism resulted in bankruptcy. Fust and his associate took over the business and printed the first fine books without Gutenberg’s overspecified features.
Legal & General (2009) – a mortgage financial services provider, required applicants to fill out an overspecified “full life application” form which took two hours to complete. After reducing the form to a simpler, straightforward application, Legal & General witnessed a 40% growth in applications submitted, a 13% increase in immediate acceptance applications, a 9% reduction in applications not being processed, and an overall growth in profitability.
Software overspecification – from the authors’ experience most software applications developed in-house severely suffer from overspecification and overdesign. Unneeded or nice-to-have features are added to the product to be on the safe side or for future growth. Functionality is added with no economic analysis or justification. The end result is an excessively complex product, severe project overrun in terms of time and money and frequent project termination with no product delivery at all.
Consumer goods – manufacturers of consumer products continuously add new features. The result of this overspecification and overdesign is a severe reduction in product usability. By cramming in features that are seldom used, manufacturers make it difficult for users to focus on the frequently used features they actually need. This is common with audio and video remote controls that have over 50 buttons, where activating common features such as volume control and station search becomes needlessly cumbersome. Washing machine manufacturers similarly add special programs and features that make their machines confusing for typical users, who use only one or two programs.
Table 1 summarizes the case studies. It illustrates the various pathologies, from delayed launch through excessive complexity to loss of the entire company.
We have observed that products can be classified according to their feature density. We identify three feature density zones (Fig. 1): the inferior product zone, the effective product zone, and the overspec product zone. In a given competitive arena, products that do not meet minimal customer requirements are considered inferior. They are not competitive and hence are value destroyers. Effective products are products that satisfy important customer requirements. Overspec products, crammed with features that do not deliver value, destroy value by increasing costs, reducing throughput and delaying product launch. The overspec condition occurs when executives get carried away with the belief that “you can’t have too much of a good thing”. By contrast, the veteran product manager is familiar with the maxim: “a perfect product is the enemy of a good product”.
3. Sources for overspecification and overdesign
Better is notoriously the enemy of good. Why then do marketers and engineers keep falling into this trap time and again? Overspecification and overdesign stem from the characteristics of human behaviour and from organizational measurement and compensation:
1. **Optimizer approach:** in many cases the root-cause for both overspecification and overdesign lies in the phenomenon defined by Nobel laureate Herbert A. Simon as the optimizer approach (Simon, 1957; Ronen and Pass, 2008). Simon revolutionized management by identifying a managerial phenomenon which causes decision-making failures. He claimed that executives, engineers, and decision makers strive to be optimizers, that is, to achieve the best possible solution, without consideration of time constraints.
2. **Option overkill:** for both overspecification and over-design the tendency of marketing and engineering to anticipate the product’s growth potential has been observed (Rust et al., 2006). Features are crammed into the product to assure compliance with potential future demand. All features that can be conceived along the product’s lifecycle are incorporated in its first version.
3. **One size fits all:** the product is developed to comply with requirements from radically different customer segments. When a standard, off-the-shelf product is designed, rather than developing distinct product versions, the result is expensive and highly complex. The authors encountered a hi-tech company that developed a universal power-supply component. The product was designed to accept a broad range of input power sources and deliver a broad range of power output, aiming to simplify the development, logistics and testing of the product. The result was an overly complex product which could not receive certification-of-compliance. Moreover, it carried a significantly higher price tag than single-purpose power supplies and was therefore scrapped.
4. **Lack of knowledge and leadership:** marketing people do not know with certitude which features will generate market demand and will differentiate the product in the customer’s eyes. Similarly, engineering people do not know with certainty which standards, protocols, components and features will dominate the future. As a result both marketing and R&D people spread their bets across all slots in the roulette wheel. Many marketing and R&D executives do not have the leadership to determine which features should be included in the product’s initial release, which should be deferred to future releases along the product’s roadmap, and which features should be eliminated altogether.
5. **Measurement, incentives and compensation:** marketing and R&D executives should be measured by the value contribution of the product or project along its lifecycle. Unfortunately in many cases marketing people are measured by their creativity and therefore concentrate on dreaming up as many potential segments and features as possible. They are measured by the product’s media exposure and at best the short-run acceptance of the product. They therefore often tend to concentrate on exotic applications. R&D people are measured by the traditional performance trinity: scope, cost and time. Of these three elements, scope is most tangible during the initial product specification. Hence, R&D people initially incorporate as many features as they can possibly imagine. As the product’s launch is delayed, the product enters crisis-management mode and features are eliminated to shorten time-to-market. The features that are eliminated are often important, value-creating features that were delayed due to the waste of resources on marginal features that were not eliminated early enough (den Ouden, 2006).
6. **Organizational culture:** engineering schools train engineers to deliver the “best” product from a technological perspective. Only a minority of schools stress design-to-cost and teach product lifecycle principles; the majority of young engineers do not see their objective as increasing the company’s value through their product or project. Engineers’ self-esteem and peer-appreciation are derived from technological brilliance rather than value delivery breakthrough (Rust et al., 2006). The same applies to marketing people. Their goal is to be more creative and expose the product to the media. Their culture does not involve value creation. Since both R&D and marketing people have motivation for overspecification and overdesign, a “conspiracy” is forged between the “customer” and the “supplier”. This is true both for solicited projects – where a specific customer orders the product – and for unsolicited projects – where the product is conceived by the marketing department.
---
Table 1
Summary of the overspecification and overdesign cases.
<table>
<thead>
<tr>
<th>Case</th>
<th>Industry</th>
<th>Pathology</th>
</tr>
</thead>
<tbody>
<tr>
<td>Company A</td>
<td>New product development</td>
<td>Future feature kills project</td>
</tr>
<tr>
<td>Company E</td>
<td>Consumer goods</td>
<td>Late launch due to overspecification results in a mediocre and overpriced product</td>
</tr>
<tr>
<td>Microsoft</td>
<td>Software development</td>
<td>Extreme overspecification and resulting crisis launch an outdated antivirus and protection product</td>
</tr>
<tr>
<td>Microsoft</td>
<td>Software development</td>
<td>Excessive complexity as a result of overspecification prevents timely updates and stabilization</td>
</tr>
<tr>
<td>Company P</td>
<td>Retail services</td>
<td>Perfectionist legal process is completed later than actual business transaction</td>
</tr>
<tr>
<td>Bradley tank</td>
<td>Defence</td>
<td>Product loses focus, diverges from original purpose and development lasts for years</td>
</tr>
<tr>
<td>Gutenberg</td>
<td>Invention/new product development</td>
<td>Overspecification results in bankruptcy and loss of intellectual property</td>
</tr>
<tr>
<td>Legal & General</td>
<td>Mortgage bank</td>
<td>Simplified application form adds value</td>
</tr>
<tr>
<td>Software applications</td>
<td>Software development</td>
<td>“Growth potential” with no economic justification</td>
</tr>
<tr>
<td>Consumer goods</td>
<td>Consumer goods</td>
<td>Feature overload damages usability</td>
</tr>
</tbody>
</table>
---
Fig. 1. Feature density zones.
7. **Grey R&D:** every R&D department is characterized by a certain amount of “grey R&D” – unauthorized projects or features that are developed by highly motivated R&D people. The R&D department is a “permanent bottleneck” (Ronen and Pass, 2008) – i.e. demand for their services permanently exceeds supply. The authors’ experience shows that the demand for software and hardware applications and products is three to five times greater than the resources available. Thus, the formal strategic gating process eliminates low priority projects. In certain cases developers excited by new technologies and infrastructure develop these features in a covert mode – under the management radar, with no control or supervision. This results in product overdesign.
8. **Manipulative budgeting:** new technologies, infrastructure, platforms and feature introduction should normally be part of the R&D budget. However, in certain cases the bulk of the budget is assigned to customer-specific projects. Thus, when R&D people wish to pursue new technologies they manipulate projects to use these technologies, even when they are not needed from the customer’s point of view. This results in features that are unneeded for the project.
9. **Inertia:** this pathology was defined by Christensen (2003) as the “Innovator’s Dilemma”. Christensen describes innovators locked onto the improvement of a single performance measure when it no longer makes a difference to the customer. Engineers and computer scientists perpetually wish to improve performance. As a result they release improved products on a continuous basis. In some cases inertia causes the engineers to pursue performance improvement even when the improvement cost exceeds the value to the customer. This phenomenon is sometimes intensified by competitive market conditions. Examples include the race between Intel and AMD to improve processor clock rates. Intel eventually realized that it had reached overspecification and moved to another, more important parameter – namely power consumption.
10. **The misconception of the linearity of effort:** common human thought is linear, leading to the belief that the addition of each new feature results in a proportional increase in effort. However, complexity added to the system results in an exponential increase in effort. New features complicate the product’s architecture, compete for common constrained resources and cause a multitude of unanticipated quality problems. As a result, developers miscalculate the effort associated with added features, underestimating their impact on the project. Thus, for over 50 years Eli Lilly locked itself into an effort to manufacture purer and purer insulin. The company’s efforts eventually produced Humulin – 100% perfectly pure human insulin – only to discover that the purity differentiation was insignificant, since only a fraction of people develop insulin resistance. Thus highly purified pork insulin is good enough for the majority of the population. At this level the race for purity becomes insignificant.
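The nonlinearity argument above can be made concrete with a toy effort model. This is only an illustrative sketch, not a model from the paper: it assumes, purely for illustration, that each feature costs one unit of effort on its own plus a fixed interaction cost for every pair of features that must coexist.

```python
def effort(n_features, unit=1.0, interaction=0.2):
    """Toy effort model (hypothetical parameters): each feature costs
    `unit` on its own, plus `interaction` for every pairwise
    combination of features that must work together."""
    pairs = n_features * (n_features - 1) // 2  # n choose 2 interactions
    return unit * n_features + interaction * pairs

# Doubling the feature count from 10 to 20 roughly triples the effort:
# effort(10) = 10 + 0.2 * 45  = 19.0
# effort(20) = 20 + 0.2 * 190 = 58.0
```

Even with a small interaction cost, the pairwise term quickly dominates, which is why developers who reason linearly systematically underestimate the cost of added features.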
We apply Goldratt’s conflict-resolution-diagram (Goldratt, 1991) to analyze the underlying conflicts that lead marketing and R&D to commit the overspecification and overdesign mistake (Ronen and Pass, 2008).
Fig. 2 describes the marketing organizational conflict underlying overspecification.
Box A designates the undisputed goal of increasing the company’s value. From the marketer’s point of view the conflict is between satisfying the needs of more customer segments, thus increasing throughput (box B), and reducing operating expenses and complexity (box C). The conflict is evident between satisfying more customers via a multitude of features (box D) and reducing expenses and complexity via few features (box D’).
Fig. 3 describes the R&D organizational conflict underlying overdesign.
As in Fig. 2, box A designates the goal of increasing the company’s value. This goal can be achieved by trying to gain long-term benefits (box B) or by completing the project on time (box C). The actual conflict is presented in box D vs. box D’: develop with overspec and overdesign (box D) vs. develop just-in-spec (box D’).
Fig. 4 describes the underlying personal conflict for overspecification and overdesign experienced by R&D professionals.
As before, box A designates the developer’s individual goal, which is to increase self-accomplishment. This goal can be achieved by being on the edge of technology and gaining personal satisfaction (box B), or by completing the project on time (box C). The actual conflict is presented in box D vs. box D’.
4. Solutions for overspecification and overdesign
a. The “optimizer” approach, a major source of overspecification and overdesign, is remedied by Simon’s (Simon, 1957; Ronen and Pass, 2008) “satisficer” approach. The satisficer sets a “level-of-aspiration”, a threshold he or she aspires to achieve. The objective is no longer to maximize or minimize some performance measure, but to achieve a solution that improves the measure beyond the predefined level-of-aspiration. The satisficer need not exhaustively examine all possible alternatives; he or she examines alternatives until one that satisfies the level-of-aspiration is found. Once the level-of-aspiration has been met, the satisficer may set a new one. This iterative process delivers attainable, continuous improvement. The authors’ experience shows that whenever the satisficer concept is spread within a company, it resolves a significant part of the problem: rather than seeking “the best solution”, people look for a practical solution that reaches a level-of-aspiration representing the customer’s actual needs.
b. The overspecification-marketing conflict can be resolved via the differentiation principle. The assumption that overspecified products increase throughput is often erroneous. Differentiation applying the Pareto principle (20% of the features account for 80% of the value) resolves the majority of conflicts. Eighty percent of the features can be developed just-in-spec and only 20% of the features need be incorporated in the product platform.
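The Pareto screen described above can be sketched as a small calculation: rank candidate features by customer value and keep the smallest prefix that accounts for roughly 80% of total value. The feature names and value scores below are hypothetical illustration data, not taken from the paper.

```python
def pareto_core(features, threshold=0.80):
    """Return the highest-value features whose cumulative value first
    reaches `threshold` of the total value (the platform core)."""
    ranked = sorted(features.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(features.values())
    core, cumulative = [], 0.0
    for name, value in ranked:
        core.append(name)
        cumulative += value
        if cumulative >= threshold * total:
            break
    return core

# Hypothetical value scores for a remote-control product:
features = {
    "volume control": 40, "station search": 30, "sleep timer": 10,
    "picture-in-picture": 8, "games": 6, "teletext": 4, "demo mode": 2,
}
core = pareto_core(features)
# Three of the seven features cover 80% of the value; the remaining
# four are candidates for just-in-spec treatment or elimination.
```

In this toy example the core set is a minority of the features, matching the 20/80 intuition: the platform carries the high-value few, and the long tail is deferred or dropped.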
c. The overdesign-R&D conflict can be resolved via assumption breaking (Ronen and Pass, 2008). The assumption of growth potential is often over-optimistic and naive. Technology changes rapidly while the product lifecycle becomes shorter; therefore, significantly fewer design features than expected are actually recycled into next-generation products. The organizational conflict should be resolved at the senior executive level – which features should be included at the product platform level, which should be postponed for later releases along the product’s roadmap, and which should be eliminated altogether. Too often this decision is resolved de-facto at the engineering level.
d. The personal conflict for overspecification and overdesign of the product management and development person can be relaxed through globalization (Ronen and Pass, 2008). A global view of the organizational project portfolio and the R&D human resource personalities can relax this conflict. The desire to be at the cutting edge of the technological knowledge and to achieve personal satisfaction can be met by occasionally assigning these development people to innovative high-risk projects. This enables them to occasionally face cutting edge technology and fulfill their technological interest. However, low-risk more-of-the-same projects or components within projects should follow just-in-spec principles. The conflict may also be resolved by challenging the developers with the economic need of design-to-cost.
e. 25/25 principle: The 25/25 (Ronen and Pass, 2006) principle states that management should periodically terminate 25% of the projects/products and taper (trim down) 25% of the features in the projects/products that are not terminated. While the company’s innovation process is responsible for the steady addition of products and features, no organizational function is charged with the evaluation of products and projects and the removal of “white elephants”. This results in the proliferation of projects and products and generates a “high-mix low-volume” product portfolio. During the product’s specification phase the tendency of both marketing and development is to brainstorm as many features as possible. This “conspiracy” between marketing and R&D results in overspecification and later in overdesign. The 25/25 mechanism establishes checks-and-balances for this tendency. The 25/25 is chaired by senior business executives and operationalized by marketing, sales, R&D and operations people. For a specific project, trimming down the unneeded features, as a part of a 25/25 process can reduce overspecification and overdesign dramatically.
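The 25/25 review can be sketched in code, assuming (purely for illustration) that the executive forum has already assigned a value score to each project and to each of its features; all names and numbers below are hypothetical.

```python
def review_25_25(projects):
    """Terminate the lowest-value 25% of projects, then trim the
    lowest-value 25% of features from each surviving project."""
    ranked = sorted(projects, key=lambda p: p["value"], reverse=True)
    keep = ranked[: len(ranked) - len(ranked) // 4]   # drop bottom 25%
    for p in keep:
        feats = sorted(p["features"], key=lambda f: f[1], reverse=True)
        p["features"] = feats[: len(feats) - len(feats) // 4]
    return keep

# Hypothetical portfolio: four projects, each with (feature, value) pairs.
projects = [
    {"name": "A", "value": 90, "features": [("f1", 9), ("f2", 7), ("f3", 2), ("f4", 1)]},
    {"name": "B", "value": 70, "features": [("g1", 8), ("g2", 5), ("g3", 3), ("g4", 1)]},
    {"name": "C", "value": 40, "features": [("h1", 6), ("h2", 4), ("h3", 2), ("h4", 1)]},
    {"name": "D", "value": 10, "features": [("i1", 5), ("i2", 3), ("i3", 2), ("i4", 1)]},
]
portfolio = review_25_25(projects)
# Project D is terminated; each survivor sheds its weakest feature.
```

The point of the sketch is the checks-and-balances mechanism itself: cutting is periodic and quota-driven, so the portfolio cannot grow monotonically even when every individual addition looks justified.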
f. Freeze and stabilize: Product managers tend to leave as many options open as possible throughout the project’s lifecycle – to respond to new ideas and events in the competitive arena. This results in unmanageable feature creep. Two milestones must be established: freeze – the point in time after which no changes to the product specification are accepted from marketing, and stabilize – the point in time after which only fixes are accepted from R&D. For low uncertainty projects the project is frozen at the project launch. For high uncertainty projects the product may be frozen at a later phase determined by senior management. Breakthrough projects are often frozen at a very late phase.
g. Controlling the developer: In order to reduce the engineers’ tendency for overdesign, management should construct a straitjacket – a gating mechanism containing the engineering feature explosion. Such mechanisms include design-to-launch and critical-chain buffer management (Goldratt, 1997). Typically, projects are managed with a design-to-spec priority. This results in project delays, which in turn cause further features to be added in response to competitive moves in the arena. The design-to-launch mechanism subordinates the features to the pre-defined launch date, constraining the engineers’ ability to add unplanned features. Within the project’s timetable, critical-chain buffer management further constrains feature creep. Rather than assigning spare time to individual activities controlled by engineers, critical-chain clusters all spare time in a project buffer managed by the senior project manager. Individual engineers are allocated tight activity durations, precluding them from making unauthorized feature additions.
h. Piecemeal feature launch: Today’s dynamic business arena causes multiple feature value and effort assessment mistakes. False-positive mistakes apply to features that were perceived valuable in the specification phase but proved to be redundant in the implementation phase. False-negative mistakes apply to features that were classified as white elephants during the specification phase but proved to be valuable later in the product’s lifecycle. R&D people are often detached from the real user in the field and are as a result prone to make both types of mistakes. Piecemeal feature launch enables the timely release of features while retaining the real-option to remedy gating mistakes. False-positive features are eliminated as soon as their real value is ascertained. False-negative features are reinstated as soon as the mistake is identified. This technique is particularly applicable for software features. The methodology of software-as-a-service (e.g. Google or Salesforce.com) enables gradual and granular increments of the product or the service’s features. The extreme-programming and scrum methodologies prescribe minute feature launches and frequent feature re-evaluation.
i. Using the quality-function-deployment (QFD) methodology (Yoji, 2004; Chan and Wu, 2005): QFD should be used to prioritize investments in product features. The QFD methodology starts with the customer’s quality functions – i.e. the set of features that define the product’s quality in the customer’s eyes. These features are weighed based on their importance to the customer. The quality function is deployed across the various organizational departments that are accountable for defining the quality function. These departments include development, design, purchasing, manufacturing, quality, logistics, customer support, etc. Next the product is compared with its competitors to prioritize feature value creation. The QFD methodology is incorporated into systems and software engineering lifecycle standard ISO 15288 (Chan and Wu, 2005), and into quality management standard ISO 10006 (ISO 10006, 2003).
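The prioritization step above can be sketched numerically. The following is a minimal, hypothetical illustration (the feature names and weights are invented, not taken from the QFD standards): each feature is scored by its importance to the customer and by the competitive gap, and investment priority is their product.

```python
# Hypothetical QFD-style feature prioritization sketch.
# Each feature: (customer_importance 1-5, competitive_gap 1-5).
features = {
    "battery_life": (5, 4),
    "camera_quality": (4, 2),
    "custom_themes": (2, 1),
}

def qfd_priority(importance, gap):
    """Priority grows with importance to the customer and with
    the gap versus competitors on that feature."""
    return importance * gap

# Rank features by descending priority to guide investment.
ranked = sorted(features.items(),
                key=lambda kv: qfd_priority(*kv[1]),
                reverse=True)
for name, (imp, gap) in ranked:
    print(name, qfd_priority(imp, gap))
```

In this toy scoring, a feature that matters greatly to customers and where the product lags competitors ranks first; a low-importance, low-gap feature is a candidate for trimming under the 25/25 principle.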
5. Self-assessment
The following questions are used by senior executives to assess the severity of overspecification and overdesign in their organization. The questionnaire indicates whether overspecification and overdesign are severe pathologies in the organization.
Consider the following questions:
a. Are most product milestones or projects delivered on time? If “yes” then overspecification and overdesign are not significant pathologies in your organization. If “no” then consider the following questions:
b. Are most of your products excessively ahead of your competition?
c. What percentage of effort invested in new features is targeted for long-term potential?
d. What percentage of effort invested is designated for potential new customers?
e. What percentage of effort invested is for new, potential product applications?
f. What percentage of development effort exceeds specified requirements?
g. What is the proportion of overspecification in your project?
h. What is the proportion of overdesign in your project?
i. What is the extent of excessively tight tolerances in your project?
j. What is the extent of artificial complexity?
6. Conclusions
The pathology of overspecification and overdesign is critical for organizations dealing with frequent new product introduction. Though very important, little has been published in the academic literature on this topic. This paper defines the problems of overspecification and overdesign, demonstrates their occurrence, investigates their roots and causes, and suggests practical solutions. Finally, the paper proposes a simple methodology to diagnose the severity of the phenomena in a given organization. Further research should follow this paper: empirical studies should quantify the amount of effort wasted on overspecification and overdesign in various industries and arenas; quantitative modelling studies should evaluate the dimensions that determine these pathologies and their interactions; and case studies should illustrate the phenomena.
References
The Piazza Project
Presented by:
Mengmeng Liu and Shirley Cohen
CIS 650
Agenda
- Project overview
- Basic model
- Query answering
- Mapping compositions
- Semantic Web connection
- Implementation aspects
Piazza Project Members
AnHai Doan
Oren Etzioni
Steven Gribble
Zack Ives
Alon Halevy
Jayant Madhavan
Peter Mork
Maya Rodrig
Dan Suciu
Igor Tatarinov
Piazza: Peer Data-Management
**Goal:** To enable users to share data across local or wide area networks in an ad-hoc, highly dynamic distributed architecture.
- Peers can:
- Export base data
- Provide views on base data
- Serve as logical mediators for other peers
- Every peer can be both a server and a client.
- Peers join and leave the PDMS at will.
Basic Model of PDMS
• *peer schema and peer relations*
• *stored schema and stored relations*
• Queries are posed over relations from a specific peer schema and are reformulated in terms of stored relations
• Assumptions:
– Relational data model
– Conjunctive queries (CQs) without comparison predicates
– Views refer to named queries
A PDMS example
Data integration: 1 mediated schema, $m$ mappings to sources
Peer data management system (PDMS):
- $n$ mediated peer schemas, with as few as $(n - 1)$ mappings between them, evaluated transitively
- $m$ mappings to stored relations
Schema mappings of PDMS
• Mappings between peer and stored relations
– Storage descriptions: $A : R \subseteq Q$
– Inclusion or equality
– Q is a CQ over the schema of peer A and R is a stored relation at peer A
– LAV mappings
• Mappings between different peer schemas
– Peer mappings
– GAV, LAV, GLAV mappings
Peer mappings
- Inclusion and equality mappings
- $Q_1(A_1) \subseteq Q_2(A_2)$
- a semantic mapping stating that evaluating $Q_1$ over the peers $A_1$ will always produce a subset of the answers obtained by evaluating $Q_2$ over $A_2$.
- GLAV mapping
- Definitional mappings
- Each mapping is a set of datalog rules whose head and body both contain peer relations; the head contains one atom and the body may contain several atoms.
- If a peer relation appears in the head of only one rule in the set, the rules can be written as equalities.
- Express disjunctions easily
- Exploit the full power of GAV mappings
Summary of basic model
• A PDMS is specified by
– A set of peers P1…Pn
– A set of peer schemas S1…Sm
– A mapping function from peers to peer schemas
– A set of stored relations $R_i$ at each $P_i$
– A set of peer mappings $LN$
– A set of storage descriptions $DN$
• Benefits of PDMS
– Scalability, extensibility and decentralization
– Placement of query is arbitrary
– Exploit transitive evaluation of semantic mappings
Complexity of query answering
• Query answering: Given a PDMS N, an instance of stored relations D and a query Q, find all certain answers of Q.
• Certain answers: \( \bigcap Q(I) \), where \( I \) ranges over all instances of the peer relations of PDMS N that are consistent with N and with an instance D of N’s stored relations.
• Finding all certain answers in PDMS: undecidable!
• Observation: If a PDMS only includes storage descriptions and inclusion peer mappings, and peer mappings are acyclic, then a CQ can be answered in polynomial time. (non-recursive datalog)
Query answering in PDMS
• Query answering $\supseteq$ query rewriting $=$ query reformulation
• Problem: Given a set of peer mappings and storage descriptions and a query $Q$, output $Q'$ in terms of stored relations.
• Evaluating $Q'$ will always produce only certain answers to $Q$. If all certain answers can be found in PTIME, then $Q'$ produces all of them.
• $Q'$ is the maximally-contained rewriting of $Q$.
Query reformulation algorithm
- Use a rule-goal “tree” to expand the mappings
- Goal nodes are labeled with atoms of the peer relations, rule nodes are labeled with mapping rules.
- Algorithm:
- Start with schemas being queried
- Look up mappings, expand
- Continue iteratively until queries are only over stored relations.
- Mappings in a PDMS may be a combination of LAV, GAV style mappings
- Applies unfolding for GAV
- Applies Minicon for LAV
- What about GLAV?
- A challenge to interleave them together.
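As a rough sketch of the unfolding idea (not the Piazza implementation, and ignoring variable unification, which the real algorithm must handle via unfolding for GAV and MiniCon for LAV), peer-relation goals can be expanded recursively until only stored relations remain. The relation names and rules below are toy stand-ins modeled on the running example, with every mapping treated as if it were definitional:

```python
# Toy rule-goal expansion: each peer relation maps to one
# conjunctive body; reformulation is plain recursive unfolding.
STORED = {"S1", "S2"}          # stored relations (leaves of the tree)
RULES = {                      # head relation -> body atoms
    "SameProject": ["ProjMember", "ProjMember"],
    "ProjMember": ["S1"],      # storage description
    "CoAuthor": ["S2"],        # storage description
}

def reformulate(goals):
    """Expand peer-relation goals until only stored relations remain."""
    out = []
    for g in goals:
        if g in STORED:                     # goal node over a stored relation
            out.append(g)
        else:                               # rule node: unfold its mapping
            out.extend(reformulate(RULES[g]))
    return out

print(reformulate(["SameProject", "CoAuthor"]))  # ['S1', 'S1', 'S2']
```

The real rule-goal tree interleaves this unfolding with unification of variables and with containment checks for LAV mappings, which is precisely the interleaving challenge noted above.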
An Example
Query: $Q(a_1, a_2) :- \text{SameProject}(a_1,a_2,p), \text{Author}(a_1,w), \text{Author}(a_2,w)$
Peer Mappings:
- $r_0$: $\text{SameProject}(a_1,a_2,p) \iff \text{ProjMember}(a_1,p), \text{ProjMember}(a_2,p)$
- $r_1$: $\text{CoAuthor}(a_1,a_2) \subseteq \text{Author}(a_1,w), \text{Author}(a_2,w)$
Storage Descriptions:
- $r_2$: $\text{S1}(a,p,s) \subseteq \text{ProjMember}(a,p), \text{Sched}(a,s,e)$
- $r_3$: $\text{CoAuthor}(f_1,f_2) = \text{S2}(f_1,f_2)$
Rule-Goal Tree Expansion
q: $Q(a_1, a_2) :- \text{SameProject}(a_1,a_2,p), \text{Author}(a_1,w), \text{Author}(a_2,w)$
Expanding the goal nodes of $q$ with the peer mappings ($r_0$, $r_1$) and then with the storage descriptions ($r_2$, $r_3$) yields the maximally-contained rewriting:
$Q'(a_1,a_2) :- \text{S1}(a_1,p,\_), \text{S1}(a_2,p,\_), \text{S2}(a_1,a_2)$
$\cup\; \text{S1}(a_1,p,\_), \text{S1}(a_2,p,\_), \text{S2}(a_2,a_1)$
Mapping Compositions
• Problem: combining two schema mappings into a single one
• Composition of schema mappings generalizes composition of queries
• Query composition corresponds to functional mappings
• Composition of queries implemented in most database commercial systems
• Evolution of schema mappings (GAV, LAV, GLAV, constraints)
Compositions in PDMS Setting
Problem Definition
$m_{13}$ is a composition of $m_{12}$ and $m_{23}$ if the certain answers obtained by way of $m_{13}$ for any query in a class of queries $L$ against schema $\sigma_3$ are precisely those that can be obtained by using $m_{12}$ and $m_{23}$ in sequence.
Composition Example
JJ Pickle
- Directory
- pid
- name
- wphone
- hphone
UT Austin
- Person
- pid
- name
- Phone
- number
- kind
- pid
UT System
- Addrbook
- name
- address
- Phonebook
- name
- phone
Schema Mappings
1. `Directory(pid, _, wphone, _) --> Phone(wphone, “work”, pid)`
2. `Directory(pid, _, _, hphone) --> Phone(hphone, “home”, pid)`
3. `Directory(pid, name, _, _) --> Person(pid, name)`
4. `Person(pid, name) --> Addrbook(name, _)`
5. `Person(pid, name), Phone(number, kind, pid) --> Phonebook(name, number)`
Composed Mappings
1. Directory(pid, name, _, _) --> Person(pid, name)
2. Person(pid, name) --> Addrbook(name, _)
3. Composing 1 and 2: Directory(_, name, _, _) --> Addrbook(name, _)
4. Directory(pid, name, wphone, _) --> Phone(wphone, “work”, pid), Person(pid, name)
5. Phone(number, kind, pid), Person(pid, name) --> Phonebook(name, number)
6. Composing 4 and 5: Directory(_, name, wphone, _) --> Phonebook(name, wphone)
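The sense in which composition generalizes query composition can be sketched under strong simplifying assumptions. In the toy representation below (invented for illustration, not Piazza's), a single-atom mapping just projects attribute positions of a source tuple, and composing two mappings chains the projections, mirroring Directory → Person → Addrbook:

```python
# Hypothetical single-atom mapping representation:
# (src_rel, positions, dst_rel) means a dst tuple is built from the
# given attribute positions of a src tuple.
Directory_to_Person = ("Directory", (0, 1), "Person")  # keep pid, name
Person_to_Addrbook = ("Person", (1,), "Addrbook")      # keep name

def compose(m1, m2):
    """Chain two projection mappings head-to-tail."""
    src1, pos1, dst1 = m1
    src2, pos2, dst2 = m2
    assert dst1 == src2, "mappings must chain head-to-tail"
    # Selecting pos2 from a tuple already projected by pos1 equals
    # selecting pos1[p] from the original tuple.
    return (src1, tuple(pos1[p] for p in pos2), dst2)

m = compose(Directory_to_Person, Person_to_Addrbook)
print(m)  # ('Directory', (1,), 'Addrbook')
```

Real mapping composition is far harder because mappings may introduce joins and existential variables; that is what the infinite-mapping example below illustrates.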
Example with Infinite Mappings
\[ M_{ab} = \{\, a_{rg}(x, y) \subseteq b_r(x, x_1), b_g(x_1, y), \quad (1) \]
\[ \qquad\; a_{gg}(x, y) \subseteq b_g(x, x_1), b_g(x_1, y) \,\} \quad (2) \]
\[ M_{bc} = \{\, b_r(x, x_1), b_g(x_1, x_2), b_g(x_2, y) \subseteq c_{rgg}(x, y), \quad (3) \]
\[ \qquad\; b_g(x, x_1), b_g(x_1, y) \subseteq c_{gg}(x, y) \,\} \quad (4) \]
Composing yields, among others:
\[ a_{gg}(x, y) \subseteq c_{gg}(x, y) \quad \text{(by 2 and 4)} \]
\[ a_{rg}(x, y), a_{gg}(x, y) \subseteq c_{rgg}(x, y) \quad \text{(by 1, 2, and 3)} \]
So far so good. But the composition also entails, for every \( n \),
\[ a_{rg}(x, x_1), a_{gg}(x_1, x_2), \ldots, a_{gg}(x_n, x_{n+1}) \subseteq c_{rgg}(x, y_1), c_{gg}(y_1, y_2), \ldots, c_{gg}(y_{n-1}, y_n), \]
so the sequence of composed mappings is infinite.
Inverse Rules
Inverted LAV rules commonly used in data integration systems (like Piazza)
Claim: We can use composition to optimize a set of inverse rules
Example:
LAV mapping: \( \forall x\, y \;\; R(x,y) \rightarrow \exists z \;\; S(x,z), T(z,y) \)
Skolemized: \( \forall x\, y \;\; R(x,y) \rightarrow S(x, f(x,y)), T(f(x,y), y) \)
Inverse rules: \( S(x, f(x,y)) \) :- \( R(x,y) \)
\( T(f(x,y), y) \) :- \( R(x,y) \)
Universal solution (without chasing): \( U(x,y) \) :- \( S(x,z), T(z,y) \)
Certain answers: \( Q(x) \) :- \( U(x,y) \)
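A toy run of the inverse-rule construction above, on an invented two-tuple instance of R: skolem terms stand in for the existential variable, and in general any answer tuple that still mentions a skolem term would be dropped from the certain answers (here none do).

```python
# Toy evaluation of inverse rules with skolem terms.
R = [("a", "b"), ("c", "d")]

def f(x, y):
    """Skolem function standing in for the existential z."""
    return ("f", x, y)

S = [(x, f(x, y)) for (x, y) in R]   # S(x, f(x,y)) :- R(x,y)
T = [(f(x, y), y) for (x, y) in R]   # T(f(x,y), y) :- R(x,y)

# U(x,y) :- S(x,z), T(z,y): join S and T on the (skolem) middle term.
U = [(x, y) for (x, z) in S for (z2, y) in T if z == z2]

# Certain answers to Q(x) :- U(x,y); skolem-free by construction here.
Q = sorted({x for (x, y) in U})
print(Q)  # ['a', 'c']
```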
Technical Results
Madhavan, Halevy (VLDB 2003)
• Setting: Query language L is a union of conjunctive queries and mappings are source-to-target and target-to-target dependencies (GLAV mappings)
• Showed that the result of composition may be an infinite set of formulas and proposed algorithms for the cases when composition can be done.
Fagin, Kolaitis, Popa, Tan (PODS 2004)
• Setting: Definition of mapping composition independent of query language
• Showed that full source-to-target constraints are closed under composition, but that embedded source-to-target constraints are not.
Bernstein, Green, Melnik, Nash (VLDB 2006)
• Setting: SQL Server implementation
• Showed that composition can be done efficiently.
What is the Semantic Web?
• Built on top of the regular web
• Several languages for representing information: RDF, RDF Schema, and OWL
• Many representation logics to choose from
• Some tools exist for implementing pieces of it
The Semantic Web (Alon’s view)
• Sharing structured data at web scale
– You can pose meaningful queries on web sites.
– Ontologies provide the *semantic glue*.
– Internal implementation of web sites left open.
• Agents perform tasks:
– Query one or more web sites
– Perform updates (e.g., set schedules)
– Coordinate actions
– Trust each other (or not).
• *i.e.*, agents operating on a gigantic heterogeneous distributed database.
Getting there
• Robust infrastructure for querying
– Peer data management systems.
• Facilitate mapping between different structures. Need tools for:
– Locating relevant structures
– Easily joining the semantic web.
• Disconnect between RDF and today’s data providers
– Piazza maps XML to RDF.
Piazza Mapping Language Example
XML Example
**Source:**
*source.xml*
- authors
- author*
- full-name
- publication*
- title
- pub-type
**Target:**
*target.xml*
- pubs
- book*
- title
- author*
- name
- publisher*
- name
```xml
<pubs>
  <book>
    {: $a IN document("source.xml")/authors/author,
       $t IN $a/publication/title,
       $typ IN $a/publication/pub-type
       WHERE $typ = "book" :}
    <title> {$t} </title>
    <author>
      <name> {: $a/full-name :} </name>
    </author>
  </book>
</pubs>
```
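The effect of this mapping can be mimicked in ordinary Python with the standard-library ElementTree module. This is only an illustration of what the mapping computes, not how Piazza executes it, and the sample data is invented:

```python
# Select authors' "book" publications from the source tree and emit
# the target pubs/book structure, mirroring the Piazza mapping.
import xml.etree.ElementTree as ET

SOURCE = """
<authors>
  <author>
    <full-name>Ann</full-name>
    <publication><title>Piazza</title><pub-type>book</pub-type></publication>
    <publication><title>PDMS</title><pub-type>article</pub-type></publication>
  </author>
</authors>
"""

src = ET.fromstring(SOURCE)
pubs = ET.Element("pubs")
for a in src.findall("author"):
    name = a.findtext("full-name")
    for p in a.findall("publication"):
        if p.findtext("pub-type") == "book":       # WHERE $typ = "book"
            book = ET.SubElement(pubs, "book")
            ET.SubElement(book, "title").text = p.findtext("title")
            author = ET.SubElement(book, "author")
            ET.SubElement(author, "name").text = name

print(ET.tostring(pubs, encoding="unicode"))
```

Only the book publication survives the WHERE filter; the article is dropped, just as in the mapping above.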
Piazza Mapping Language Example
```xml
<pubs>
  <book piazza:id={$t}>
    {: $a IN document("source.xml")/authors/author,
       $t IN $a/publication/title,
       $typ IN $a/publication/pub-type
       WHERE $typ = "book" :}
    <title> {$t} </title>
    <author piazza:id={$t}>
      <name> {: $a/full-name :} </name>
    </author>
  </book>
</pubs>
```
System Architecture
Acknowledgements
• Alon Halevy for slides on Semantic Web
• Zack for slides on rule-goal tree example
• TJ and Greg for insights on Piazza
• Val for suggestions
Approaches for Dialog Management in Conversational Agents
Jan-Gerrit Harms
Delft University of Technology
Pavel Kucherbaev
Delft University of Technology
Alessandro Bozzon
Delft University of Technology
Geert-Jan Houben
Delft University of Technology
Abstract—Dialog agents, like digital assistants and automated chat interfaces (e.g., chatbots), are becoming more and more popular as users grow accustomed to conversing with their devices as they do with humans. In this paper, we present approaches and available tools for dialog management (DM), the component of dialog agents that handles dialog context and decides the next action for the agent to take. We establish an overview of the field of DM, compare approaches and state-of-the-art tools from industry and research on a set of dimensions, and identify directions for further research.
The dream of a human-like highly intelligent computer assistant has been presented in many science fiction movies like Hal in “2001: A Space Odyssey” (1968), Samantha in “Her” (2013), and Jarvis in “Iron Man” (2013). Recent advances in automatic speech recognition systems, machine learning, and artificial intelligence enabled the advent of personal assistants like Google Assistant, Siri, and Alexa, first on smartphones and lately on home speakers and other devices. These might make the reality seem very close to the fiction in the movies. However, while the assistants are capable of executing small tasks, the richness and quality of their dialogs are not comparable to the ones of humans: interactions are still simple, short, and constrained by a limited vocabulary, thus forcing users to adjust to the system’s capabilities.
Digital assistants are part of a larger group of dialog agents which includes voice user interfaces (or spoken dialog systems), text-based agents, and embodied conversational agents [Chapter 4]. Historically, dialog agents aimed to simulate human conversation. The first examples of text-based agents are ELIZA, which acted as a Rogerian psychotherapist, and PARRY, which simulated a paranoid schizophrenic. This was possible with extensive rule sets and structured question–answer sets. With advances in natural language processing, it became possible to have goal-oriented conversations by extracting pieces of information from user utterances and using that information for digital requests to external services. This type of dialog is still prevalent and is the underlying concept of most commercial systems [4, Chapter 28]. But dialogs with digital assistants are short and simple, mostly no more than two interactions, while many tasks like travel planning can contain many more steps [4, Chapter 29]. Creating rule sets for such complex dialogs is cumbersome and results in brittle dialog systems that have problems handling unforeseen user input. That is why research work is looking into probabilistic techniques that learn a dialog policy, i.e., the strategy of what to say next in the conversation, from transcriptions of real conversations. Among these techniques, systems based on partially observable Markov decision processes (POMDPs) received great attention [5]. POMDPs model the conversation state as a not fully observable statistical variable. Lately, researchers shifted their focus to neural network-based approaches [6], and hybrid approaches emerged that use a combination of rule-based and probabilistic dialog policies [7]. Still, despite many years of research work, both the scientific and industrial world struggle to understand which approaches and tools are optimal for the type of dialog agent they plan to create.
As for now, dialog agents are still a convenience tool performing mini-transactions and need major innovations in order to become as useful as their fictional counterparts. In this paper, we discuss the approaches and tools for implementing the dialog management (DM) component of a conversational agent. The dialog manager keeps track of the information exchanged in the dialog and decides upon the next action of the dialog agent. Specifically, the purpose of this paper is three-fold:
1. to explore the ways in which DM has been approached;
2. to provide an overview of the state-of-the-art of commercial as well as research tools;
3. to define opportunities for future research directions.
First, we introduce the main concepts and terminology pertaining to the DM in conversational agents. Then, we introduce seven evaluation dimensions used to assess and compare the properties of seven groups of tools. The comparison provides an overview of the state-of-the-art and enables a discussion on directions for future research work.
**DIALOG MANAGEMENT**
Figure 1 shows the system architecture and information flow of a DM system. Dialog agents receive requests from users either through spoken language or direct text input and output either a textual or vocal (through speech synthesis) response. The architecture shown applies to both speech and text, although in the former case spoken words need to be converted to text before natural language understanding (NLU).
The figure is best explained by an example. Assume our dialog system is a travel-planning service and the user tries to book a flight for a business trip. Once a message is received (e.g., “Book a flight to Amsterdam”), it is first handled by the NLU unit, which converts the message to a user action, also called the user intent (e.g., intent: “FlightBooking”).
This NLU output can carry data fields, also called slots (e.g., location: “Amsterdam”), which the user tries to convey to the dialog agent. The dialog manager then uses this output to update the state of the conversation. The DM and NLU are separate components, but they influence each other’s performance. For instance, a powerful NLU can make the DM more capable and shorten the conversation (e.g., no extra confirmation messages are needed if the system is confident it understood the user). If some slot (e.g., the destination city) is not extracted, the dialog manager might need to request the destination, while if the slot is filled it can directly ask for the next missing slot (e.g., the departure city). Conversely, inputs from the DM can help the NLU adapt its internal operations. For instance, knowledge about the next required input type (e.g., a destination city) could enable the NLU to employ domain- and entity-specific models (e.g., one trained to recognize city names).

The dialog state keeps track of any information received throughout the conversation. It forms the foundation for deciding on the next action and for interpreting the conversation. The dialog state can also be influenced by the goal of the dialog agent itself, i.e., objectives pursued by the dialog agent that transcend the immediate intent of the user. For instance, next to helping the user achieve some goal, the agent might also be designed to steer the user’s behavior in a certain direction. In the travel planner example, the goal of the dialog agent could be to sell a business-class flight, so it could propose “There are cheap business class flights available for 1500 Euro,” or drop a more subtle hint like “Shall I book it right away?”, much as a salesperson tries to convince a customer to buy a product.
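A minimal sketch of this state-update loop (not the architecture of any particular system; the slot names follow the running travel example and the required-slot list is an assumption):

```python
# Fold NLU output into the dialog state and pick the next slot to ask for.
REQUIRED = ["destination", "departure_city", "date"]  # assumed for the example

def update_state(state, nlu_output):
    """Merge newly extracted slots into the conversation state."""
    state = dict(state)                       # keep updates non-destructive
    state.update(nlu_output.get("slots", {}))
    return state

def next_missing_slot(state):
    """Return the first required slot the user has not yet provided."""
    for slot in REQUIRED:
        if slot not in state:
            return slot
    return None                               # all slots filled: ready to book

state = {}
state = update_state(state, {"intent": "FlightBooking",
                             "slots": {"destination": "Amsterdam"}})
print(next_missing_slot(state))  # departure_city
```

Because “Amsterdam” was already extracted, the manager skips straight to requesting the departure city, exactly the shortening effect described above.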
Once the dialog state is updated, the dialog policy is triggered which takes the new state and decides on the next action for the dialog agent to take. The dialog policy is the central piece of the dialog manager, building the bridge between the conversation context, third-party services, and the dialog agent’s response. In the context of the dialog policy, some other concepts require an introduction. First, grounding is the establishment of common ground between parties in the conversation. This happens through acknowledgment of the last heard user input, through implicit or explicit confirmation. Second, the initiative is an important concept; a dialog system can have either user-directed initiative (user is leading the conversation for instance through asking questions to the system), system-directed initiative (the system leads the conversation by requesting information from the user) or mixed initiative (both user and system can take the lead). Finally, domains are the fields in which certain actions, states, and intents are defined, in this example the travel or flight domain. Domains can be organized into hierarchies or even graphs of subdomains with each domain having its own policy.
The dialog policies choose from dialog, internal, and external actions. A dialog action corresponds to a message output that is sent to the user, which can either be a template (“There is a flight at [departure_time]”) or, in more complex systems, a dialog act, for instance, to inform the user (inform(flight = “AE23”, departure_time = “1 pm”)). This output will then be converted by the natural language generation (NLG) component to a textual response to the user. An internal action is one that the dialog agent orchestrates in order to modify its behavior or improve its performance: for example, improving the policy through retraining, or seeking external input for performance improvement (e.g., in hybrid conversational agents, or in systems with online learning agents). An external action interacts with a service provider to satisfy a user’s request, by requesting data or by triggering some application event. In our example that would correspond to getting a list of flights and submitting a booking request once all information is available. Also, multiple actions can be combined: when the request to the flight booking service is taking some time, the dialog agent might inform the user “I am looking for available flights right now.”
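Sketched in Python, such a policy can be a simple mapping from the dialog state to the next action; the slot list and the stand-in flight service below are assumptions made for this example only.

```python
# Illustrative dialog policy for the flight-booking example.

REQUIRED_SLOTS = ["destination", "departure_city"]

def fake_flight_service(state):
    """Stand-in for an external action that would call a real booking API."""
    return {"flight": "AE23", "departure_time": "1 pm"}

def policy(state):
    """Request the first missing slot; once the frame is complete, trigger
    the external action and inform the user (a dialog act)."""
    for slot in REQUIRED_SLOTS:
        if slot not in state:
            return ("request", slot)            # dialog action
    result = fake_flight_service(state)         # external action
    return ("inform", result)                   # dialog action with the result

action = policy({"destination": "Amsterdam"})
# The destination is known, so the policy asks for the departure city next.
```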
DIMENSIONS OF ANALYSIS
In this paper, we assess DM approaches and tools with respect to the following aspects:
- capability of creating natural, robust, and complex dialogs;
- convenience for developers;
- applicability in a commercial environment;
- scalability/reusability in multiple applications.
Towards this goal, we derive the following dimensions of analysis.
**Dialog structure.** Dialogs can often be simplified by modeling them as a structure of possible states and state-transitions. A dialog structure could be a linear sequence of messages or a tree-like structure.
**Learning.** With the growing complexity of dialog agents, it will be important that dialog strategies improve automatically as more data becomes available through new interactions. This dimension tells whether improvement of the dialog manager is possible at runtime.
**Error handling.** Interactions that perfectly fit the conversation the developer had in mind are often easy to handle for any system. Difficulties arise when handling unexpected input and speech recognition or typing errors. This dimension shows how the system can react in such situations.
**Dependencies.** The dependencies of the tool tell what resources are required to create a working dialog manager. For example, in order to train a model, a corpus of data is required.
**Control.** For some tasks we require the dialog agent to be very precise, for example when handling the passport identifier in the flight-booking example. In those situations, the developer needs a lot of control about how the dialog agent shall interpret input and react to it.
**Domain independence.** In order to be reusable in multiple situations, a dialog manager can have some domain independent components, which are unrelated to the topic of the dialog.
**Tool availability.** For nonexpert developers to use a tool, it needs to be convenient for creating dialog agents. The availability of the tool tells how developers can access the functionality of the DM software.
**APPROACHES AND TOOLS**
Figure 2 shows a taxonomy of the approaches for managing dialogs and a classification of a selection of tools. With our selection we do not aim to be exhaustive, taking into consideration the format of this paper. We do aim to provide representative examples of each approach from the taxonomy. For the comparison, we select one tool per approach, which helps determine the common features that different implementations based on that approach share. Our taxonomy is based on the one sketched in, where approaches are divided into handcrafted (rule-based) and probabilistic (statistical). However, some of the tools that we found fit both categories; we separated these into a third category, hybrid systems.
Handcrafted approach. Handcrafted dialog managers define the state of the system as well as the policy by a set of rules which are defined by developers and domain experts. The simplest subset of dialog systems is modeled by a finite-state automaton, in which the conversation is always in one definite state at a time, each state having a fixed number of transitions to other states.
Such dialogs have system-directed initiative, so the system asks the user for information step-by-step. Many equivalent tools exist; for the comparison, we chose Flow XO, a tool for business–customer relations such as customer support. It is a hosted solution with a visual interface and many third-party tool integrations.
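A finite-state dialog of this kind can be sketched in a few lines of Python; the states and prompts are invented for the flight example.

```python
# Finite-state DM sketch: each state has one prompt and one fixed successor.
FSM = {
    "start":         ("Where do you want to fly to?", "ask_departure"),
    "ask_departure": ("Where do you fly from?", "confirm"),
    "confirm":       ("Shall I book it?", "done"),
}

def step(state, _user_message):
    """Every user turn moves the dialog to exactly one next state,
    regardless of what the user said (system-directed initiative)."""
    prompt, next_state = FSM[state]
    return prompt, next_state

prompt, state = step("start", "I need a flight")
# The conversation always follows start -> ask_departure -> confirm -> done.
```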
To offer more flexibility, a data model can be added to the finite-state automaton which keeps track of slots. This type of approach is called frame-based DM. Slots are allowed to be filled in any sequence and multiple slots can be filled per turn, thus enabling some user initiative for the semi-mixed initiative system.
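The difference to the finite-state model can be sketched as follows; the frame's slot names are assumptions made for this example.

```python
# Frame-based DM sketch: slots may be filled in any order, several per turn.
EMPTY_FRAME = {"destination": None, "departure_city": None, "date": None}

def fill(frame, extracted):
    """Merge whatever slots the NLU extracted this turn into the frame."""
    updated = dict(frame)
    for key, value in extracted.items():
        if key in updated:
            updated[key] = value
    return updated

def next_prompt(frame):
    """Ask only for slots that are still missing."""
    missing = [k for k, v in frame.items() if v is None]
    return f"Please give me the {missing[0]}." if missing else "Booking now."

frame = fill(EMPTY_FRAME, {"date": "Monday", "destination": "Amsterdam"})
# Two slots arrived in one user turn; only the departure city is still open.
```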
Probabilistic approach. Instead of defining rules for the dialog strategy by hand, probabilistic DM takes a different approach by learning the rules from actual conversations. Example-based systems learn appropriate answers from a large corpus by matching the last query with an example in the training dataset and using the response from the training set. For example, a training corpus might contain the conversation pair [user: “Hello;” system: “Hi, how are you doing?”]. If the user then starts a conversation with “Hello” or a word similar to it, the system will reply with the other message. This was the earliest approach to a statistical dialog manager. It is often used for chatbots that aim to carry open-ended conversations—Chatterbot is an example of an open-source system which uses such an approach—but it suffers from several limitations, especially in terms of error handling. Approaches which have seen a lot of research work over more than the past 10 years are Markov decision processes (MDP) and especially partially observable MDPs (POMDP). They model the dialog state as an unobserved variable, the belief state, which is a distribution over all possible states, including error ones (e.g., incorrect input from the users). Observations (e.g., user input to the system) provide evidence for the most probable state of the system. Finally, a dialog policy is learned with reinforcement learning to map the belief state to a dialog agent action. In the flight booking example, a message “A flight to Amsterdam” will lead to a high probability that the destination is filled with Amsterdam, while the state with all fields empty is given a low probability. PyDial is a statistical spoken dialogue system toolkit that has been published recently. PyDial features a modular architecture that allows, among others, the adoption of deep reinforcement learning techniques. Even more recently, memory neural networks have been applied to DM.
Such neural networks are extended with the ability to read and write to a memory component in order to store information from previous input to the network. They take pure text as input and return text as output. The goal of this is to use end-to-end learning on a set of dialogs to train the DM without any handcrafting of state and action spaces. At the time of writing we were not aware of any tools, so instead, we use a proposed architecture from the paper by Bordes et al.
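The POMDP-style belief tracking described above amounts to a Bayes update over candidate states. The two states and the observation likelihoods in this Python sketch are invented for the flight example.

```python
# Belief update sketch: scale each state's probability by how likely the
# observation is under that state, then renormalize.

def belief_update(belief, obs_likelihood):
    unnormalized = {s: p * obs_likelihood.get(s, 0.0) for s, p in belief.items()}
    total = sum(unnormalized.values()) or 1.0
    return {s: p / total for s, p in unnormalized.items()}

belief = {"dest=Amsterdam": 0.5, "dest=empty": 0.5}
# "A flight to Amsterdam" is far more likely if the destination slot holds
# Amsterdam than if the state is still empty:
observation = {"dest=Amsterdam": 0.9, "dest=empty": 0.1}
belief = belief_update(belief, observation)
# belief["dest=Amsterdam"] is now about 0.9, as in the example in the text.
```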
Hybrid approach. Next to the purely rule- or data-based approaches, some work has been done on combining the advantages of both. Such hybrid approaches are an important step toward introducing data-driven elements into available dialog agents. At the time of writing two such tools have been published. Rasa Core is a commercial open-source
tool that combines frame-based state updates with a learning-by-example dialog policy. The software implementation is close to the concept of Hybrid Code Networks. As the name suggests, this approach utilizes neural networks combined with coded constraints and rules. The intuition is that some parts of goal-oriented dialogs, like sorting the data returned by external service providers, are very hard to learn from example dialogs, while being much simpler to implement in a few lines of code. By implementing software components into the framework, the amount of training data can be significantly reduced.
OpenDial uses an approach called probabilistic rules, a way to express domain knowledge with if-then-else type rules to reduce the overall state space of the underlying POMDP model. For example, if the user is booking a flight to New York and the dialog agent asks the user about the city of departure, we can assume with high probability that the user will answer with a city name, which is the departure city. If we can encode this in rules, this specific part does not need to be learned during the training phase.
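The effect of such a rule can be sketched as follows; this is plain Python for illustration, not OpenDial's actual rule syntax, and the probabilities are invented.

```python
# Probabilistic-rule sketch: domain knowledge concentrates probability mass,
# so this part of the state space needs no training data.

def departure_rule(state, utterance_is_city):
    """If the system just asked for the departure city and the user answered
    with a city name, that city is very likely the departure."""
    if state.get("last_system_act") == "request(departure_city)" and utterance_is_city:
        return {"departure_city_filled": 0.95, "other_intent": 0.05}
    return {"departure_city_filled": 0.10, "other_intent": 0.90}

dist = departure_rule({"last_system_act": "request(departure_city)"}, True)
```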
COMPARISON
In Table 1, we applied the dimensions to each of the approaches defined in the previous section, taking one representative example tool per DM approach.
The dialog structure influences the complexity of the possible dialogs and their naturalness, because rigid dialogs are also repetitive. The finite-state approach offers the most rigid structure by modeling dialogs like a tree, so branches in the conversation exist based on the previous answer, but the conversation follows a strict flow without returning to previous states. This offers easy engineering of conversations, but at the cost of limited diversity and richness in dialogs. The tools that use a frame-based dialog state model the dialog structure as directed acyclic graphs because the sequence in which information is given and requested is not predefined. Cycles are usually not possible; once a field is filled, it stays filled. However, a correction intent could be manually added, thus making cycles possible. Undirected graphs are possible with the information-state architecture, the neural networks, and the approaches that use a belief state. In this model, states are activated based on the evidence collected through the whole history or a combination of rules. The flow through the conversation is not defined, which makes the conversation less predictable, but arguably more natural. Finally, we have the example-driven Chatterbot which has
<table>
<thead>
<tr>
<th>High-level category</th>
<th>Low-level category</th>
<th>Tools</th>
<th>Dialog structure</th>
<th>Learning</th>
<th>Error handling</th>
<th>Dependencies</th>
<th>Control</th>
<th>Domain independence</th>
<th>Tool availability</th>
</tr>
</thead>
<tbody>
<tr>
<td>Handcrafted</td>
<td></td>
<td>Flow XO</td>
<td>Tree</td>
<td>manual iterative</td>
<td>escalation message</td>
<td>none</td>
<td>complete control</td>
<td>none</td>
<td>Hosted, Visual</td>
</tr>
<tr>
<td></td>
<td></td>
<td>DialogFlow</td>
<td>Directed acyclic graph</td>
<td>manual iterative</td>
<td>escalation message</td>
<td>none</td>
<td>highly controlled</td>
<td>sub-modules reusable</td>
<td>Hosted, Visual</td>
</tr>
<tr>
<td></td>
<td></td>
<td>TrindiKit</td>
<td>Undirected graph</td>
<td>manual iterative</td>
<td>rule-based recovery</td>
<td>conversation model</td>
<td>highly controlled</td>
<td>model partly reusable</td>
<td>Prolog/Python</td>
</tr>
<tr>
<td>Probabilistic</td>
<td></td>
<td>Chatterbot</td>
<td>Random</td>
<td>no learning</td>
<td>ignored</td>
<td>Large corpus</td>
<td>uncontrolled</td>
<td>examples reusable</td>
<td>Python</td>
</tr>
<tr>
<td></td>
<td></td>
<td>PyDial</td>
<td>Undirected graph</td>
<td>run-time, iteratively</td>
<td>probabilistic recovery</td>
<td>large conversation collection (>1000)</td>
<td>semi-controlled</td>
<td>domain-independent state tracker and policy</td>
<td>Python</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Memory Networks</td>
<td>Undirected graph</td>
<td>run-time</td>
<td>-</td>
<td>~500 conversation collection</td>
<td>uncontrolled</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Hybrid</td>
<td></td>
<td>OpenDial</td>
<td>Undirected graph</td>
<td>run-time</td>
<td>probabilistic recovery</td>
<td>conversation collection (>100)</td>
<td>semi-controlled</td>
<td>domain-independent state tracker and policy</td>
<td>Java</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Rasa Core</td>
<td>Directed acyclic graph</td>
<td>manual run-time (through interaction)</td>
<td>probabilistic recovery</td>
<td>conversation examples (>10)</td>
<td>semi-controlled</td>
<td>sub-modules reusable</td>
<td>Python</td>
</tr>
</tbody>
</table>
Table 1. Comparison of DM approaches.
a semi-random state activation through matching the best example in the corpus. This can result in unexpected answers from the bot, making the flow unnatural, but at the same time, if the examples are a good match, agent responses can be surprisingly human-like. For instance, greeting messages are usually anticipated by the developer and come across very naturally; however, if we ask “what is the color of tree leaves?” but the only corpus example with the word “color” is “what is your favorite color?”, the bot might answer “I like red.”, which is not an expected answer to the question.
As for availability, the finite-state tool Flow XO and the frame-based DialogFlow are hosted solutions with visual interfaces for defining messages and frames. This makes them convenient for developers, and the visual interfaces help nonexperts and even nondevelopers to design interactions with the chatbots. The hosted solutions also offer easy integration with external tools, which further increases their usefulness for application designers. For instance, in a customer survey application, data requested from the users can directly be fed into a spreadsheet for further processing and analytics. All other tools require writing code to create a dialog agent. TrindiKit was originally written in Prolog, a niche logic programming language, making it very much focused on research. Later a Python implementation became available, but developing a working dialog agent still is a daunting endeavor due to the complexity of the underlying rule structure. PyDial, Chatterbot, and Rasa Core all offer Python implementations. Python is a popular language in research and industry alike, making Python tools accessible to a large group of developers, but nondevelopers are mostly excluded from these tools. Integrations then need to be custom-programmed for the application. OpenDial is written in Java, which is a little less common among developers of dialog agents.
It is hard to predict all the different ways in which users will interact with the dialog agent once it is deployed; therefore, designing one is often a long process in which different versions need to be tested and improved. This learning process can be long and costly and is, therefore, an important aspect of the DM approach. Rule-based systems have handwritten rules that the program cannot update itself, so learning needs to happen iteratively with manual updates of the interaction rules, which can be a hard process and limits the scalability and dialog complexity of such approaches. Rasa Core learns through interactions in which the developer confirms the correctness of the dialog agent's actions; as such it can be seen as a manual process, but on an online system. OpenDial has two learning processes: first, the probabilistic rules need to be updated, which can be done iteratively with supervised learning when new data becomes available. Second, the underlying policy is trained like the POMDP approach with reinforcement learning. This can happen online as well. Reinforcement learning further allows for planning the conversation ahead, as future rewards are considered, which makes this approach better suited for complex tasks. Finally, memory networks can be trained end-to-end, which can allow for a simplified learning process without any preprocessing and data labeling. However, online learning has not yet been shown for memory networks.
Error handling helps the dialog agent to handle unexpected inputs, making the conversations robust and natural. In most cases the finite-state and frame-based approaches use a simple escalation message such as “I didn’t understand” to give feedback to the user; however, users either have to rephrase or start a new interaction. In TrindiKit the developer can define rules for recovering from bad input, while the data-driven and hybrid approaches keep track of the state probability and lower it on unexpected input, returning to previous states in order to build up new evidence for the current belief state. This makes them quite robust, as they usually reach their goal. However, unexpected input increases the length of the conversation because turns will be repeated. Handcrafted systems are less elegant in handling such situations. In example-based tools, errors are usually not detected or handled at all; the system just selects the next message to send, which can be hit-or-miss. Finally, error handling in memory networks has not yet been studied in depth.
Regarding dependencies, finite-state and frame-based systems are ready to use without any prior data or model; also, the underlying state and policy are easy to understand, thus making these tools very approachable. The research-focused TrindiKit requires the implementation of different rules, which is not trivial and will require an underlying model in order to create a working system. For the other data-driven approaches it is obvious that some training data (and metadata) is required. Example-based approaches require a lot of examples in order to perform human-like chit-chat conversations. Also, PyDial requires a lot of dialogs, in the range of 1000 for a simple task, which can be lowered with the OpenDial architecture, and Rasa Core lowers this figure to about ten for basic interactions. Memory networks have been shown to work acceptably well with around 500 conversations. The large number of dialogs required for the purely data-driven tools means some form of data collection is needed to create a workable solution, making them less approachable. The hybrid approaches lower the data requirement far enough to make manual data creation a viable option.
*Domain independence* is one of the influencing factors for the scalability of the overall dialog system. Finite-state approaches have fixed sequences of messages which make them mostly not reusable, whereas frame-based approaches can have sub-modules (e.g., a payment information module that can be shared between multiple applications), although issues of overfitting to training data could limit their applicability. DialogFlow, just like other similar software, has some predefined modules available that can be integrated into a new dialog agent. Rasa Core does not provide this, but something similar would theoretically be possible. TrindiKit can offer domain independence by reusing part of a model. PyDial provides a domain-independent policy and state-tracker and requires domain-dependent data only for NLU and NLG. Again, memory network research is not yet mature enough to explore domain independence.
The last dimension is *control*, i.e., how much influence the developer has on the flow of the conversation at design time. Handcrafted systems offer the most control: in a finite-state flow the whole conversation is fully designed, so complete control of the flow is possible, while the other tools offer conceptual control on a slightly more abstract level. Example-based systems and neural networks, on the other hand, offer no control at all other than selecting the dialogs of the training database. The other tools offer partial control through defining rewards during reinforcement training or through manual interactive training.
**DISCUSSION AND OUTLOOK**
The comparison of DM approaches has shown that many convenient tools are available to create simple but useful dialog agents, along with a few research-focused solutions that help explore new data-driven ways of handling dialogs. Below we summarize the evaluation of the current approaches and solutions against the original principles mentioned in the *Dimensions of Analysis* section.
**Capability of creating natural, robust and complex dialogs.** Nowadays creators of dialog systems are balancing between robustness, naturalness, and complexity, seeking suitable tradeoffs. Systems based on handcrafted rules are very robust but lack naturalness and complexity. In contrast, probabilistic models give a more natural feel, at a cost in robustness. So far, the ambition to create a system supporting complex dialogs that performs well across multiple domains remains hardly reachable.
**Convenience for developers.** Tools for creating dialog systems based on handcrafted rules are not new, and ample support exists. Recently, developers have gained tools for building probabilistic and hybrid solutions. Some of these tools do not require a deep understanding of the underlying principles, while others do, giving developers fairly wide freedom for experimentation. Developers can build robust dialog systems in a relatively easy fashion; however, such systems work in a limited domain, where the naturalness is primarily defined by the amount and diversity of training data the developers can afford.
**Applicability in commercial environment.** Robustness is the first commercial priority (beyond the entertainment domain), and current approaches and tools allow building robust solutions working well in limited domains. This suggests that the current state of the technology is mature enough to be commercially viable, primarily providing another medium for customers
to get a service they used before via websites, mobile applications, or emails.
**Scalability/reusability in multiple applications.** Solutions based on handcrafted rules lack scalability to wider and bigger domains. Some tools provide dialog policies for certain domains (e.g., weather, chit-chat) that can be reused by other developers to shorten time to deployment. However, transfer learning techniques for dialog policies are not currently common, and practices similar to the ones used in computer vision (e.g., reuse of neural networks without the last layer) are very much in demand.
A human-like, highly intelligent computer assistant is still not available, but several concrete work directions should be pursued for next-generation dialog managers to increase their impact on digital assistants and dialog agents in general. We identify four areas of interest.
**Training data availability.** Current efforts are severely limited by the lack of large amounts of high-quality training data across different domains. While this might not be an issue for big commercial players, there is a clear challenge for academic research: training data generation is a tedious and expensive process. At the same time, issues of diversity (also in terms of the socio-demographic characteristics of the people involved in the creation of training samples) should be considered, to seek fair and inclusive conversations.
**Integration.** In current DM systems external data sources are accessed through handwritten external actions. However, future digital assistants shall be able to discover and select such actions in a dynamic and adaptive fashion, while influencing at the same time their surrounding devices. Understanding how to design dialog managers able to learn and organize new external sources at run-time could dramatically improve their utility. The learning process could be carried through conversations (with chatbot users or domain experts), or even automatically.
**Context awareness.** Dialog systems are still only able to perform quite simple tasks which often can be performed almost as quickly through using a mobile or web interface. One of the aspects which might make the future assistants really useful is understanding more of the context in which they exist (beyond location information). Using sensors in mobile devices, digital assistants could make suggestions for a restaurant around the user or suggest a movie on TV. Sensing devices might then act as another input for the dialog manager.
**Policy generation.** In the movies, characters can talk to the assistant about any topic and even learn from it. Manually creating, or even learning, dialog policies for every type of conversation or domain will not be a viable option. In order to create a dialog agent that can answer naturally to questions on any topic, there needs to be some automatic policy generation for unknown domains. Transfer learning approaches provide a way to bootstrap dialog policies across domains, but their applicability and performance are still limited.
ACKNOWLEDGMENTS
This work was supported by the Amsterdam Institute for Advanced Metropolitan Solutions with the AMS Social Bot grant.
REFERENCES
1. M. McTear, Z. Callejas, and D. Griol, “Introducing the conversational interface,” in *The Conversational Interface*. New York, NY, USA: Springer, 2016. Available at: https://doi.org/10.1007/978-3-319-32967-3_1
6. A. Bordes, Y.-L. Boureau, and J. Weston, “Learning end-to-end goal-oriented dialog,” in *Proc. ICLR*, 2016.
8. P. Kucherbaev, A. Bozzon, and G. Houben, “Human aided bots,” *IEEE Internet Comput.*, to be published. Available at: https://doi.org/10.1109/MIC.2018.2520953
---
**Jan-Gerrit Harms** is an M.Sc. student with the Web Information Systems Group, Delft University of Technology, Delft, The Netherlands. His research interests include ontology learning, conversational agents, and blockchain technology. Contact him at jan.gerrit.harms@gmail.com.
**Pavel Kucherbaev** is a Postdoctoral Researcher with the Web Information Systems Group, Delft University of Technology, Delft, The Netherlands. His research focuses on human computation and conversational agents. He received the Ph.D. degree from University of Trento, Trento, Italy. Contact him at pavel.kucherbaev@gmail.com.
**Alessandro Bozzon** is an Associate Professor with the Web Information Systems group, Delft University of Technology, Research Fellow with the AMS Amsterdam Institute for Advanced Metropolitan Solutions, and a Faculty Fellow with the IBM Benelux Center of Advanced Studies. His research interests include the intersection of crowdsourcing, user modeling, and web information retrieval. Contact him at a.bozzon@tudelft.nl.
**Geert-Jan Houben** is a Full Professor and the Leader with the Web Information Systems research group of TU Delft, a Scientific Director of Delft Data Science, a Research Program Leader on Open & Online Education in TU Delft Extension School, and a Principal Investigator in AMS, Amsterdam Institute for Advanced Metropolitan Solutions. His research group covers subjects in the wider field of web engineering and web science, and his research focuses on user modeling for web-based systems. Contact him at g.j.p.m.houben@tudelft.nl.
Tokenizers
Tasks of the Tokenizer
- Group the input (which is just a stream/string of characters) into tokens. (Numbers, operators, special words, strings)
- Eliminate comments.
- In C: Deal with `#include`, `#if #else #endif` and the like.
Tokenizers are also called scanners.
**Tokens**
**Definition:** A *token type* is a pair $(\Lambda, T)$, in which $\Lambda$ is a finite set of token labels, and $T$ is a function s.t. for each $\lambda \in \Lambda$, $T(\lambda)$ is a set. A *token (with attribute)* is a pair $(\lambda, x)$, s.t. $\lambda \in \Lambda$ and $x \in T(\lambda)$.
**Example:** If one puts $\Lambda = \{\texttt{int}, \texttt{real}\}$, with $T(\texttt{int}) = \mathbb{Z}$, $T(\texttt{real}) = \mathbb{R}$, then $(\texttt{real}, 3)$, $(\texttt{real}, 3.141526535)$, $(\texttt{int}, 2)$, $(\texttt{int}, -1)$ are tokens. $(\texttt{int}, 2.718271828)$ is not a token.
Tokens without Attribute
Not all tokens have an attribute. For example, reserved words `while`, `do`, `if`, `then`, `else` usually don’t.
For those, one needs a trivial type $\top = \{( )\}$. Then $T(\text{while}) = T(\text{do}) = T(\text{if}) = T(\text{then}) = T(\text{else}) = \top$.
Implementation Issues
I usually use $C^{++}$. A token is a struct containing an enum (the token label), and a list for each possible attribute type. The list that matches the token's attribute type has length 1; all the other lists have length 0.
In $C$, one could use a struct containing an enum and a pointer to the heap, or an enum with a union type.
In Java, one could use a class containing an enum and an Object.
Whatever implementation you choose, you should use an object-oriented approach. Make sure that there is a token class, make sure that it can be printed, that it can be passed to procedures, and put in containers.
Implementation (2)
It is a good idea to add information to a token about where it comes from. This makes it easier to generate error messages.
A tokenizer is a function with signature
```cpp
token readtoken( reader& );
```
When called, it reads characters from reader until it has enough information to build a token.
The reader has a field `char nextchar;` and a method
```cpp
void moveforward();
```
which replaces nextchar by the next char.
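The token representation discussed on the previous slides can be sketched as follows (here in Python rather than $C^{++}$, for brevity; all names are illustrative):

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Token:
    """A token: a label, an optional attribute, and its source position."""
    label: str        # e.g. 'int', 'real', 'ident', 'while'
    attr: Any = None  # attribute value; None for trivial tokens like 'while'
    line: int = 0     # position information, useful for error messages
    col: int = 0

    def __str__(self) -> str:
        # Printable form, so tokens show up nicely in error messages.
        if self.attr is None:
            return self.label
        return f"{self.label}({self.attr!r})"
```

A token then prints as `int(2)` or simply `while`, and, being an ordinary value, it can be passed to procedures and put in containers.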
Building a Tokenizer
There are basically two ways of building a tokenizer.
• Hacking (sometimes called ’careful coding by hand.’) If the tokenizer is not big, you can follow this approach.
• Using a scanner generator. (Lex)
Writing a Tokenizer by Hand
If the tokens are not too many, you can follow this approach.
Draw an NDFA for each non-trivial token.
Stare at the NDFAs, and at the tokens for which you didn't draw an NDFA, and find all overlaps.
Find ways of dealing with the overlaps. (Combine NDFAs that overlap into one. First read one token; if the NDFA gets stuck, read another token. Do postprocessing of the tokens read.)
Overlaps
Sometimes, different tokens have shared prefixes. An example is `int` and `real`. One can decide only at the end that 1234533434343433434 is an `int`, and not a `real`.
Similarly, identifiers and reserved words overlap, like `while`, `do`, `dummy`, `which`.
Operators `+`, `++` and `-`, `->`, `--` overlap.
`-`, `--` overlap with `integer -1`
If you write a tokenizer by hand, you have to worry about overlaps. (This means that you lose modularity.)
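As an illustration of these overlaps, a fragment of a hand-written, greedy tokenizer (a Python sketch; the keyword set and the token labels are made up for the example):

```python
KEYWORDS = {'while', 'do', 'if', 'then', 'else'}  # illustrative

def read_token(text, pos):
    """Read one token from text[pos:]; return (label, attribute, new_pos)."""
    c = text[pos]
    if c.isalpha():
        # Identifiers and reserved words share prefixes: read greedily,
        # and only at the end check whether the word is reserved.
        end = pos
        while end < len(text) and (text[end].isalnum() or text[end] == '_'):
            end += 1
        word = text[pos:end]
        if word in KEYWORDS:
            return (word, None, end)
        return ('ident', word, end)
    if c.isdigit():
        # int and real share prefixes: decide only when we see
        # (or fail to see) the '.'.
        end = pos
        while end < len(text) and text[end].isdigit():
            end += 1
        if end < len(text) and text[end] == '.':
            end += 1
            while end < len(text) and text[end].isdigit():
                end += 1
            return ('real', float(text[pos:end]), end)
        return ('int', int(text[pos:end]), end)
    if c == '-':
        # '-' overlaps with '->' and '--': one character of lookahead decides.
        if text.startswith('->', pos):
            return ('arrow', None, pos + 2)
        if text.startswith('--', pos):
            return ('decrement', None, pos + 2)
        return ('minus', None, pos + 1)
    raise ValueError(f'unexpected character {c!r} at {pos}')
```

Reading greedily and deciding the label only at the end is exactly the "decide at the end" strategy described above; note how each overlap shows up as an explicit case in the code.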
Usage of a scanner generator
The tokens are defined by regular expressions. The scanner generator constructs an NDFA, and translates this into an equivalent DFA. The resulting DFA is very efficient (optimal). The DFA reads the input only once. When defining the tokens, the user doesn’t need to worry about overlaps.
Disadvantages are that the user has to spend time learning to use the tool, that the resulting scanner does not give much help when computing the attribute, (DFAs are only good at saying ’yes’ or ’no’) and that the resulting scanners are not flexible.
Non-flexibility
In general, tokenizers tend to be not as clean as parsers, and sometimes one has to use tricks.
For example in Prolog, it is important whether there is a space between an identifier and a ’(’.
In some languages, a . terminates the input, but inside ( ), or [ ], it is just a usual operator.
Non-Deterministic Finite Automata
Definition: An NDFA is a structure of form \((\Sigma, Q, Q_s, Q_a, \delta)\), in which
- \(\Sigma\) is the alphabet,
- \(Q\) is the set of states (finite),
- \(Q_s \subseteq Q\) is the set of starting states,
- \(Q_a \subseteq Q\) is the set of accepting states,
- \(\delta \subseteq Q \times \Sigma^* \times Q\) is the transition relation.
We have been drawing NDFAs in the lecture and in the exercises. In a drawing, states are anonymous. If you want to represent an NDFA in a computer, you need some set \(Q\).
NDFAs (2)
An NDFA accepts a word $w$ iff there exist a finite sequence of words $w_1, \ldots, w_n$, and a sequence of states $q_1, q_2, \ldots, q_{n+1}$, s.t.
- $w = w_1 \cdot \ldots \cdot w_n$,
- $q_1 \in Q_s, \quad q_{n+1} \in Q_a$,
- Each $(q_i, w_i, q_{i+1}) \in \delta$.
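This acceptance condition can be checked directly by exploring all reachable configurations (state, position in $w$). A Python sketch, with $\delta$ given as a set of triples whose middle component is a word (the empty string '' playing the role of $\epsilon$):

```python
from collections import deque

def ndfa_accepts(word, Q_s, Q_a, delta):
    """Does the NDFA (with word-labelled transitions) accept `word`?"""
    seen = set()
    todo = deque((q, 0) for q in Q_s)   # configurations (state, position)
    while todo:
        q, i = todo.popleft()
        if (q, i) in seen:
            continue                     # guards against epsilon-cycles
        seen.add((q, i))
        if i == len(word) and q in Q_a:
            return True
        for (p, w, p2) in delta:
            if p == q and word.startswith(w, i):
                todo.append((p2, i + len(w)))
    return False
```

For example, the one-state automaton with the single transition $(0, ab, 0)$, start and accepting state $0$, accepts exactly the words in $(ab)^*$.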
Non-Determinism
It would be nice if one could use a program of form
```plaintext
state = Qs;
nextstate = delta( state, r.nextchar );
while( nextstate != undefined )
{
    r.moveforward( );   // Reads new r.nextchar
    state = nextstate;
    nextstate = delta( state, r.nextchar );
}
// Determine the type of token, based on
// the state in which we got stuck.
```
Non-Determinism (2)
Unfortunately, this is not possible, because (1) $\delta$ is not a function, but a relation, and (2) $Q_s$ is not a single state, but a set of states.
In practice, (1) is never a problem, but (2) usually is. Problem (2) is caused by the fact that in the beginning one does not know what token will come, so one has to start with the initial states for all of them.
**Remarks**
NDFAs can be programmed by hand using gotos, or by keeping an explicit state variable or a set of state variables.
Tokenizers are usually greedy. This means that they try to read the longest possible token. Doing something else would be problematic.
Regular Expressions (1)
(‘Regular’ means ‘according to rules’, which is actually a quite empty term.)
Let $\Sigma$ be an alphabet.
- Every word $s \in \Sigma^*$ is a regular expression.
- If $e$ is a regular expression then $e^*$ is also a regular expression.
- If $e_1, e_2$ are regular expressions then $e_1 \cdot e_2$ is a regular expression.
- If $e_1, e_2$ are regular expressions then $e_1 \mid e_2$ is a regular expression.
Regular Expressions (2)
Other constructs can be added as well:
- If $e$ is a regular expressions, $n \geq 0$, then $e^n$ is a regular expression.
- If $e$ is a regular expression, then $e?$ is a regular expression.
- If $e$ is a regular expression, then $e^+$ is a regular expression.
- If the alphabet $\Sigma$ is ordered by a total order $<$, and $\sigma_1, \sigma_2 \in \Sigma$ with $\sigma_1 \leq \sigma_2$, then $\sigma_1 \ldots \sigma_2$ is a regular expression.
All these constructions are definable, but the definitions can be quite long. For example, $e^{20} = e \cdot \ldots \cdot e$.
$\sigma_1 \ldots \sigma_2 = \sigma_1 \mid \sigma_1' \mid \sigma_1'' \mid \sigma_1''' \mid \ldots \mid \sigma_2$. (This is only possible if $\Sigma$ is finite)
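These derived constructs can indeed be defined once and for all in terms of the basic ones. A Python sketch, representing regular expressions as nested tuples (a representation chosen purely for illustration):

```python
def word(s):      return ('word', s)
def star(e):      return ('star', e)
def cat(e1, e2):  return ('cat', e1, e2)
def alt(e1, e2):  return ('alt', e1, e2)

def power(e, n):
    # e^n = e . e . ... . e (n copies); e^0 is the empty word.
    result = word('')
    for _ in range(n):
        result = cat(result, e)
    return result

def opt(e):       # e?  =  "" | e
    return alt(word(''), e)

def plus(e):      # e+  =  e . e*
    return cat(e, star(e))

def char_range(c1, c2):   # 'a'...'z'  =  'a' | 'b' | ... | 'z'
    e = word(c1)
    for code in range(ord(c1) + 1, ord(c2) + 1):
        e = alt(e, word(chr(code)))
    return e
```

Note that `char_range` produces an expression whose size grows with the range, which is why it is only possible for a finite (here: totally ordered) alphabet.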
Regular Expressions (3)
Examples:
\[
\begin{align*}
digit & := '0' \ldots '9' \\
letter & := 'a' \ldots 'z' \mid 'A' \ldots 'Z' \\
ident & := letter \ ( letter \mid digit \mid '_' )^* \\
float & := ( "" \mid "+" \mid "-" ) \\
& \quad \text{digit +} \\
& \quad ( '.' \ \text{digit +} )? \\
& \quad ( ( 'e' \mid 'E' ) ( '-' \mid '+' \mid "" ) \ \text{digit +} )? \\
\end{align*}
\]
What do you find more readable? NDFAs or regular expressions?
Regular Expressions (4)
We define recursively when a word in $\Sigma^*$ satisfies a regular expression:
- If $e$ is a regular expression built from a single word $s$, then a word $w$ satisfies $e$ iff $w = s$.
- If $w$ is a word, then $w$ satisfies $e^*$ if either $w = \epsilon$, or there exist $w_1, w_2$, such that $w = w_1 \cdot w_2$, $w_1$ satisfies $e$ and $w_2$ satisfies $e^*$.
- If $w$ is a word, then $w$ satisfies $e_1 \cdot e_2$ if there exist $w_1, w_2$, s.t. $w = w_1 \cdot w_2$, $w_1$ satisfies $e_1$, and $w_2$ satisfies $e_2$.
- If $w$ is a word, then $w$ satisfies $e_1 \mid e_2$ if either $w$ satisfies $e_1$ or $w$ satisfies $e_2$.
This is not a logic course, but the logically correct definition would be: satisfies$(w, e)$ := the $\subseteq$-smallest binary predicate that has the properties listed above.
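The recursive definition above translates almost literally into code. A Python sketch; it tries all splits, so it is exponential in the worst case, a specification rather than a practical matcher:

```python
def satisfies(w, e):
    """Does the word w satisfy the regular expression e?
    e is a nested tuple: ('word', s), ('star', e1),
    ('cat', e1, e2) or ('alt', e1, e2)."""
    tag = e[0]
    if tag == 'word':
        return w == e[1]
    if tag == 'alt':
        return satisfies(w, e[1]) or satisfies(w, e[2])
    if tag == 'cat':
        # Try every split w = w1 . w2.
        return any(satisfies(w[:i], e[1]) and satisfies(w[i:], e[2])
                   for i in range(len(w) + 1))
    if tag == 'star':
        if w == '':
            return True
        # Split off a non-empty prefix satisfying e1, to guarantee progress.
        return any(satisfies(w[:i], e[1]) and satisfies(w[i:], e)
                   for i in range(1, len(w) + 1))
    raise ValueError(f'unknown regular expression {e!r}')
```

Requiring a non-empty prefix in the `star` case is what keeps the recursion well-founded; the definition on the slide permits $w_1 = \epsilon$, but that split makes no progress.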
Structure of a Scanner Generator
A scanner generator proceeds as follows:
1. Translate the regular expressions belonging to the tokens into NDFAs.
2. Combine the NDFAs for the tokens into a single NDFA.
3. Translate the NDFA into a DFA.
4. Minimize the DFA.
5. Compress the DFA and generate tables.
Translating a Regular Expression into an NDFA
Translating regular expressions into NDFAs is surprisingly easy.
For a regular expression $e$, the translation $A(e) = (\Sigma, Q, Q_s, Q_a, \delta)$ will be defined on the following slides.
$\Sigma$ will be always the same.
$A$ is defined by recursion on the structure of $e$.
Translating a Regular Expression into an NDFA
Assume that $e$ is built from a single word $s$. The translation $A(e)$ is the automaton $(\Sigma, \{q_s, q_a\}, \{q_s\}, \{q_a\}, \{(q_s, s, q_a)\})$.
Translating a Regular Expression into an NDFA
Assume that $e$ has form $e = e_1 \cdot e_2$. Let
$$A_1 = A(e_1) = (\Sigma, Q_1, Q_{1,s}, Q_{1,a}, \delta_1),$$
and let
$$A_2 = A(e_2) = (\Sigma, Q_2, Q_{2,s}, Q_{2,a}, \delta_2).$$
Assume that $Q_1$ and $Q_2$ have no states in common. Otherwise, rename the states in $A_1$.
$A(e_1 \cdot e_2) = (\Sigma, Q, Q_s, Q_a, \delta)$ is obtained as follows:
- $Q = Q_1 \cup Q_2$,
- $Q_s = Q_{1,s}, \quad Q_a = Q_{2,a},$
- $\delta = \delta_1 \cup \delta_2 \cup \{(q, \epsilon, q') \mid q \in Q_{1,a}, \ q' \in Q_{2,s}\}$.
Translating a Regular Expression into an NDFA
For a regular expression $e$ of form $e = e_1 \mid e_2$, let
$$A_1 = A(e_1) = (\Sigma, Q_1, Q_{1,s}, Q_{1,a}, \delta_1),$$
and let
$$A_2 = A(e_2) = (\Sigma, Q_2, Q_{2,s}, Q_{2,a}, \delta_2).$$
Assume that $A_1$ and $A_2$ have no states in common. If they have, then rename the states in $A_1$. Then $A(e_1 \mid e_2) = (\Sigma, Q, Q_s, Q_a, \delta)$ is obtained as follows:
- $Q = Q_1 \cup Q_2,$
- $Q_s = Q_{1,s} \cup Q_{2,s},$ $Q_a = Q_{1,a} \cup Q_{2,a}.$
- $\delta = \delta_1 \cup \delta_2.$
Translating a Regular Expression into an NDFA
For a regular expression $e$ of form $e = e_1^*$, let
$$A_1 = A(e_1) = (\Sigma, Q_1, Q_{1,s}, Q_{1,a}, \delta_1).$$
Then $A(e_1^*) = (\Sigma, Q, Q_s, Q_a, \delta)$ is obtained as follows:
- $Q = Q_1 \cup \{\hat{q}\}$
- $Q_s = \{\hat{q}\}$, $Q_a = \{\hat{q}\}$,
- $\delta = \delta_1 \cup \{(\hat{q}, \epsilon, q) \mid q \in Q_{1,s}\} \cup \{(q, \epsilon, \hat{q}) \mid q \in Q_{1,a}\}$.
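The four cases of $A(e)$ fit together in a few dozen lines. A Python sketch; integer states are drawn from a global counter, so sub-automata never share states and the renaming step is unnecessary; '' stands for $\epsilon$:

```python
import itertools

_fresh = itertools.count()

def A(e):
    """Translate a regular expression (nested tuples, as before) into an
    NDFA (Q, Q_s, Q_a, delta), with delta a set of triples."""
    tag = e[0]
    if tag == 'word':
        qs, qa = next(_fresh), next(_fresh)
        return ({qs, qa}, {qs}, {qa}, {(qs, e[1], qa)})
    if tag == 'cat':
        Q1, Qs1, Qa1, d1 = A(e[1])
        Q2, Qs2, Qa2, d2 = A(e[2])
        # Epsilon transitions from the accepting states of A(e1)
        # to the starting states of A(e2).
        eps = {(q, '', q2) for q in Qa1 for q2 in Qs2}
        return (Q1 | Q2, Qs1, Qa2, d1 | d2 | eps)
    if tag == 'alt':
        Q1, Qs1, Qa1, d1 = A(e[1])
        Q2, Qs2, Qa2, d2 = A(e[2])
        return (Q1 | Q2, Qs1 | Qs2, Qa1 | Qa2, d1 | d2)
    if tag == 'star':
        Q1, Qs1, Qa1, d1 = A(e[1])
        qhat = next(_fresh)   # the new state, both starting and accepting
        eps = ({(qhat, '', q) for q in Qs1} |
               {(q, '', qhat) for q in Qa1})
        return (Q1 | {qhat}, {qhat}, {qhat}, d1 | eps)
    raise ValueError(f'unknown regular expression {e!r}')
```

Each case mirrors one slide of the construction; the base case produces two states and one word-labelled transition, exactly as defined.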
Translating a Regular Expression into an NDFA
**Theorem** Let \( e \) be a regular expression. Then \( w \) satisfies \( e \) iff \( A(e) \) accepts \( w \).
Deterministic Finite Automata
**Definition:** An NDFA $A = (\Sigma, Q, Q_s, Q_a, \delta)$ is called deterministic if
1. $Q_s$ contains at most one element,
2. $(q, s, q') \in \delta \Rightarrow s$ has length 1.
3. $(q, s, q_1), (q, s, q_2) \in \delta \Rightarrow q_1 = q_2$.
In summary, a DFA always knows which transition to make when it sees the next input character.
Determinization
In the slides that follow, we present a procedure that transforms an NDFA into an equivalent DFA.
Simplification of $\delta$
In our definition of NDFA, it is allowed to have transitions of form
$(q, w, q')$ in $\delta$, where $|w| \geq 2$.
The first step is to eliminate these transitions. Let
$A = (\Sigma, Q, Q_s, Q_a, \delta)$ be an NDFA.
- As long as $\delta$ contains a transition $(q, w, q')$ with $|w| \geq 2$, do the following: Write $n = |w|$. Let $q_1, \ldots, q_{n-1}$ be a sequence of new states, not in $Q$. Put
$$Q := Q \cup \{q_1, \ldots, q_{n-1}\},$$
and put
$$\delta := \delta \setminus \{(q, w, q')\} \cup \{(q, w_1, q_1), (q_1, w_2, q_2), \ldots, (q_{n-1}, w_n, q')\}.$$
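This splitting step is mechanical (a Python sketch; fresh intermediate states are represented as tagged pairs so they cannot clash with existing states):

```python
import itertools

def split_long_transitions(Q, delta):
    """Replace every transition (q, w, q') with |w| >= 2 by a chain of
    single-letter transitions through fresh intermediate states."""
    fresh = itertools.count()
    Q2, d2 = set(Q), set()
    for (q, w, q2) in delta:
        if len(w) < 2:
            d2.add((q, w, q2))          # already short enough
            continue
        # q -> q_1 -> ... -> q_{n-1} -> q2, one letter per step.
        chain = [q] + [('new', next(fresh)) for _ in range(len(w) - 1)] + [q2]
        Q2.update(chain[1:-1])
        for i, letter in enumerate(w):
            d2.add((chain[i], letter, chain[i + 1]))
    return Q2, d2
```

After this pass, every transition label has length 0 (an $\epsilon$-transition) or 1, which is what the determinization on the following slides assumes.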
Outline (1)
If you have a non-deterministic automaton $\mathcal{A} = (\Sigma, Q, Q_s, Q_a, \delta)$, then for every word $w \in \Sigma^*$, there exists a set of reachable states $Q' \subseteq Q$, which is obtained as follows:
A state $q$ is reachable under $w$ if there exists a finite sequence of words $w_1, \ldots, w_n$, and a sequence of states $q_1, q_2, \ldots, q_{n+1}$, s.t.
- $w = w_1 \cdot \ldots \cdot w_n$,
- $q_1 \in Q_s$, $q_{n+1} = q$,
- Each $(q_i, w_i, q_{i+1}) \in \delta$.
(Intuitively, the state $q$ is reachable under $w$ if the automaton can start in a starting state, eat the word $w$, and end up in state $q$)
Outline (2)
The algorithm explores all sets of reachable states $R \subseteq Q$ and constructs the graph of them.
Since there are only finitely many subsets of $Q$, this exploration will eventually end, and the resulting graph will be a deterministic finite automaton.
Epsilon Closure
Let $S \subseteq Q$ a set of states belonging to an NDFA $A = (\Sigma, Q, Q_s, Q_a, \delta)$. The $\epsilon$-closure of $S$ is the smallest set $S'$, s.t.
- $S \subseteq S'$,
- If $q \in S'$ and $(q, \epsilon, q') \in \delta$, then $q' \in S'$.
$\text{CLOS}(S)$ can be computed as follows:
- $S' := S$,
- As long as there exist $q \in S'$ and $(q, \epsilon, q') \in \delta$, s.t. $q' \notin S'$ do $S' := S' \cup \{q'\}$,
- Now $S' = \text{CLOS}(S)$.
**Step Function**
Let \( S \subseteq Q \) be a set of states belonging to an NDFA \( \mathcal{A} = (\Sigma, Q, Q_s, Q_a, \delta) \). Let \( \sigma \in \Sigma \). Then \( \text{STEP}(S, \sigma) \) is defined as the set
\[ \{ q' \mid \text{there is a } q \in S, \text{ s.t. } (q, \sigma, q') \in \delta \}. \]
Let \( A = (\Sigma, Q, Q_s, Q_a, \delta) \) be an NDFA. The determinization of \( A \) is the automaton \( A' = (\Sigma, Q', Q'_s, Q'_a, \delta') \), which is the result of the following algorithm:
- Start with \( A' := (\Sigma, \{ \text{CLOS}(Q_s) \}, \{ \text{CLOS}(Q_s) \}, \emptyset, \emptyset) \). (If \( Q_a \cap \text{CLOS}(Q_s) \neq \emptyset \), then the starting state \( \text{CLOS}(Q_s) \) is also put into \( Q'_a \).)
- As long as there exist an \( S \in Q' \), and a \( \sigma \in \Sigma \), s.t. \( S' = \text{CLOS}(\text{STEP}(S, \sigma)) \notin Q' \), put
\[
Q' := Q' \cup \{ S' \}, \quad \delta' := \delta' \cup \{ (S, \sigma, S') \}.
\]
If \( Q_a \cap S' \neq \emptyset \), then also
\[
Q'_a := Q'_a \cup \{ S' \}.
\]
- As long as there exist \( S, S' \in Q' \), and a \( \sigma \in \Sigma \), such that \( S' = \text{CLOS}(\text{STEP}(S, \sigma)) \) and \( (S, \sigma, S') \notin \delta' \), put
\[
\delta' := \delta' \cup \{ (S, \sigma, S') \}.
\]
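CLOS, STEP and the determinization loop can be sketched together as follows (Python; DFA states are frozensets of NDFA states, transitions are assumed to be labelled by single characters, with '' as $\epsilon$):

```python
def clos(S, delta):
    """Epsilon closure of a set of states S."""
    S2 = set(S)
    changed = True
    while changed:
        changed = False
        for (q, w, q2) in delta:
            if q in S2 and w == '' and q2 not in S2:
                S2.add(q2)
                changed = True
    return frozenset(S2)

def step(S, sigma, delta):
    """All states reachable from S by one sigma-transition."""
    return {q2 for (q, w, q2) in delta if q in S and w == sigma}

def determinize(alphabet, Q_s, Q_a, delta):
    """Subset construction: returns (Q', q_s', Q_a', delta') of the DFA,
    with delta' a dict mapping (state, letter) to a state."""
    start = clos(Q_s, delta)
    Qp, Qap, dp = {start}, set(), {}
    if Q_a & start:
        Qap.add(start)
    todo = [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            S2 = clos(step(S, a, delta), delta)
            if not S2:
                continue            # no transition on a from S
            dp[(S, a)] = S2
            if S2 not in Qp:
                Qp.add(S2)
                if Q_a & S2:
                    Qap.add(S2)
                todo.append(S2)
    return Qp, start, Qap, dp
```

Since there are only finitely many subsets of $Q$, the worklist empties eventually, exactly as the termination argument on the slide says.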
Minimalization of a DFA
It can happen that the DFA that was obtained by the previous construction, is not minimal. Such a DFA will appear if one determinizes the NDFA resulting from the following regular expression: $(ab|(ab)^*)^*$.
On the following slides we will give a procedure for detecting states with the same observational behaviour. Once such states are found, they can be unified, which results in an automaton with fewer states.
**Definition:** Let \((\Sigma, Q, Q_s, Q_a, \delta)\) be a DFA. A state partition \(\Pi\) is a set of sets of states with the following properties:
- For every \(q\) in \(Q\), there is an \(S \in \Pi\), s.t. \(q \in S\).
- For every \(q \in Q\), if there are \(S_1, S_2 \in \Pi\), s.t. \(q \in S_1\) and \(q \in S_2\), then \(S_1 = S_2\).
So \(\Pi\) separates \(Q\) into different groups. Each \(q \in Q\) occurs in exactly one group.
The aim is to construct $\Pi$ in such a way that states that ’behave in the same way’ go into the same group.
Initially, all states are put in a single group. Then all groups are inspected for states that behave differently in some way. If such states are found, the group is separated into two new groups. The procedure stops when no more separations are possible.
Two states have different behaviour if
1. One of them is an accepting state, while the other is not
2. There is a letter $s \in \Sigma$, such that the transitions from the two states end up in states that lie in different groups of the partition.
3. There is a letter $s \in \Sigma$, such that from one of the states a transition is possible, while from the other it is not.
Minimalization Algorithm (Initial Partition)
- The algorithm starts with the partition
\[ \Pi := \{Q \setminus Q_a, Q_a\}. \]
If different elements in \( Q_a \) accept different tokens, one has to further partition \( Q_a \) according to the tokens that are being accepted.
For example if \( Q_a \) consists of three states \( q_1, q_2, q_3 \), where \( q_1 \) accepts \textbf{real}, and \( q_2, q_3 \) accept \textbf{int}, then one has to start with the partition
\[ \Pi := \{ Q \setminus \{q_1, q_2, q_3\}, \{q_1\}, \{q_2, q_3\} \}. \]
Minimalization Algorithm (Refining the Partition)
- As long as there exist $S, S' \in \Pi$, states $q_1, q_2 \in S$, and a $\sigma \in \Sigma$, and a state $q'_1 \in S'$, s.t.
$$ (q_1, \sigma, q'_1) \in \delta, $$
while at the same time there is no state $q'_2 \in S'$, s.t.
$$ (q_2, \sigma, q'_2) \in \delta, $$
replace $S$ in $\Pi$ by two sets as follows:
$$ \{ q \in S \mid \text{there is a } q' \in S', \text{ s.t. } (q, \sigma, q') \in \delta \}, $$
and
$$ \{ q \in S \mid \text{there is no } q' \in S', \text{ s.t. } (q, \sigma, q') \in \delta \}. $$
Minimalization Algorithm (Reading Off the Result)
Let $A = (\Sigma, Q, Q_s, Q_a, \delta)$ be a DFA. Let $\Pi$ be the partition constructed by the minimalization algorithm. Then the simplified automaton $A' = (\Sigma, Q', Q_s', Q_a', \delta')$ can be constructed as follows:
- $Q' = \Pi$,
- $Q_s' = \{S \in \Pi \mid Q_s \cap S \neq \emptyset\}$,
- $Q_a' = \{S \in \Pi \mid Q_a \cap S \neq \emptyset\}$,
- $\delta' = \{(S, s, S') \mid$ there are $q \in S$ and $q' \in S'$, s.t. $(q, s, q') \in \delta\}$.
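The initial partition, the refinement step, and the reading off of the result can be sketched together (Python; `accepts` maps each accepting state to the token it accepts, as in the int/real example above, and `delta` is a partial function from (state, letter) to state):

```python
def refine_once(pi, delta, alphabet):
    """One refinement step: split the first group containing states with
    observably different behaviour; return None if no split is possible."""
    for S in pi:
        for a in alphabet:
            for Sp in pi:
                inside = frozenset(q for q in S if delta.get((q, a)) in Sp)
                if inside and inside != S:
                    return [T for T in pi if T != S] + [inside, S - inside]
    return None

def minimize(Q, q_s, accepts, delta, alphabet):
    """Partition refinement, followed by reading off the result."""
    # Initial partition: non-accepting states, plus one group per token.
    groups = {}
    for q in Q:
        groups.setdefault(accepts.get(q), set()).add(q)
    pi = [frozenset(g) for g in groups.values()]
    while (split := refine_once(pi, delta, alphabet)) is not None:
        pi = split
    # Reading off the result: each group becomes one state.
    block = {q: S for S in pi for q in S}
    new_delta = {(block[q], a): block[q2] for (q, a), q2 in delta.items()}
    return set(pi), block[q_s], {block[q] for q in accepts}, new_delta
```

The split in `refine_once` covers both of the "different behaviour" cases at once: states whose $\sigma$-transition lands in $S'$ are separated from states whose transition lands elsewhere, or which have no $\sigma$-transition at all.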
Pruning the DFA
Let \((\Sigma, Q, Q_s, Q_a, \delta)\) be an (N)DFA. If \(Q\) contains states that
1. are not reachable from \(Q_s\), or
2. from which there exists no path to a state in \(Q_a\),
then remove these states, and all the transitions in \(\delta\) in which these states occur.
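Pruning amounts to two reachability computations: forward from $Q_s$, and backward from $Q_a$ along reversed transitions (a Python sketch):

```python
def prune(Q, Q_s, Q_a, delta):
    """Keep only states reachable from Q_s and co-reachable from Q_a."""
    def reach(start, edges):
        seen = set(start)
        todo = list(start)
        while todo:
            q = todo.pop()
            for (p, _, p2) in edges:
                if p == q and p2 not in seen:
                    seen.add(p2)
                    todo.append(p2)
        return seen

    forward = reach(Q_s, delta)
    backward = reach(Q_a, {(p2, w, p) for (p, w, p2) in delta})
    keep = forward & backward
    return (keep, Q_s & keep, Q_a & keep,
            {(p, w, p2) for (p, w, p2) in delta
             if p in keep and p2 in keep})
```

States outside `keep` can never occur on an accepting run, so removing them (and their transitions) does not change the accepted language.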
**FLEX tool**
The FLEX tool reads a list of regular expressions and associated actions.
It performs the minimal DFA construction listed on the previous pages.
Usage of FLEX tool (my impression)
- It is really very easy to write a complex scanner with FLEX.
- I find the syntax of FLEX not so good.
- FLEX gives no support in the computation of the attributes. One still has to use `atoi`, `atof`. This means that one still uses an NDFA that somebody wrote by hand. (But you have seen in the exercises that the main problem is in the combination of different tokens. At least this problem is solved by FLEX.)
- The $C^{++}$ interface is not so good. $C^{++}$ is more than a few plusses on top of $C$: it is another programming paradigm, and I don’t see this reflected in FLEX.
Modelling and Analysing Resilient Cyber-Physical Systems
Conference or Workshop Item
How to cite:
Bennaceur, Amel; Ghezzi, Carlo; Tei, Kenji; Kehrer, Timo; Weyns, Danny; Calinescu, Radu; Dustdar, Schahram; Honiden, Shinichi; Ishikawa, Fuyuki; Jin, Zhi; Kramer, Jeffrey; Litoiu, Marin; Loreti, Michele; Moreno, Gabriel; Muller, Hausi; Nenzi, Laura; Nuseibeh, Bashar; Pasquale, Liliana; Reisig, Wolfgang; Schmidt, Heinz; Tsigkanos, Christos and Zhao, Haiyan (2019). Modelling and Analysing Resilient Cyber-Physical Systems. In: 14th Symposium on Software Engineering for Adaptive and Self-Managing Systems 2019, 25-26 May 2019, Montréal, Canada.
Version: Accepted Manuscript
Modelling and Analysing Resilient Cyber-Physical Systems
Amel Bennaceur, Carlo Ghezzi, Kenji Tei, Timo Kehrer, Danny Weyns, Radu Calinescu, Schahram Dustdar, Zhenjiang Hu, Shinichi Honiden, Fuyuki Ishikawa, Zhi Jin, Jeffrey Kramer, Marin Litoiu, Michele Loreti, Gabriel A. Moreno, Hausi A. Müller, Laura Nenzi, Bashar Nuseibeh, Liliana Pasquale, Wolfgang Reisig, Heinz Schmidt, Christos Tsigkanos, Haiyan Zhao
Shonan Seminar 118 Participants
Email: shonan_meeting_118@nii.ac.jp
Abstract—From smart buildings to medical devices to smart nations, software systems increasingly integrate computation, networking, and interaction with the physical environment. These systems are known as Cyber-Physical Systems (CPS). While these systems open new opportunities to deliver improved quality of life for people and reinvigorate computing, their engineering is a difficult problem given the level of heterogeneity and dynamism they exhibit. While progress has been made, we argue that complexity is now at a level such that existing approaches need a major re-think to define principles and associated techniques for CPS. In this paper, we identify research challenges when modelling, analysing and engineering CPS. We focus on three key topics: theoretical foundations of CPS, self-adaptation methods for CPS, and exemplars of CPS serving as a research vehicle shared by a larger community. For each topic, we present an overview and suggest future research directions, thereby focusing on selected challenges. This paper is one of the results of the Shonan Seminar 118 on Modelling and Analysing Resilient Cyber-Physical Systems, which took place in December 2018.
I. INTRODUCTION
The ultimate goal of any software system is to support individuals and groups in their social and professional endeavours. This is ever more important today, when software permeates every aspect of our lives. From smart buildings to medical devices to smart nations, software systems increasingly integrate computation, networking, and interaction with the physical environment. These systems are known as Cyber-Physical Systems (CPS). The National Institute of Standards and Technology (NIST) defines them as follows: “Cyber physical systems are hybrid networked cyber and engineered physical elements co-designed to create adaptive and predictive systems for enhanced performance. Performance metrics include safety and security, reliability, agility and stability, efficiency and sustainability, privacy.” [1].
The need for self-management and self-adaptation is inherent in CPS: they are long-lived, continuously running systems that interact with the environment and humans in ways that can hardly be fully anticipated at design time, and that continuously evolve at runtime. In other words, CPS must be resilient, that is, able to self-adapt to deal with change. Yet, existing software engineering methods often focus on sanitised environments, abstracting away many details including those related to the physical properties of the environment. Theory, methodology, and tools for the systematic design and engineering of CPS are yet to be defined.
First, theories are crucial to understand the interplay between the physical and the digital worlds. Typically, the changing topology of space in which computations are embedded needs to be understood during design and managed properly during operation. Furthermore, as many of those systems are safety critical, rigorous modelling and analysis are necessary to provide guarantees about the overall behaviour of the CPS. This rigorous design is often challenged by the differences in nature of the components of these systems, including discrete-time computation components as well as continuous-time physical components. Hence integration is made more challenging and so is planning and controlling of the emergent behaviour of multiple such hybrid systems. Furthermore, many transversal issues such as security and adaptation are made more difficult due to the inherent uncertainty of the physical environment, and the incompleteness of any model thereof.
The timeliness of this topic is illustrated through (a) an increasing number of papers and surveys in the domain such as the systematic survey by Muccini et al. [2] which focuses specifically on architectural adaptation for CPS, (b) some dedicated workshops such as SEsCPS [3] which focuses on smart CPS, (c) standardisation effort such as the NIST Cyber-Physical Systems Program [1], (d) multiple organised working seminars such as SENCPS [4] which explores synergies between software engineering and CPS, and (e) multiple government initiatives such as NSF Cyber-Physical Systems Virtual Organization [5] and EU CyPhERS (Cyber-Physical European Roadmap & Strategy) [6].
This paper reports on the outcome of a Shonan seminar that aimed to reflect on both the theoretical and practical underpinning of resilient CPS. The key topics investigated were: (i) Theory of Resilient CPS (ii) Design and Engineering of Resilient CPS, and (iii) Applications and Exemplars for CPS. These topics require the interaction of many areas of expertise in embedded systems, verification and simulation, and spatial reasoning besides those in software engineering and adaptive software systems. In the following, we summarise the main points discussed during the seminar.
Section II focuses on theory and highlights the need to rethink the modelling of software to account for ecosystems of
CPS, to redefine assurances in terms of equilibrium, and how these theoretical concepts can be taught. Section III moves to the engineering space and focuses on self-adaptation. It highlights the need for decentralised control of ecosystems of CPS and highlights human behaviour aspects. Section IV motivates and presents a feature-based classification scheme that serves as an instrument to organise and structure CPS exemplars. Finally, Section V concludes the paper.
II. FOUNDATION
A. Motivation
Resilient CPS involve rethinking design and engineering with a major focus on composition and dynamic environments. One possible way to capture those aspects is considering ecosystems that compose software platforms as well as communities of users [7].
The rigorous analysis of CPS requires models that represent heterogeneous aspects of CPS across different layers of the technology stack—from the physical, sensor and actuator layer, to communication and middleware, up to application layer. Models may be required across tiers of the CPS, to represent heterogeneous types of software, from user applications to supporting services and back-end storage. These are inherently multi-faceted and typically belong to different disciplines (e.g., physical, communication, software, social). An important challenge is then how to align the abstractions of these heterogeneous models into a unified representation that allows for reasoning and supporting adaptation decisions.
While rigorously representing CPS is difficult, their composition, analysis, and adaptive control are even more challenging [8]. In particular, adaptive control of CPS is challenging due to their inherent hybrid nature. On the one hand, discrete-time control focuses on functional requirements and deals with composition, but requires complete knowledge of the environment. On the other hand, continuous-time control focuses on quantitative requirements and adapts to perturbations in the environment, but does not support composition and concurrency. Defining appropriate assurance properties and methods for CPS is essential.
Given the diversity of techniques and methods that are foundational for modelling and analysing CPS, curricula that prepare and train a skilled workforce should reflect this diversity and multidisciplinarity.
B. Ecosystems
The choice of the environment depends on the scale of the system at hand: how a CPS is defined depends on the scope and the context. Figure 1 gives an example of how we can model an automotive system at four different levels of granularity. For each level of granularity, the notion of environment is defined with respect to the chosen system. For example, while modelling the engine, the environment is made up of the other components of the car. When modelling a car, other cars compose the environment. When designing a platoon, the transport infrastructure can represent the environment. Finally, when considering a smart city as a CPS, the environment may include other cities.
The scope and goals of those ecosystems need to be well understood in order for the impact of collaboration and interconnection to be specified rather than just incurred. Understanding, let alone controlling, emergent collaborations between communities of users and CPS is difficult, and the theory and processes for doing so are still to be defined.
C. Assurance
Resilience has been defined as the persistence of dependability while facing change [9], and is often understood as the ability of the system to return to a viable zone/stability [10] while avoiding Zeno behaviour, i.e. the system undergoing an unbounded number of discrete transitions in a finite and bounded length of time. The classic notion of satisfaction is insufficient to describe properties of such behaviour. Therefore, we consider the notion of equilibrium as a new form of satisfaction. The idea is that, in the presence of perturbations, the system maintains a behaviour within its multidimensional viability zone rather than satisfying a fixed property. Moreover, the system actively monitors whether it is in its normal viability zone and is able to bring itself back within it if it ventures outside. After returning or healing, the system can potentially be stronger, and so even the bounds can change, leading to contextual viability zones [11]. Different definitions of this notion can lead to different interpretations and requirements.
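To make the notion of a multidimensional viability zone concrete, the following sketch shows a monitor that checks whether the system state lies inside per-dimension bounds and triggers a recovery action when it drifts outside. The dimensions, bounds, and the clamping recovery strategy are illustrative assumptions, not part of the paper.

```python
def in_viability_zone(state, zone):
    """Return True if every dimension of `state` is within its bounds."""
    return all(lo <= state[dim] <= hi for dim, (lo, hi) in zone.items())

def step(state, zone, recover):
    """One monitoring step: invoke recovery if the state has left the zone."""
    if not in_viability_zone(state, zone):
        return recover(state, zone)   # bring the system back inside
    return state

# Hypothetical two-dimensional zone (temperature, load).
zone = {"temp": (15.0, 30.0), "load": (0.0, 0.8)}

def clamp_recover(state, zone):
    # Naive recovery: clamp each dimension back to its nearest bound.
    return {dim: min(max(v, zone[dim][0]), zone[dim][1])
            for dim, v in state.items()}

print(step({"temp": 35.0, "load": 0.5}, zone, clamp_recover))
# -> {'temp': 30.0, 'load': 0.5}
```

A contextual viability zone, as discussed above, would correspond to the bounds in `zone` themselves being updated after a recovery.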
For self-adaptive CPS, assurances must also consist of comprehensive evidence (obtained through modelling and simulation, testing, formal verification, compliance with established practices, etc.) that the CPS can safely achieve the goals of their intended application in the physical environment in which they operate. Given the heterogeneity and distributed nature of many CPS and the complexity of their goals, devising this comprehensive body of evidence represents a major challenge that is not fully addressed by existing approaches.
A further challenge in the provision of assurances for CPS is the need to integrate assurance evidence from all stages of the CPS lifecycle. Assurance cases for CPS must combine development-time evidence from the CPS design, implementation and verification with runtime evidence that they continue to safely achieve their goals during self-adaptation. Dynamic safety cases have been used to tackle this challenge for self-adaptive software [12], but extending their applicability to CPS requires significant additional research due to the physical aspects of these systems and of their goals.
Modelling and reasoning about spatio-temporal properties is also important. Cyber-physical spaces [13] are composite models integrating human agents, computational and physical aspects of systems. Formal languages such as spatio-temporal logics [14], [15] can be used to describe, verify, and test complex properties where the spatial and temporal part are intrinsically connected and influence each other. Furthermore, they provide efficient monitoring procedures to verify the property and they deal with changes in spatial configuration.
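As a toy illustration of spatio-temporal monitoring, the sketch below checks a property of the form "at every time step, every location in a region satisfies a predicate". Real spatio-temporal logics such as those cited in [14], [15] are far richer; the data layout and names here are illustrative assumptions only.

```python
def monitor(trace, region, pred):
    """Check a simple spatio-temporal safety property.

    trace: list of {location: value} snapshots over time.
    Returns (True, None, None) if the property holds everywhere,
    otherwise (False, t, loc) for the first violation found.
    """
    for t, snapshot in enumerate(trace):
        for loc in region:
            if not pred(snapshot[loc]):
                return (False, t, loc)
    return (True, None, None)

# Hypothetical example: pollution readings at two locations over three steps.
trace = [{"a": 0.2, "b": 0.3}, {"a": 0.4, "b": 0.9}, {"a": 0.1, "b": 0.2}]
ok, t, loc = monitor(trace, ["a", "b"], lambda v: v < 0.8)
print(ok, t, loc)   # -> False 1 b
```

Changes in spatial configuration, mentioned above, would correspond to the `region` argument varying between snapshots.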
D. Education
The multifaceted nature of designing and engineering CPS raises multiple questions on how to educate students with those foundational concepts in CPS in order to create and maintain a skilled workforce to support the design, engineering, deployment, and operation of future CPS. CPS engineers, scientists and developers not only need strong backgrounds in CPS foundations, but also significant knowledge in relevant application domains. The cross-cutting and rapidly evolving application of sensing, actuation, control, communication and computing presents significant challenges for industry, academia and governments. Existing engineering and computer science programs are challenged in teaching the comprehensive skill set required for a successful career in the CPS realm [16]. The software engineering community has made tremendous strides in designing and operating highly dynamical software systems by developing methods and techniques to deal with CPS uncertainty and resilience at runtime as well as standardise and distribute CPS components and services effectively. It is high time to inject these innovations into computing and software curricula which still largely concentrate on design-time aspects of, for example, requirements, models and V&V (Verification and Validation). Digital control, which integrates discrete and continuous mathematics, is central to CPS. On the one hand, computer science and software engineering programs need digital control courses; on the other hand, traditional engineering programs need to include software engineering courses. Designing CPS contents involves a careful balancing of physical and cyber aspects as well as CPS application knowledge [17]. While adding CPS courses, options or degree programs is extremely challenging due to the many competing forces, trained CPS students are needed in industry to harvest CPS rich economic opportunities.
III. ENGINEERING SELF-ADAPTATION FOR RESILIENT CPS
A. Motivation
CPS must handle high levels of dynamicity and uncertainty. This is due to factors that include workload variation, interactions with human users and operators, regular goal changes, and components joining and leaving the CPS. As such, the software controlling the CPS operation must manage its dynamicity and uncertainty, using self-adaptation to ensure that the system behaviour stays within the bounds defined by its goals. For CPS used in safety-critical applications, these goals often specify strict safety, dependability and performance requirements. Accordingly, the CPS control software must also provide assurances guaranteeing the system compliance with these requirements. While the features we mentioned so far are common to most types of self-adaptive systems, several distinguishing characteristics of CPS further increase the challenges associated with the engineering of their control software. First, the heterogeneity of the CPS components and of their sensors and actuators (vertically across the technology stack, and horizontally across different components and subsystems) greatly increases the complexity of the control software. Second, the distributed deployment of most CPS, often with only unreliable, high-latency or low-bandwidth communication affordable between components, precludes the maintenance of up-to-date global system models. Third, even when such global models can be assembled and kept up to date, they are typically too large to be analysed efficiently and to support timely reasoning about the CPS. Fourth, many CPS are assembled through the integration of components owned by different organisations. Last but not least, the constraints and optimisation criteria specified by CPS goals refer not only to computational aspects such as throughput and task ordering, but also to physical aspects of the system components.
This unique combination of characteristics is responsible for multiple open challenges in developing self-adaptation methods and software for resilient CPS. In the remainder of this section, we summarise four of these open challenges that we expect to drive future research in this area.
B. Control software decentralisation
For the numerous CPS for which system-level modelling and analysis are unfeasible, or the system components are owned by multiple organisations, the control software needs to be decentralised. Examples of such CPS include many Internet of Things (IoT) systems, unmanned-vehicle CPS, and smart e-health CPS. For instance, to support multiple tenants and increase the scale of the IoT system presented in [18], the control software necessarily needs to be decentralised to enable local decision-making while keeping the energy consumption of battery-powered nodes within bounds. As another example, in the CPS of unmanned underwater vehicles from [19], the driving factors for decentralising the control software are the efficiency of modelling and analysis, and ensuring the mission goals regardless of the inherent restrictions of communication under water. Finally, in a smart e-health CPS such as the one presented in [20], different parts of the system have different owners that may be unable to share all information (e.g., for security or privacy reasons); hence, autonomy of subsystems and decentralising the control software is imperative.
In summary, decentralising self-adaptation enables dealing with multiple owners and autonomy of CPS components, and inherent distribution and restrictions of resources. However, successfully decentralising the CPS control software is neither a panacea nor without its costs. We highlight four implications or potential drawbacks, together with their associated challenges and starting points for addressing them.
As CPS are often long-living systems that organically grow, decentralisation of control software can serve as an enabler to support robust and scalable system evolution. However, this raises the challenge of suitable coordination capabilities for entities to join and leave the CPS ecosystem. Agent coordination and protocols [21] could be a starting point for tackling this challenge.
Decentralisation of the CPS control software requires adaptation decisions to be made based on locally available information that are not necessarily altruistic. Consequently, the decisions may be sub-optimal compared to global decision-making. The challenges are then how to measure and quantify the cost of decentralising the control software in terms of loss of decision-making optimality. This cost may then be traded against the degree of decentralisation, e.g., by structuring decision-making for adaptation hierarchically. One source of inspiration to study these challenges is “Price of anarchy” [22], which is a concept from economics and game theory that allows measuring how a system’s efficiency degrades as a result of distributed competitive decision making.
Decentralisation of control software may raise trust issues as well. In a decentralised setting, the subsystems of a CPS may be unwilling or unable to share all the information needed for local decisions, e.g., on how to perform re-configurations. A challenge is then how to ensure sufficient trust in the system and how to ensure that no undesired effects emerge from local decisions. Interesting approaches to start tackling this challenge are computational mechanism design and game theory [23].
An important aspect of CPS is incident handling, e.g., due to security or privacy events. An important challenge is then to understand the impact of decentralisation of control software on incident handling. This impact can be considered from two perspectives: on the one hand, detecting incidents may be more difficult due to locality of activities; on the other hand, the effects may be localised, reducing the harm caused by incidents.
C. Adaptive Security for CPS
As CPS span cyber and physical spaces, they are more vulnerable than conventional software systems to attacks [24]. Malicious actors can exploit cyber accessibility to a digital network to gain access to the physical devices connected to the network (e.g., the German steel mill attack [25]). Malicious actors can also exploit vulnerabilities of physical devices to control them remotely and orchestrate attacks against third-party systems and services (e.g., the Mirai attack [26]).
So far security risks arising from the cyber and physical spaces have been assessed separately [27], leading to gaps and vulnerabilities for parts of the system. Thus, traditional risk assessment methods (e.g., CORAS [28]) need to be revised and should consider the extended attack surface brought by the interplay between cyber and physical components in CPS.
Unpredictability, heterogeneity, and scale make it difficult to anticipate how security threats can materialise and what security countermeasures to apply to prevent them. To protect today's CPS, designing static and rigid security solutions is no longer sufficient. CPS should be designed with the capability to self-protect [29], [30], especially when security threats may arise from different spaces.
Existing approaches proposed to develop self-protecting software systems (e.g., [31]) usually can only react to a set of changes (in the system or its operating environment) that are known at design time by enacting a set of pre-defined countermeasures. This would still leave the sub-system to be protected exposed, for example, to attacks targeting new assets or exploiting vulnerabilities brought by changes in the topology (structure and connectivity) of cyber and physical components. Thus it is necessary to develop novel threat analysis and planning techniques to reason about changing security threats and selecting a set of countermeasures that could guarantee assets protection. These techniques should scale by adaptively focusing on the aspects of the CPS that require protection.
D. Models at runtime
The self-adaptation methods used by CPS must efficiently and coherently leverage multiple types of models at runtime. Models used for self-adaptation often capture uncertainties (e.g., in terms of probabilities of properties in the environment), or the models themselves may have uncertainty (e.g., due to sensor noise). Given the heterogeneity of CPS, a challenge is then how to ensure that the runtime models are sufficiently accurate to make timely adaptation decisions. Rephrased from a models@runtime perspective, the question raised by this challenge is: what does causal connection mean for runtime models of CPS, and how can this causality be realised?
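The causal connection between a runtime model and the system it represents can be sketched as two synchronised views, as below. This is an illustrative design assumption, not an API from the paper: `ManagedElement`, `RuntimeModel`, and the `reflect`/`reify` operations are hypothetical names for the two directions of the causal link.

```python
class ManagedElement:
    """Stands in for a CPS component with a configurable parameter."""
    def __init__(self):
        self.config = {"rate": 10}
        self.model = None

    def apply(self, key, value):          # a change originating in the system
        self.config[key] = value
        if self.model is not None:
            self.model.reflect(key, value)

class RuntimeModel:
    """A model causally connected to a managed element."""
    def __init__(self, element):
        self.view = dict(element.config)  # model starts as a copy
        self.element = element
        element.model = self

    def reflect(self, key, value):        # system -> model
        self.view[key] = value

    def reify(self, key, value):          # model -> system
        self.view[key] = value
        self.element.config[key] = value

elem = ManagedElement()
model = RuntimeModel(elem)
elem.apply("rate", 20)   # a change in the system appears in the model
model.reify("rate", 5)   # a change in the model is enacted in the system
print(model.view["rate"], elem.config["rate"])   # -> 5 5
```

For a distributed CPS, the hard part is doing this propagation over unreliable links with bounded staleness, which the in-process sketch above deliberately ignores.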
As CPS are often large-scale systems and the control software for self-adaptation is decentralised, an important challenge is to decide what information is collected where and how it is managed, distinguishing between information underpinning self-adaptation decisions (e.g., to gain the trust of users, and to enable CPS certification by regulators) and information supporting users and operators in their regular interactions with the system. The adoption and success of many envisaged CPS depend on this challenge being addressed by the research community.
¹ Recall that a causal connection refers to the link between the managed system and the model representing it such that whenever a change is made to the model, this change is reified in the system, and whenever the system changes, this change is reflected in the corresponding model.

Numerous CPS used in smart cities, e-health, smart transportation and similar applications are complex socio-technical systems. Humans who interact with these CPS are not merely providers of system input and consumers of artefacts produced by the system. They are first-class participants in the CPS, whom the system relies upon for contributions to decision making, to the execution of these decisions, etc. This means that the self-adaptation methods employed by these CPS must consider human participants in all their steps, from the monitoring and analysis of the system and its environment, to the synthesis of adaptation plans and the execution of these plans. While preliminary work and thoughts on self-adaptive systems with “humans in the loop” (e.g., [33], [34]) provide a starting point for tackling this challenge, further research is needed to apply these concepts to CPS with human participants.
Research has emphasised the need for social adaptation, where the software system analyses users’ feedback and updates its behaviour to best satisfy the requirements in the given context [35]. In fact with the prevalence of mobile and ubiquitous technology, it is becoming easier to have a better understanding of user preferences, and one can aim to compose both digital and social services [36].
IV. EXEMPLARS
A. Motivation
Good (software) engineering research not only requires methodological, technical and theoretical results, but also convincing evidence that these results are sound [37]. Exemplars are well-suited for validation, studying relevant problems, and as a medium for education. Exemplars have been collected and established in various areas of engineering software-intensive systems, e.g., in requirements engineering [38], software and system evolution [39], software product-line engineering [40], and self-adaptive and self-managing systems [41]. However, to the best of our knowledge there is no structured catalogue or repository of exemplars specifically addressing CPS.
Therefore, our goal is to provide comprehensive information about CPS exemplars that would be otherwise scattered in the literature or restricted to local usage in dedicated laboratories, such as the Cyber-Physical Systems Laboratory at the HPI [42] or the Virtual Experiences Laboratory (VXLab) at the Royal Melbourne Institute of Technology [43], [44]. The primary target group comprises researchers and educators who can use the collection as a source of information to find the exemplars which fit to their individual needs. We focus on a common classification scheme for characterising the exemplars and a technical infrastructure for collecting these exemplars.
B. Classification Scheme
As mentioned above, collections of exemplars have been established by several research communities. The SEAMS community maintains a catalogue of exemplars for self-adaptive systems, ranging from generic artefacts to specific model problems [41]. Some of these exemplars are specifically addressing CPS and represent a good starting point for our classification scheme. Yet, our goal is to address CPS from a broader perspective, including further qualities besides self-adaptation and -management.
Moreover, exemplars in the SEAMS catalogue are mainly described in an unstructured way using natural language. While this has the advantage that providing new exemplars is easy, searching for an exemplar offering specific characteristics can be difficult and tedious. Therefore, we propose a more detailed classification scheme that enables structured descriptions of CPS exemplars amenable to (semi-)automated search. This scheme should allow one to a) characterise the general kind of CPS represented by an exemplar as well as b) characterise a specific exemplar itself.
a) Characterising the kind of CPS represented by an exemplar: To characterise the general kind of CPS represented by an exemplar, we rely on techniques that are primarily known from the field of software product-line engineering, particularly the use of feature models [45]. These have proven well-suited for structuring a domain of interest. The idea is that the features including their inter-relations formally capture the variation points of the set of conceivable exemplars, while the kind of system represented by a specific exemplar is precisely characterised by a valid configuration of the feature model. Besides formally documenting the main variation points of a CPS, such a feature model also provides a common yet high-level terminology for CPS, which is of increasing importance given its interdisciplinary nature. Our aim is not to come up with an exhaustive taxonomy or ontology, but with a feature model which is generic enough to classify any kind of CPS of interest and specifically tailored for our purpose of describing exemplars. An excerpt of an early version of our feature model is shown in Figure 2.
A first variation point to do a high-level characterisation is the Domain in which a CPS is intended to operate. Some typical domains are Healthcare, Transportation or Food Security. Another high-level yet distinguishing feature is whether a CPS emphasises the role of the Human interacting with the system or not.
In addition, there are a number of cross-cutting features which, regardless of the particular domain and regardless of whether the CPS emphasises human interaction, are interesting for validating a broad range of generic methods.

Qualities. Since we are specifically addressing the analysis of CPS, one important variation point pertains to the Qualities which we expect to be exposed by a particular kind of CPS. Qualities of interest include Dependability properties such as Safety, Security and Privacy. Self-Adaptivity leads to improvements in dependability. Specifically, considering our example dependability properties, Self-Healing and Self-Protection refer to the automatic detection of failures and attacks as well as their subsequent correction and suppression, respectively.
Distribution. CPS are highly distributed systems by definition. However, we may distinguish whether Distribution is only Virtual or also Spatial. The former reflects the classical notion of a distributed system where computational entities are distributed and connected over some network structure, while the latter applies to CPS which are designated to be operated in a larger spatial environment such as smart buildings or cities.
Evolution. Another aspect which is of particular interest for various analysis methods is Evolution, where we distinguish among Short-Term evolution and Long-Term evolution. Short-term evolution means that the system operates in a highly dynamic environment undergoing continuous changes, while long-term evolution stresses the fact that a system is intended to be operated for a long period of time.
For example, let us consider two concrete CPS exemplars from the SEAMS catalogue: The Automated Traffic Routing Problem (ATRP) [46] and an IoT-based ecosystem to support nutrition called “Feed me, Feed me” (FmFm) [47]. According to our feature-based classification, both systems have a set of common and individual features. While stemming from different domains, namely Transportation and Food Security, both systems share a highly dynamic nature (Short-Term) and must deal with frequent changes and uncertainty (Adaptivity). Concerning further qualities, FmFm produces vast quantities of personal data which demands robust protection mechanisms (Security and Privacy), while Safety is one of the primary concerns for ATRP. Moreover, ATRP clearly operates in a Spatial environment, while this dimension of distribution is of minor importance for FmFm. However, in contrast to ATRP, FmFm puts forward the shared control and partial automation between the software system and its users in the social dimension (Human).
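A feature-based classification like the one above lends itself to machine-checkable configurations. The sketch below encodes the two exemplar descriptions as feature sets; the feature names follow the excerpt in the text, but the validity rule (pick known features, exactly one domain) is a simplified illustration, not the authors' actual feature-model semantics.

```python
# Features drawn from the excerpt of the feature model described in the text.
FEATURES = {"Transportation", "Food Security", "Human", "Safety", "Security",
            "Privacy", "Adaptivity", "Spatial", "Virtual", "Short-Term",
            "Long-Term"}
DOMAINS = {"Transportation", "Food Security"}

def is_valid(config):
    """A configuration uses only known features and picks exactly one domain."""
    if not config <= FEATURES:
        return False
    return len(config & DOMAINS) == 1

# The two SEAMS exemplars as configurations, following the comparison above.
atrp = {"Transportation", "Safety", "Adaptivity", "Spatial", "Short-Term"}
fmfm = {"Food Security", "Human", "Security", "Privacy", "Adaptivity",
        "Short-Term"}

print(is_valid(atrp), is_valid(fmfm))   # -> True True
print(atrp & fmfm)                      # the features the two exemplars share
```

Set intersection directly answers search queries of the form "which exemplars exhibit features X and Y", which is the kind of (semi-)automated search the classification scheme is meant to enable.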
b) Characterisation of a specific exemplar: In addition to the characterisation of the kinds of systems represented by an exemplar, exemplars shall be further characterised by collecting meta-data that are specific to an exemplar instance.
Generic Meta-data include but are not limited to, e.g., literature references where the exemplar has been used, which kinds of artefacts are available for the exemplar, and, if available, a literature reference to where the exemplar has been originally published as well as further pointers where to find more detailed information about the exemplar.
In order to evaluate the scalability of a method, researchers might also be interested in the Size of an exemplar. For our classification scheme, we propose to use a purely qualitative classification into Small-, Medium- and Large-scaled exemplars.
Optionally, an exemplar may also be intended for serving a particular Purpose. Typical purposes are to drive and communicate individual research advances, to compare and contrast alternative approaches, to establish research agendas, and, ultimately, to lead to advances in practices of developing and operating certain kinds of CPSs. This characterisation can be useful since, as argued in [38], there are interferences between these different purposes of exemplars, and an exemplar suited to serve one purpose is not necessarily suited to serve another.
V. CONCLUSION
As CPS increasingly permeate every aspect of modern systems, it is important to define the theoretical and practical foundation for modelling, analysing, and adapting them. This paper summarises some directions towards this goal.
The theoretical foundation involves modelling ecosystems at different levels of granularity and focusing on their composition. It also involves a rich notion of assurance, that not only focuses on the satisfaction of well-specified requirements but also on defining equilibrium and contextual satisfaction.
From an engineering perspective, more decentralisation is necessary to account for the inherently distributed nature of these systems. Models at runtime allow for accommodating the continuous change in the environments of CPS. Accounting for humans as an integral part of these systems also raises multiple challenges for adaptation. In addition, one cannot forget that with openness come threats, and therefore CPS also need adaptive methods to protect themselves.
Finally, establishing a technical infrastructure for collecting and describing CPS exemplars, and for querying and browsing these exemplars is essential, not only for evaluating research but also for education. Given the multifaceted nature of designing and engineering CPS, creating CPS curricula and courses involves a careful balancing of theoretical and practical knowledge of physical and cyber aspects as well as knowledge of CPS applications.
We believe that CPS hold great promise for research in self-adaptation. They also raise many challenges. This paper gave a summary of these challenges and we invite other researchers, educators and practitioners to collaborate with us in addressing them.
ACKNOWLEDGMENT
The authors would like to thank the staff of Shonan Village for their valuable support. We acknowledge SFI grant 13/RC/2094 and EPSRC support.
Best Practice
Open Data Warehouse
Building a Data Warehouse with Pentaho
## Contents
1. Objectives and benefits
2. Management Summary
3. Building a data warehouse system
4. Data sources
5. Data capture
5.1 Designing the ETL process
5.1.1 Best practice approach to working with the Lookup and Join operations
5.2 Practical implementation
5.3 Data storage & management
5.3.1 Performance optimization
5.4 Data analysis
5.5 Data presentation
6. Conclusion and outlook
7. Appendix
1. Objectives and benefits
This best practice guide shows how a powerful, high-performance data warehouse system can be built with open source products. It is of interest to you if you
– are thinking about introducing a data warehouse system into a company and are looking for an appropriate solution
– are more interested in software adaptation than investing your money in software licensing
– want an open and customizable solution that can quickly be put in place and will grow with your needs
– already have open source solutions in mind, but are unsure how to approach such a project
– have a limited project budget
You will benefit from reading this best practice guide, because it
– clearly demonstrates and describes an example architecture for an open source-based data warehouse
– gives you tips for designing and implementing your system
– provides a schema that you can modify to suit your own individual needs
– contains recommendations for proven software products
2. Management Summary
Companies and the environments in which they exist are producing ever faster and ever greater amounts of data. The resulting data contains an enormous economic potential – but only when it has been properly processed and analyzed. A data warehouse solution provides the ideal basis for this processing and analysis. Setting up and developing a data warehouse, however, often entails costly licensing and hardware issues. This deters mainly small and medium-sized companies from undertaking such a project, even though cost-effective open source solutions have long been available in the business intelligence sector. Are these solutions suitable, however, for developing a high-performance data warehouse system or is there really no alternative to commercial software products when it comes to performance and functionality?
This best practice guide addresses whether and how a powerful data warehouse can be built using open source systems.
For this purpose, we will be designing and building a prototypical example architecture using the Pentaho and Infobright solutions. We have implemented this architecture – with its essential features – in a number of projects, many of which have been in successful use for years.
For ease of understanding, this best practice guide has been organized in the following manner:
- General construction of a data warehouse
- Connecting data sources
- Designing the ETL process
- Data storage & management
- Data analysis
- Data presentation
- Conclusions
In the process, all pertinent points associated with the design and implementation of a data warehouse will be covered.
3. Building a data warehouse system
A data warehouse system consists of five levels: Data sources, data acquisition, data management, data analysis and data presentation. The individual layers are realized by means of various concepts and tools and their respective implementations are based on the requirements of the given company.
The data warehouse system used for our prototype is structured as shown in the figure below. All components are open source. The Microsoft database AdventureWorks will serve as the data source. The ETL process at the data acquisition level will be realized with Pentaho Data Integration. This tool is part of the open source solution Pentaho Business Analytics Suite, which, in our example, will be used for data analysis and presentation.
The actual data warehouse used within the data management system will be the analytical database management system, Infobright. Pentaho Mondrian will be used as the OLAP server. The required XML schema will be generated with Pentaho Schema Workbench. Once these components have been implemented, the data at the data presentation level can be prepared in the form of analyses, dashboards, and reports using a variety of Pentaho tools. In the following chapters, the individual layers and their associated tools will be described in detail.
4. Data sources
In our prototype system, we will be using AdventureWorks2008R2 (AWR2) as the basis for our sample data. This OLTP database from Microsoft can be downloaded as a free backup file here: http://msftdbprodsamples.codeplex.com/. The data represents a fictitious, multinational company in the production sector that specializes in the manufacture and sale of bicycles and related accessories. The data model consists of more than 70 tables divided into five categories: Human Resources, Person, Production, Purchasing, and Sales.
The data model is similar to the database structures of real companies in terms of scenarios and complexity of database structure and is, therefore, well suited for demonstration and testing purposes. The backup file (.bak) provided can be easily integrated into Microsoft SQL Server. To do this, open SQL Server Management Studio and, clicking on the database symbol, open the restore database interface (see illustration). The new database will then appear in the Object Explorer.
Microsoft SQL Server is not, of course, open source software. We have, however, employed it for this example so that we can use the test data supplied by Microsoft. Alternatively, the backup can be converted into CSV files. These files can then be loaded into any open source database via the Bulk Load option.
5. Data capture
Data acquisition is concerned with the extraction, transformation and loading (ETL) of operational data into the data warehouse. The goal is to achieve a uniform, high-quality base of data in the data warehouse that can be analyzed as needed. Planning the data model is also one of the tasks associated with data capture.
The data model then forms the basis for the later transformation of the data. Certain fundamental decisions have to be made concerning the business process to be analyzed, the level of detail for the analysis as well as any possible dimensions and values.
We will be evaluating sales data as part of the present prototype. No changes will be made in the level of detail for the data, so all hierarchy levels will be retained. Overall, five dimensions are to be created with respect to the OLTP database. These are date, customer, product, sales territory, and currency. As for the metrics, it should be possible to analyze quantity, shipping costs, discounts granted, taxes, unit costs and revenue. The data will be presented in the form of a star schema, because this is the best modeling variant for achieving our goals (see figure). It is also possible, however, to use a snowflake or flat schema.
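The star schema described above can be sketched as DDL. Table and column names mirror the dimensions and measures named in the text; the exact types and the seventh fact-table key are assumptions, not taken from the original design:

```sql
-- Sketch of the star schema (one dimension shown; DimDate, DimCustomer,
-- DimProduct and DimSalesTerritory follow the same pattern).
CREATE TABLE DimCurrency (
  CurrencyID   INT PRIMARY KEY,
  CurrencyName VARCHAR(50)
);

CREATE TABLE FactSales (
  SalesOrderID      INT,             -- assumed degenerate order key
  OrderDateID       INT,             -- references DimDate
  ShipDateID        INT,             -- references DimDate
  CustomerID        INT,             -- references DimCustomer
  ProductID         INT,             -- references DimProduct
  SalesTerritoryID  INT,             -- references DimSalesTerritory
  CurrencyID        INT,             -- references DimCurrency
  OrderQty          INT,             -- quantity
  Freight           DECIMAL(19,4),   -- shipping costs
  UnitPriceDiscount DECIMAL(19,4),   -- discounts granted
  TaxAmt            DECIMAL(19,4),   -- taxes
  UnitPrice         DECIMAL(19,4),   -- unit costs
  LineTotal         DECIMAL(19,4)    -- revenue
);
```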
5.1 Designing the ETL process
For the implementation of the ETL process, we will use the open source tool Pentaho Data Integration (PDI). This tool provides predefined steps for extracting, transforming, and loading data. These can be created via a graphical drag-and-drop interface called Spoon. Using Spoon, professional ETL routines in the form of transformations and parent jobs can be built with very little effort. Thus, the amount of manual programming required is very small, making this tool ideal for the novice user.
5.1.1 Best practice approach to working with the Lookup and Join operations
The dimensions for the star schema require information from different tables in the OLTP database. To compare and combine this data, Pentaho Data Integration offers Lookup and Join operations. These steps differ in behavior and cost, so it is important to handle them correctly: applying a step in the wrong context can lead to performance problems. Since many tables usually have to be combined within the ETL process, choosing the right steps significantly increases processing speed. We therefore explain the most important steps below.
Database Lookup
Database Lookup enables the comparison of data from different tables in a database, which allows new attributes to be integrated into an existing table. Database Lookup was designed for small to medium-sized data sets. Particular attention needs to be paid to how the cache is configured here, because using an active cache can significantly shorten the processing time. A further increase in performance can be achieved by selecting the checkbox "Load All Data From Table". With larger data sets, however, caching can exhaust the Java heap space, causing the transformation to fail.
Stream Lookup
Stream Lookup is similar to Database Lookup with active caching, but the data to be compared can be taken directly from the stream. Due to continuous caching, Stream Lookup is very resource intensive and should only be used with smaller data sets that cannot be loaded directly from a database. Selecting the checkbox "Preserve Memory (Costs CPU)" reduces the memory footprint at the expense of CPU time: when this option is activated, the loaded data is hashed during sorting, which leads to longer running times.
Merge Join
Merge Join carries out a traditional join operation within the data integration component. A distinction is made between Inner, Full Outer, Left Outer and Right Outer Joins. Merge Join is particularly suitable for large data sets with high cardinality, that is, many different attribute values within a column. Both inputs must be sorted on the attributes being compared. Whenever possible, this should be done directly in the database via "Order By"; alternatively it can be done with Sort Rows, at a cost in performance.
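In Spoon the sorting is configured in the step dialogs, but the underlying idea can be shown in SQL. Table and column names are taken from the AdventureWorks tables used in the transformations below:

```sql
-- Both inputs of a Merge Join must arrive sorted on the join keys.
-- Sorting in the source query ("Order By") is usually cheaper than
-- adding an extra Sort Rows step:
SELECT BusinessEntityID, FirstName, LastName
FROM Person.Person
ORDER BY BusinessEntityID;

SELECT BusinessEntityID, PhoneNumber
FROM Person.PersonPhone
ORDER BY BusinessEntityID;
```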
Database Join
Using Database Join it is also possible to link database tables by creating and executing native queries. Parameters from the previous transformation step can be integrated into the native query: a question mark serves as a placeholder and is replaced by the corresponding parameter when the query is run. When integrating multiple parameters, it is crucial that they are listed in the correct order in the "The parameters to use" area.
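As a hypothetical example of such a native query, assuming the incoming stream supplies a SalesOrderID field as the first (and only) parameter:

```sql
-- Each "?" is filled from the incoming stream, in the order
-- configured under "The parameters to use".
SELECT sod.ProductID, sod.OrderQty, sod.LineTotal
FROM Sales.SalesOrderDetail sod
WHERE sod.SalesOrderID = ?;
```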
Dimension Lookup / Update
The Dimension Lookup / Update step combines the functionality of Insert / Update with that of Database Lookup. It is possible to switch between both functions via the checkbox "Update The Dimension". In addition, this step allows Slowly Changing Dimensions to be maintained efficiently. Dimension Lookup / Update was developed for special application scenarios and is very processor intensive; we mention it here only in the interests of completeness.
**Performance Test**
To highlight the differences in performance, we have examined the various steps in connection with two data sets that correspond to the expected amounts of data in our demo.
As the table shows, Database Join exhibits major performance problems as data volumes increase. For the remaining steps, processing time hardly grows despite the significantly larger amounts of data.
<table>
<thead>
<tr>
<th>Operation</th>
<th>40,000 Data Sets</th>
<th>152,500 Data Sets</th>
</tr>
</thead>
<tbody>
<tr>
<td>Database Lookup</td>
<td>0.9 sec.</td>
<td>1.5 sec.</td>
</tr>
<tr>
<td>Stream Lookup</td>
<td>1.6 sec.</td>
<td>2.3 sec.</td>
</tr>
<tr>
<td>Merge Join</td>
<td>1.3 sec.</td>
<td>2.6 sec.</td>
</tr>
<tr>
<td>Database Join</td>
<td>8.2 sec.</td>
<td>3 min.</td>
</tr>
</tbody>
</table>
**5.2 Practical implementation**
The ETL processes conform to the specifications laid down for a multi-dimensional data model. A transformation is created per dimension or fact table. This provides a better overview and errors can be more easily located. In addition, the risk of overloading the server is also reduced. The various transformations are combined and controlled via a central job:
Before the actual transformation processes for preparing and populating the data warehouse can begin, the database connections to the data source and the data warehouse must first be defined. New database connections can be created in Pentaho Data Integration under the tab View>Database Connections. The selection of the correct database and driver as well as user-specific settings, such as database name or user, is decisive.
It is important to ensure that the JDBC drivers for both databases are in the lib directory of the Data Integration tool. By default, Microsoft SQL Server can be reached via port 1433 and the Infobright database via port 5029.
**DimCurrency**
The dimension DimCurrency is generated based on the table Sales.Currency. Only the name and the abbreviation for the respective currency are applied. When this has been done, an identification number (CurrencyID) is generated for the individual tuples, which acts as the primary key. Then, using the step TableDimCurrency, the table is created and populated.
**DimCustomer**
The second transformation, DimCustomer.ktr, also does not require a join operation. The dimension is based on the table Sales.vIndividualCustomer, which requires only slight changes with regard to the dimension table DimCustomer that is to be created. The first and last name are combined in a common data field for later analysis, and the primary key name is adapted to the designation within the multi-dimensional data model.
**DimPerson**
The transformation DimPerson.ktr is based on the table Person.Person. This is expanded to include Person.PersonPhone and Person.EmailAddress using Database Lookup. In both cases, BusinessEntityID is used for comparison as it serves as the primary key for all three tables.
**DimProduct**
The dimension DimProduct is generated from three tables of the OLTP database. The transformation starts with the table Production.Product. As a first step, we will include the attributes Name and ProductCategoryID from the table Production.ProductSubcategory. The second of these attributes will serve as the key for the second Lookup operation performed on the table Production.ProductCategory. The next step, Select Product, is used to clean up and rename data fields before populating the table with data. While this is being done, an SQL script is run in parallel to create an additional product data set named Freight. This will enable us to filter on freight costs for individual shipments during our later analyses.
**DimSalesTerritory**
The transformation DimSalesTerritory.ktr essentially corresponds to the previous processes. It is based on the table StateProvince. During the Lookup operation, the attributes Name and Group from the tables CountryRegion and SalesTerritory are added to this table. After this has been successfully completed, the dimension is linked to the table Address using a Merge Join. Prior to this, the address lines within the step Contact Fields are combined, which will prove useful for later analysis. Merge Join is realized as a Right Outer Join, ensuring the address information is only updated for the dimension’s data sets. In this transformation, Merge Join is, therefore, the most efficient solution.
**DimDate**
The dimension DimDate cannot be extracted from the OLTP database, so it must be created manually. The basis of this transformation is a CSV document created using Excel. The file contains a unique identification number, the specific date, the corresponding day of the week as well as day, month and year – all separated. The dimension also covers the period from 2003 to 2013. The corresponding CSV file can be integrated into Pentaho Data Integration using the step CSV Input File. The data can then be processed and/or loaded into the target database (as depicted in the figure).
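A minimal sketch of the target table, matching the columns listed for the CSV file (data types are assumptions):

```sql
CREATE TABLE DimDate (
  DateID INT PRIMARY KEY,  -- unique identification number
  Date   DATE,             -- the specific date
  w_day  VARCHAR(10),      -- day of the week
  day    INT,
  month  INT,
  year   INT
);
```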
**FactSales**
This uses the table `Sales.SalesOrderHeader`. It already contains many important keys and values such as Freight and TaxAmt. These key references, however, are not designed for fast analytical access. This is why `OrderDate` and `ShipDate` are replaced in the first section by keys into the previously generated dimension `DimDate`. After this, the attribute `ToCurrencyCode` from the table `Sales.CurrencyRate` is integrated using a Lookup, with `CurrencyRateID` as the key. Since no currency conversion is performed for many orders (transactions within the dollar zone), the column value is often null. These null values are replaced in the next step by the abbreviation USD for dollars. Then the attribute is replaced by the better-performing `CurrencyID` from the dimension `DimCurrency`.
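The null-value replacement for the dollar zone corresponds to this SQL pattern (a sketch only; the actual step operates on the PDI stream):

```sql
SELECT soh.SalesOrderID,
       COALESCE(cr.ToCurrencyCode, 'USD') AS CurrencyCode
FROM Sales.SalesOrderHeader soh
LEFT JOIN Sales.CurrencyRate cr
       ON soh.CurrencyRateID = cr.CurrencyRateID;
```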
Now, the table `SalesOrderDetail` can be integrated using a Left Outer Join. This table provides detailed information on the individual orders. This join combines data sets of different aggregation levels, so these will have to be subsequently modified. The data stream is then split for further processing: The aim here is to create an additional data set per order that includes the aggregated values Freight and TaxAmt. This is implemented in the upper path. The lower part, however, processes the data sets for the individual order items. As an alternative, it is also possible to distribute the value of the items Freight and TaxAmt equally among the individual positions of each respective order.
In the lower section, only the two header values Freight and TaxAmt are set to 0. In parallel, the data sets in the upper part are deduplicated with a distinct statement. Then, various attributes such as `ProductID` or `UnitPrice` are set to 0, and a new value is created from the sum of the taxes and freight costs; it is designated as `LineTotal`. After further fields have been adjusted, the two data streams are joined again before a final selection of the attributes is carried out. The finished fact table now contains seven keys and six values.
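The split and re-merge of the two data streams can be sketched set-based. This is a hedged SQL analogue of the PDI steps, not the actual transformation:

```sql
-- Upper path: one extra record per order carrying the aggregated
-- Freight and TaxAmt as LineTotal; item attributes are zeroed.
SELECT soh.SalesOrderID,
       0                        AS ProductID,
       0                        AS UnitPrice,
       soh.Freight + soh.TaxAmt AS LineTotal,
       soh.Freight,
       soh.TaxAmt
FROM Sales.SalesOrderHeader soh
UNION ALL
-- Lower path: the individual order items with Freight/TaxAmt set to 0.
SELECT sod.SalesOrderID,
       sod.ProductID,
       sod.UnitPrice,
       sod.LineTotal,
       0 AS Freight,
       0 AS TaxAmt
FROM Sales.SalesOrderDetail sod;
```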
5.3 Data storage & management
Depending on the type of data and the objectives pursued, a number of different database management systems (DBMS) can be integrated at the data management level. Here, a distinction must be made between relational and non-relational systems. Non-relational systems, which are also referred to as NoSQL databases, include MongoDB, HBase and CouchDB. The relational DBMS include traditional, row-oriented products such as MySQL and Postgres as well as analytical, column-oriented systems such as Infobright, InfiniDB and MonetDB.
Within the example architecture we have used Infobright as the database management system for our data warehouse. Infobright is an analytical database based on MySQL. It is, therefore, compatible with a variety of well-known MySQL tools, but also offers significantly better performance when dealing with large amounts of data. At the core of Infobright is the Brighthouse Engine, which has been specially developed for analyzing large data sets. Infobright is available in both a free Community Edition and a paid Enterprise Edition. The Enterprise Edition offers significant advantages in terms of functionality and performance, which is why we will be using it for our prototype data warehouse.
The installation and integration of Infobright is easy due to its high degree of automation. Large parts of the configuration and manual "tweaking" of the system, including indexing, are omitted. The corresponding settings are carried out automatically by the system, but can also be modified retrospectively. The configuration files my-ib.ini and brighthouse.ini are located in the installation directory.
5.3.1 Performance optimization
Infobright’s already high performance and compression rate can be enhanced even further. This is important, for example, for the data types Char / Varchar, because they weaken system performance and significantly increase memory requirements. Infobright offers several solutions here, including Lookup Columns and Domain Expert. Both of these are used in our prototype system.
**Lookup Columns**
The principle behind Lookup Columns is similar to that of a compressed glossary or dictionary. When using Lookup Columns, all values are transformed into corresponding indices of the type Integer within a column. In parallel, a directory is created that stores the indexes and the corresponding Char / Varchar values. Comparable terms are represented by the same index. This improves performance and reduces the memory requirements of the database.
Lookup Columns are defined within the Create statement when the table is created; subsequent integration using an Alter statement is not possible. The respective column is marked with the addition "comment 'lookup'". Cardinality plays a crucial role when creating Lookup Columns: they are clearly worthwhile up to a ratio of about 10 to 1, meaning that a column with 100,000 values should contain at most 10,000 distinct instances. For higher cardinalities, any possible gain in performance must be carefully weighed against a longer load time.
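A minimal sketch of the declaration, using one of the dimension tables from our schema (column names assumed):

```sql
-- Lookup Columns must be declared at table creation time;
-- they cannot be added later via ALTER TABLE.
CREATE TABLE DimCurrency (
  CurrencyID   INT,
  CurrencyName VARCHAR(50) COMMENT 'lookup'
);
```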
**DomainExpert**
Infobright offers another type of optimization: DomainExpert. This technology enables the user to describe an attribute's structure in the form of a rule. This substantially improves the way data is processed, stored, and retrieved. According to the developer, queries run up to 50% faster and significantly better data compression can be achieved.
DomainExpert can only be used with columns whose values have a particular pattern or a certain structure. For instance, these include, IP addresses (Number.Number.Number.Number) or email addresses (String@String.String):
```sql
mysql> CALL sys_infobright.create_rule('TickPerf', '%s-%d-%d-%d', 'Stock Ticker Performance Log');
```
Example pattern
Only one of the two methods can be applied per column. In general, Lookup Columns should be preferred; DomainExpert is an alternative for columns that follow a fixed pattern but whose cardinality is too high for Lookup Columns.
<table>
<thead>
<tr>
<th>Table</th>
<th>Column</th>
<th>Pattern</th>
</tr>
</thead>
<tbody>
<tr>
<td>DimCustomer</td>
<td>EmailAddress</td>
<td>%s@%s.%s</td>
</tr>
<tr>
<td>DimDate</td>
<td>Date</td>
<td>%d.%d.%d</td>
</tr>
</tbody>
</table>
DomainExpert rules in our example architecture
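Applied to the two columns above, and mirroring the create_rule signature shown earlier (rule names and comment strings are illustrative):

```sql
CALL sys_infobright.create_rule('EmailPattern', '%s@%s.%s', 'DimCustomer.EmailAddress');
CALL sys_infobright.create_rule('DatePattern',  '%d.%d.%d', 'DimDate.Date');
```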
5.4 Data analysis
Data analysis within a data warehouse architecture is based on the principle of Online Analytical Processing (OLAP). OLAP makes it possible to carry out dynamic analysis in multi-dimensional spaces, regardless of the physical data storage used. OLAP has a number of variations: relational (ROLAP), multi-dimensional (MOLAP) and hybrid (HOLAP) OLAP.
Within our prototype architecture, we have implemented the data analysis level using the open source OLAP server Mondrian. Mondrian follows the ROLAP, i.e. relational, approach: based on a cube definition, it translates MDX queries into multiple SQL queries and, conversely, presents the relational results multi-dimensionally. The XML schema Mondrian requires is created with Pentaho Schema Workbench. This tool lets you establish a direct connection to the database, which can be used to pick attributes for the schema from a drop-down list. The connection is established in the same way as in Pentaho Data Integration.
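To illustrate the ROLAP translation with our schema (the exact SQL Mondrian generates depends on the schema definition; this is an assumed illustration):

```sql
-- An MDX query such as
--   SELECT {[Measures].[OrderQty]} ON COLUMNS,
--          [DimSalesTerritory].Members ON ROWS
--   FROM [Sales]
-- is answered by Mondrian with relational SQL roughly of this shape:
SELECT st.`GROUP`, SUM(f.OrderQty) AS OrderQty
FROM FactSales f
JOIN DimSalesTerritory st
  ON f.SalesTerritoryID = st.SalesTerritoryID
GROUP BY st.`GROUP`;
```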
The schema is created according to a fixed structure: The basis here is the cube, which is created by clicking on the cube symbol. Next, the fact table FactSales needs to be defined, after which the individual dimensions can be set up. In addition to the hierarchies and their associated levels, the integration of the respective dimension table must also be performed.
It may be advisable to separate multiple hierarchies of a dimension table in the XML schema into their own dimensions. This allows the hierarchies to be integrated onto the different axes in parallel. We used this principle in our example architecture for the dimensions ShipDate and OrderDate. Special attention should be paid to the specific order of the hierarchy level and / or attributes.
After this has been done, the various key figures can be integrated into the XML schema. Choosing the right aggregator here is crucial. The values Turnover, TaxAmt, OrderQty, Freight are added together using sum. Adding together the values UnitPrice and UnitPriceDiscount would not be meaningful. Instead, we derive their average using avg.
When setting up the schema, special attention must be paid to the specific characteristics of the respective analysis tools. For ad-hoc analyses, Saiku or Pivot4J can be used instead of the Pentaho Analyzer. Pentaho Analyzer relies on the defined hierarchies for labeling dimensions rather than the actual dimensions themselves.
5.5 Data presentation
The data presentation level is the interface between the system and the end user. Various graphical tools can be used to select, analyze, and visualize the data processed by the data warehouse. Included among these are ad-hoc analyses, dashboards, and reports.
For our open source data warehouse, we have chosen to use Pentaho Analyzer and Analyzer Report for the data presentation and visualization layer. Analyzer Report provides a graphical drag and drop interface with which individual fields can easily be added to an analysis. Using various combinations of attributes and values as well as filters, different views of the data can be created. But before this can be done, the schema as well as the corresponding database connection must first be set up within the User Console. This is done via the menu item Manage Data Sources > New Data Source. Once this has been done, it is possible to produce comprehensive analyses of the data stored in the data warehouse.
6. Conclusion and outlook
The architecture illustrated in this document proves that it is possible to construct a high-performance, user-friendly data warehouse system with open source software. Neither during the construction nor during the operation of the data warehouse did we find any faults, deficiencies or loss of performance in our open source solutions compared with products offered by major manufacturers.
We therefore recommend that you take a serious look at open source business analytics software products. There are now a number of excellent open source tools for data analysis and Big Data scenarios that more than compete with the closed source „big boys“. Small and medium-sized businesses can definitely benefit here, because for them, the majority of solutions provided by large software vendors are too powerful and too expensive to use. Large corporate users, by contrast, will benefit from the open architectures, interfaces, and extreme flexibility that an open source data warehouse system can offer.
## 7. Appendix
<table>
<thead>
<tr>
<th>Table</th>
<th>Column</th>
</tr>
</thead>
<tbody>
<tr>
<td>DimDate</td>
<td>w_day</td>
</tr>
<tr>
<td></td>
<td>month</td>
</tr>
<tr>
<td>DimCustomer</td>
<td>Titel</td>
</tr>
<tr>
<td></td>
<td>City</td>
</tr>
<tr>
<td></td>
<td>StateProvinceName</td>
</tr>
<tr>
<td></td>
<td>PostalCode</td>
</tr>
<tr>
<td></td>
<td>CountryRegionName</td>
</tr>
<tr>
<td>DimPerson</td>
<td>PersonType</td>
</tr>
<tr>
<td></td>
<td>Titel</td>
</tr>
<tr>
<td></td>
<td>FirstName</td>
</tr>
<tr>
<td></td>
<td>LastName</td>
</tr>
<tr>
<td>DimProduct</td>
<td>ProductCategoryName</td>
</tr>
<tr>
<td></td>
<td>ProductSubcategoryName</td>
</tr>
<tr>
<td></td>
<td>Color</td>
</tr>
<tr>
<td></td>
<td>ProductLine</td>
</tr>
<tr>
<td></td>
<td>CLASS</td>
</tr>
<tr>
<td></td>
<td>Style</td>
</tr>
<tr>
<td>DimSalesTerritory</td>
<td>City</td>
</tr>
<tr>
<td></td>
<td>StateProvinceCode</td>
</tr>
<tr>
<td></td>
<td>StateProvinceName</td>
</tr>
<tr>
<td></td>
<td>CountryRegionCode</td>
</tr>
<tr>
<td></td>
<td>LocationName</td>
</tr>
<tr>
<td></td>
<td>GROUP</td>
</tr>
</tbody>
</table>
Lookup Columns within the example architecture
Leading in Business Open Source solutions and consulting
it-novum is the leading IT consultancy for Business Open Source in the German-speaking market. Founded in 2001, it-novum today is a subsidiary of the publicly-held KAP Beteiligungs-AG.
We operate with 85 employees from our main office in Fulda and branch offices in Düsseldorf, Dortmund, Vienna and Zurich, serving medium-sized enterprises as well as large companies in the German-speaking markets.
it-novum is a certified SAP Business Partner and longtime accredited partner of a wide range of Open Source products. We mainly focus on the integration of Open Source with Closed Source and the development of combined Open Source solutions and platforms.
Thanks to its ISO 9001 certification, it-novum is one of the few Open Source specialists able to demonstrate the business suitability of its solutions according to international quality standards.
More than 15 years of Open Source project experience
- Our portfolio contains a wide range of Open Source solutions within the applications and infrastructure area as well as own product developments which are well-established in the market.
- As an IT consulting company with profound technical know-how in the Business Open Source area, we differentiate ourselves from the big solution providers' standard offerings: our solutions are not only scalable and flexible but also integrate seamlessly into your existing IT infrastructure.
- We can assemble multidisciplinary project teams, consisting of engineers, consultants and business data processing specialists. Thus we combine business know-how with technological excellence to build sustainable business processes.
- Our target is to provide you with a high-quality level of consulting during all project phases – from the analysis and conception up to the implementation and support.
- As a decision-making basis prior to the project's start, we offer you a Proof of Concept. Through a real-case simulation and a working prototype you can decide on new software without taking any risks. Moreover, you benefit from:
- Security and predictability
- Clear project methodology
- Sensible calculation
Your contact person for Business Intelligence and Big Data:
Stefan Müller
Director Big Data Analytics
✉ stefan.mueller@it-novum.com
📞 +49 (0) 661 103 942
it-novum GmbH Germany
Headquarters Fulda: Edelzeller Straße 44 · 36043 Fulda
Phone: +49 (0) 661 103 333
Branches in Düsseldorf & Dortmund
it-novum branch Austria
Ausstellungsstraße 50 / Zugang C · 1020 Vienna
Phone: +43 1 205 774 1041
it-novum GmbH Switzerland
Hotelstrasse 1 · 8058 Zurich
Phone: +41 (0) 44 567 62 07
Keywords: Audio and Video Encoding, Multimedia Terminal, Remote Management, Open Source Software
Abstract: This paper presents a multimedia platform for digital television built from existing single-purpose open-source solutions. The advantage of the proposed platform over existing ones is the possibility of integrating new features or changing existing ones (e.g., the recording format and coding standard). In this sense, the proposed terminal natively supports: (i) multi-user remote management; (ii) broadcast of recorded contents to remote devices, such as laptops and mobile phones; (iii) broadcast of real-time TV or video/surveillance cameras.
1 INTRODUCTION
Nowadays, the commercial trend for multimedia terminals is all-in-one solutions that are able to display real-time TV shows, record TV programs and manage other multimedia contents. However, upcoming multimedia services will soon integrate other features, such as video-call or home surveillance, which will require more customizable platforms where the user has the possibility to add new features.
Some platforms for the distribution of multimedia contents already exist. Commercial solutions, such as Microsoft Mediaroom [17], do not allow any change due to proprietary rights. Open-source solutions, such as Linux TV [15], may have large support for devices and formats, but require programming knowledge from the end-user, making them hard to customize or change.
In contrast, the proposed platform allows different inputs, from satellite broadcasts to video cameras or surveillance cameras. The produced content is handled by an encoding and streaming server, allowing different types of devices to access the service at no extra cost for the user other than the Internet connection. Moreover, strictly personal usage is ensured by convenient security (using the SSL/TLS protocols) and user authentication mechanisms.
To summarize, the proposed system is characterized by offering support for:
- input devices: satellite, video or radio antenna, cable TV, IPTV;
- output devices: TV, computer, laptops, mobile devices, Hard Disk Drive (HDD) (for recording purposes), and others (e.g., tablets), through a local connection, LAN and/or the Internet.
Furthermore, it supports the following core services:
- **TV Streaming** – after the TV signal is acquired, it may be displayed in real time, even while being recorded. This reproduction of the audio/video is usually denoted as streaming: media is constantly received and displayed to the end-user through an active Internet connection, whether or not the user is next to the computer where the signal is acquired.
- **TV Recording** – this basic functionality provides the ability to remotely manage the recording of any TV show or program that, for some reason, cannot be watched at the time.
- **Video-call** – since the audio/video acquisition is implemented for the TV Streaming functionality, setting it up for a web-cam and microphone represents a small change in the system. This way, the video-call can also be offered as a service.
2 RELATED WORK
Several tools that implement the previously presented features already exist independently, but with no connectivity between them. The main difference is that the proposed platform integrates several independent solutions into a single framework. Other differences are:
- Some proprietary solutions cannot be modified without violating intellectual property.
- Many software tools have a complex interface and are suitable only for experienced users or users with some programming knowledge. In some cases, this is due to the fact that these tools support many more features and configuration parameters than what is expected in an all-in-one multimedia solution.
- Some TV applications cover only DVB, with no analog support provided.
- Most applications only work in specific world areas (e.g., USA).
- Some applications only support a limited set of devices.
A new emergent area is IPTV (Internet Protocol Television), with several solutions being developed on a daily basis, all of them IP-based. One example is the Personal TV framework presented in [1], whose main goal is the design of a framework for personalized TV services over IP.
The solution presented in this paper differs from Personal TV [1] in several aspects: it is implemented on top of existing open-source solutions; it is intended to be easily modifiable; it aggregates several multimedia functionalities, such as video-call and content recording; and it serves the user with several different video formats (the video is streamed in the WebM format, but it is possible to download recorded content in other formats after requesting the platform to re-encode it).
In the following, a set of existing platforms is presented. The existence of other small applications intended only for video display (e.g., TV players such as Xawtv [6]) should also be noted. However, no existing solution covers all the features offered by the proposed one.
**Commercial solutions.** Several commercial solutions exist, but none is legally modifiable. GoTV [9] is proprietary, paid software that offers TV viewing specifically on mobile devices (e.g., Android, iPhone) and only works in the USA. Microsoft MediaRoom [16] is a proprietary, paid IPTV service available to television providers. Many providers use this software (including the Portuguese MEO and Vodafone) and it is accessible through a large set of devices (personal computers, mobile devices, TVs, Microsoft Xbox 360). GoogleTV [7] is an IPTV service for Android-based systems. It is an all-in-one solution that allows developers to add new features through the Android Market, and it currently works only on selected Sony televisions and set-top boxes. NDS MediaHighway [20] is a platform adopted worldwide by many set-top boxes (e.g., the Portuguese Zon provider). The difference between NDS and MediaRoom is that NDS supports DVB (terrestrial, satellite and hybrid), while MediaRoom does not.
All of the above commercial solutions have similar functionalities and charge for their usage. Some support a great number of devices (MS MediaRoom), while others specialize in one kind of device (e.g., GoTV on mobile devices). None of them offers support for video-conferencing, either as an add-on or within the normal service.
**Free/open-source software.** Linux TV [15] is a repository of several tools that offers vast support for many kinds of TV cards and broadcast methods. Using the Video for Linux (V4L) driver [2], it is possible to watch TV from all kinds of DVB sources, but not from analog TV broadcast sources. The problem with this solution is that, for a regular user with no programming knowledge, it is hard to set up any of its tools. Video Disk Recorder (VDR) [14] is another open solution for DVB, with the common options (regular playback, recording and video editing); however, it requires some programming knowledge. Kastor! TV (K!TV) [22] is an open solution for MS Windows to view and record TV content from a video card. MythTV [19] is free open-source software for digital video recording (DVR); it has a vast support and development team, and any user can modify/customize it at no cost.
In general, the existing open-source software offers functionalities similar to the proposed solution. The major restrictions of these solutions are that: the user needs some programming knowledge (Linux TV); the acquired content can only be viewed on the machine where the signal is acquired (VDR); or, where remote usage is possible (MythTV), the remote user needs specific software installed in order to properly view the remote content.
3 ARCHITECTURE
The proposed architecture is based on existing single-purpose open-source software tools and was defined to make it easy to manipulate, remove or add features and hardware components. The core functionalities are:
- **Video Streaming**, allowing real-time reproduction of audio/video acquired from different sources (e.g., TV cards, video cameras, surveillance cameras). The media is constantly received and displayed to the end-user through an active Internet connection.
- **Video Recording**, providing the ability to remotely manage the recording of any source (e.g., a TV show or program) in a storage medium;
- **Video-call**, considering that most TV providers also offer their customers an Internet connection, which can be used together with a web-camera and a microphone to implement a video-call service.
The conceived architecture adopts a client-server model. The server is responsible for signal acquisition and management of the available multimedia sources (e.g., cable TV, terrestrial TV, web-camera, etc.), as well as the reproduction and recording of the audio/video signals. The client application is responsible for the data presentation and the user interface.
Fig. 1 illustrates the architecture as a structured set of layers. This structure has the advantage of reducing conceptual and development complexity, allows easy maintenance, and permits adding and/or modifying features.
Common to both the server and the client side is the presentation layer. The user interface is defined in this layer and is accessible both locally and remotely. Through the user interface it is possible to log in as a normal user or as an administrator. The common user uses the interface to view and/or schedule recordings of TV shows or previously recorded content, and to make video-calls. The administrator interface allows administration tasks, such as retrieving passwords, disabling or enabling user accounts, or even managing channels.
3.1 Server Side
As shown in Fig. 1, the server is composed of six main modules:
- **Signal Acquisition And Control (SAAC)**, responsible for the signal acquisition and channel change;
- **Encoding Engine**, which is responsible for encoding the audio and video data with the selected profile, i.e., with different encoding parameters;
- **Video Streamer Engine (VSE)**, which streams the encoded video through the Internet connection;
- **Scheduler**, responsible for managing multimedia recordings;
- **Video Recorder Engine (VRE)**, which records the video to the local hard drive for later visualization, download or re-encoding;
- **Video-Call Module (VMC)**, which streams the audio/video acquired from the web-cam and microphone.
Following a bottom-up approach, the server-side modules presented in Fig. 1 are described in the next subsections.
3.1.1 Signal Acquisition And Control
The SAAC Module is responsible for the signal acquisition and control. The video/audio signal can be acquired from multiple hardware (HW) sources (e.g., TV card, surveillance camera, web-cam and microphone, DVD) and in different formats. The upper modules should not be concerned with how the information is provided or encoded; the SAAC Module is therefore responsible for providing a standardized means for them to read the acquired information.
3.1.2 Encoding Engine
The Encoding Engine is composed of the Audio and Video Encoders, whose configuration options are defined by the Profiler. The signal acquired from the SAAC Module needs to be encoded into the requested format for subsequent transmission.
The Audio & Video Encoder Modules are used to compress/decompress the multimedia signals being acquired. The compression is required to minimize the amount of data to be transferred, so that the user can experience a smooth audio and video transmission. Both audio and video encoder modules should be implemented separately, in order to easily allow the integration of future audio or video codecs into the system.
The Profiler is the module that specifies the parameters for audio and video encoding. It is represented as an independent unit, although it could be integrated into the database.
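As an illustration, the Profiler can be thought of as a table of encoding parameter sets. In this Ruby sketch, only the medium bit-rates (400 kbit/s video, 64 kbit/s audio) and the 4CIF/CIF/QCIF sizes come from the paper; the profile names, the remaining values and the helper are assumptions:

```ruby
# Illustrative encoding profiles for the Profiler module.
# Only the "medium" bit-rates are taken from the paper's evaluation;
# "low" and "high" are hypothetical.
PROFILES = {
  low:    { resolution: "QCIF", video_kbps: 200, audio_kbps: 32 },
  medium: { resolution: "CIF",  video_kbps: 400, audio_kbps: 64 },
  high:   { resolution: "4CIF", video_kbps: 800, audio_kbps: 96 },
}.freeze

def profile_for(name)
  PROFILES.fetch(name) { PROFILES[:medium] } # fall back to medium quality
end
```

Storing the table as data rather than code is what would allow it to be moved into the database, as the paper suggests.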
3.1.3 Video Streamer Engine
The VSE component is responsible for streaming the captured audio/video data provided by the SAAC module, as well as any previously recorded content. It may also stream the web-camera data when the video-call scenario is considered.
3.1.4 Video Recorder Engine
The VRE is the unit responsible for recording the audio/video data coming from the available source. There are several recording options, but the recording procedure is always the same. First, it is necessary to specify the input channel to record, as well as the start and end times. Then, according to the Scheduler status, the system decides whether the recording is acceptable. Finally, it tunes the required channel and starts the recording with the defined quality level.
3.1.5 Scheduler
The Scheduler component manages the operations of the VSE and VRE and is responsible for scheduling the recording of any specific audio/video source. Consider a system asked to acquire multiple video signals at the same time with only one TV card (multiple simultaneous recordings on different channels). This behavior should not be allowed, because with a single TV card it would lead to unexpected/undesired results; it could, however, be supported if the system had several input devices. To prevent these undesired situations, a set of policies has to be defined in the Scheduler. These policies may be, for example: the first recording takes precedence, and it is not possible to record multiple sources (from the same TV card) at the same time. The policies may be defined in a bash file created for this purpose.
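The single-card policy can be sketched in a few lines of Ruby. The `Recording` struct and method names are hypothetical; the point is only that a request overlapping in time with an accepted recording on a *different* channel must be rejected, since one TV card cannot tune two channels at once:

```ruby
# Reject a request that overlaps an accepted recording on another
# channel; same-channel overlaps are allowed (they share the tuner).
Recording = Struct.new(:channel, :start_time, :end_time)

def acceptable?(request, scheduled)
  scheduled.none? do |r|
    overlap = request.start_time < r.end_time && r.start_time < request.end_time
    overlap && r.channel != request.channel
  end
end
```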
3.1.6 Video-Call Module
The process that implements this feature is very similar to the one that implements the audio and video streaming feature; the difference lies in the HW from which the signals are acquired. Hence, the VCM is composed of: a module for video and audio acquisition, like the SAAC; a module for audio and video encoding according to the defined profile (Encoding Engine); and a module to transmit the encoded stream (VSE).
3.2 Client Side
In the client side there are two main modules:
- **Browser and required plug-ins**, in order to correctly display the streamed and recorded video;
- **Video-Call module**, to acquire the local video+audio and stream it to the corresponding recipient.
3.2.1 Browser and required plug-ins
The base software on the client side is a Web browser and, if necessary, a plug-in for audio/video display. One of the main concerns is to support as many Web browsers as possible, provided that plug-ins are available for proper functioning.
3.2.2 Video-Call Module
In order for the client to support the VCM, it is necessary to install software (equivalent to the server side) for the acquisition, encoding and transmission of the local web-cam and microphone sources. The operation mode is the same as described for the server-side VCM.
4 IMPLEMENTATION
The whole system was implemented on the Linux Ubuntu operating system, using a set of open-source packages for the multimedia services. The installed open-source software packages were: GStreamer [10] core and its base, good, ugly and bad plug-ins, to support Flumotion [5]; libvpx, to add support for WebM's VP8 [26, 12] video format; Flumotion [5]; the Ruby on Rails (RoR) framework [3]; XMLTV [24, 18]; and the latest version of a web browser. For data management, an SQLite database was used, which implements a self-contained, server-less, zero-configuration, transactional SQL database engine.
4.1 User Interface and Authentication
The User Interface (UI) module was developed using the RoR framework [3], an open-source web application development framework that enables agile development methodologies. The UI is accessible through a browser with HTML5 support (e.g., the latest versions of Firefox and Chrome), in order to display the streamed content.
The user authentication was implemented using Devise [21], a flexible, easy to configure and manage authentication solution that is based on Warden [11].
4.2 Audio&Video acquisition, encoding and broadcasting
The audio and video acquisition (SAAC) is implemented by Flumotion, which acquires the signals from the available HW. It also makes use of several Bash Scripts that tune the HW (TV card) to the selected channel.
The Encoding Engine module is implemented by Flumotion. Flumotion is based on the concept of having one (or more) manager(s), where the tasks are defined, and one (or more) worker(s) associated with the defined tasks. The manager is responsible for managing the workers (i.e., starting and stopping them). Both managers and workers are defined in an XML document, following a structure defined by the Flumotion software. To implement the Encoding Engine, the following tasks were defined in Flumotion:
- **Producer**, responsible for producing stream data (usually in a raw format) and to feed it to other components.
- **Consumer**, which consumes the stream data. It might stream a feed to the network, making it available to the outside world, or it could capture a feed to disk.
- **Converter**, which converts stream data. It can: encode or decode a feed; combine feeds to make a new feed (e.g., mux audio and video feeds); or change the feed content (e.g., take a master feed and a backup feed and, if the master feed stops supplying data, output the backup feed).
Thus, the producer component acquires the data from the HW and passes it to a converter component for encoding (the video data may first go through a scaling converter if it is necessary to scale down the frame); the resulting streams are then passed to the converter responsible for muxing and, finally, to the consumer that broadcasts them. The audio stream is encoded with the Vorbis codec and the video with the VP8 codec. It is at this stage that the profiles are applied: video acquired in a large format (4CIF) can be scaled down to CIF or QCIF and encoded to VP8 with different parameters, according to the desired quality. Once the two streams are in the desired formats (Vorbis for audio and VP8 for video), they are muxed into the WebM container and streamed to the web through the Real-time Transport Protocol (RTP).
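The producer → converter → consumer dataflow can be illustrated as composed stages. This Ruby sketch only models the chain as functions over a small stream descriptor; the real components are Flumotion tasks configured in XML, so every name here is illustrative:

```ruby
# The chain mirrors: acquire (4CIF raw) -> scale (CIF) -> encode (VP8)
# -> mux with the audio branch (Vorbis) into a WebM container.
scale  = ->(frame)   { frame.merge(size: "CIF") }
encode = ->(frame)   { frame.merge(codec: "VP8") }
mux    = ->(streams) { { container: "WebM", streams: streams } }

raw    = { size: "4CIF", codec: nil }
video  = encode.call(scale.call(raw))
audio  = { codec: "Vorbis" }          # produced by the audio branch
output = mux.call([video, audio])
```

Modelling each stage as an independent function reflects why Flumotion can recombine the same components into different pipelines (e.g., skipping the scaler for the 4CIF profile).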
Mapping the described implementation onto the architecture of Fig. 1, Flumotion covers the SAAC module, the Encoding Engine and the VSE module. The management of the Flumotion manager and workers is done by Bash scripts.
There is a single manager in which all the described tasks (signal acquisition, encoding and broadcasting) are defined, associated with different workers. Several workers are used because this simplifies the management process in the scripts. When a request to change the channel is made, the following procedure is performed:
1. the system verifies that the user who requested the change has permission for this action. Permission is currently based on who was using the system first, but special permissions can be granted to users;
2. assuming that the user can change the channel, the worker which was acquiring the video stream from the HW is terminated;
3. the script to change the channel is invoked with the channel code. The codes are defined in the database and were acquired using the XawTV software;
4. the video acquisition worker is launched again;
5. the web page is reloaded and the new channel content is displayed.
To control the workers and the manager, a file is created when each is launched to keep track of its PID. This is useful for restarting workers and for changing the channel.
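The PID-file bookkeeping can be sketched as follows. The directory, file naming and helper names are assumptions; the probe uses signal 0, which tests process existence without actually signalling it:

```ruby
require "tmpdir"
require "fileutils"

# Hypothetical PID-file bookkeeping for Flumotion workers: a PID is
# written on launch and probed before restarting or changing channel.
PID_DIR = File.join(Dir.tmpdir, "mm-terminal-pids")

def record_pid(worker, pid)
  FileUtils.mkdir_p(PID_DIR)
  File.write(File.join(PID_DIR, "#{worker}.pid"), pid.to_s)
end

def read_pid(worker)
  path = File.join(PID_DIR, "#{worker}.pid")
  File.exist?(path) ? File.read(path).to_i : nil
end

def running?(worker)
  pid = read_pid(worker)
  return false unless pid
  Process.kill(0, pid)   # signal 0: existence probe only
  true
rescue Errno::ESRCH
  false
end
```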
4.3 Recording Management
The VRE is also implemented by Flumotion software. The streamed video and audio can be recorded to disk by a consumer component. This task is defined in the manager XML file and associated to a worker.
To evaluate whether a recording is valid, it has to pass the Scheduler tests. The Scheduler has a set of rules to evaluate a request:
a) there cannot be two simultaneous recordings on different channels (unless two or more TV cards are available);
b) it is possible to record subsets of a previously defined recording;
c) while a recording is ongoing, it is impossible to change the channel;
d) recording always has the highest priority: if a user is watching a different channel before the recording, the channel will be changed to the one defined in the recording when it starts; this ensures a first-come, first-served priority system.
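Rules (c) and (d) together amount to a guard on the channel-change path, which can be sketched in Ruby. The state hash and method names are hypothetical; in the real scripts the "recording in progress" check is done against the recording worker's PID file:

```ruby
# Refuse a user channel change while a recording is in progress
# (rule c); a starting recording may force the change (rule d).
def change_channel(state, requested, force: false)
  return [:refused, state] if state[:recording] && !force
  [:ok, state.merge(channel: requested)]
end
```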
The implementation of these rules is done by Ruby scripts, which simplify this task. When a recording is classified as valid, the request is inserted into the database and a job is added to the system job queue through the at command. When a scheduled job starts, the following procedure is performed:
1) verify whether a recording is already ongoing (by checking the PID file of the recording worker);
2) if not:
• change to the channel defined in the recording (if necessary);
• launch the recording worker;
• wait until the recording time ends;
if a recording is already ongoing, the running recording worker is reused;
3) when the recording time ends, there are two different scenarios:
• no other recording is in progress: the recording worker is terminated, and the resulting file is renamed, copied to the videos folder and added to the database. At this point the user is able to view the recording;
• other recordings are in progress: a subset of the file content is copied to the videos folder using the FFmpeg tool [4] (the start and end times are passed as parameters to extract the subset) and the video is added to the database.
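The subset extraction can be illustrated by the FFmpeg invocation the script might build. The exact flags (`-ss`/`-t` with stream copy) are an assumption, since the paper only states that the start and end times are passed as parameters:

```ruby
# Build (but do not run) an FFmpeg command that cuts the interval
# [start_s, end_s) out of a recording without re-encoding it.
def subset_command(src, dst, start_s, end_s)
  ["ffmpeg", "-ss", start_s.to_s, "-i", src,
   "-t", (end_s - start_s).to_s, "-c", "copy", dst]
end
```

Stream copy avoids a re-encode, which matters when the cut happens while other recordings are still loading the CPU.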
The recorded content may then be downloaded from the server. The original format of the recorded file is VP8+Vorbis in a WebM container (this is the streamed format, so no extra complexity is added to the recording process). However, a re-encoding option is available: the user may transcode the resulting file into the H.264 video format and AAC audio format, or even add other formats. This functionality (the media manipulation and coding blocks) is implemented using the FFmpeg tools.
4.4 Video-Call
Considering its personal-use nature, with a strict peer-to-peer topology, the video-call module was implemented by reusing the Flumotion streaming server to broadcast the audio/video acquired from the local web-camera and microphone. This feature allows two or more users to share the streamed content among them. For this, each user needs the Flumotion software installed. After running the Flumotion server and setting up the stream of the local web-cam and microphone, the application provides a URL, which should be exchanged between the users and inserted into the Local and Remote fields in the Multimedia Terminal web page. After the insertion of the two URLs, the terminal presents the local and remote users in the same window, like a traditional video-call. This is a rudimentary solution that requires some future work. Future extensions are the integration of this feature with existing messaging programs, as well as encoding using a video-call protocol such as the Session Initiation Protocol (SIP) [25] or H.323 [13].
4.5 Programming Guide
A feature that was also implemented to add extra functionality to the developed software was the addition of an Electronic Program Guide (EPG) [8]. The EPG was implemented using the XMLTV software [24], which connects to an Internet host and retrieves several XML files (XMLTV [18]), one for each channel available in the country (Portugal; several countries are available). This information is used to display to the user the current and next shows on the viewed channel, and it also allows the user to set recordings while viewing the show list for a selected channel.
5 EVALUATION
This section presents the evaluation of the developed solution from three different perspectives: (i) supported devices and Operating Systems (OS); (ii) resource usage; (iii) usability and modifiability.
5.1 Supported Devices and OS
The server software needs a Unix-based OS. Currently the server runs on Ubuntu 10.04 LTS Desktop Edition, but any other Unix OS that supports the software described in the implementation section should also support the developed solution.
For user interaction, the solution was tested under Firefox, Google Chrome, Chromium, Konqueror, Epiphany and Opera (latest versions). All of these Web browsers support the developed software with no need for extra add-ons. The latest versions of MS Internet Explorer and Apple Safari also support it, but they require the installation of a WebM plug-in in order to display the streamed content. Any device with Android OS 2.3 or later should also offer full support. The user interface is shown in Fig. 2.
5.2 Resource Usage
The CPU computational load is mainly due to audio and video acquisition, scaling, encoding and broadcasting. The server on which this evaluation was conducted was a dual-core AMD Opteron 170 (both cores were used during the process) with 2 GiB of memory. The CPU usage was measured in three different stages: no clients, one client and ten clients (this being a solution for personal usage, ten people represent the size of a ten-member family, which is uncommon), with the medium quality profile (video bit-rate 400 kbit/s and audio bit-rate 64 kbit/s). The results were the following:
- with no clients, the total CPU usage was around 33%;
- with one client, the total CPU usage was 33.5%;
- with ten clients, the total CPU usage was 38.5%.
5.3 Usability and Modifiability
After inquiring several users about the usability of the developed solution (the analysis was conducted with several families: one with six members, three with four members, and 12 single users), the results were that the user interface was easy to use (90% of a universe of 30 people), the recording functionality was also easy and intuitive to use (87%), and the video-call was clear to use after a brief explanation (70%). However, the current video-call implementation was not the most suitable for the users; improvements to it are described in the Conclusions section.
Regarding user modifiability, the conclusions were drawn after explaining to the users how the system was designed and how RoR works. The reaction to the developed solution was:
- in order to easily perform modifications, a user modification manual with the implementation details is required;
- the users agreed that following RoR good practices (e.g., obvious names for functions, clear and intuitive names for views and controllers, the organization of the project) was very useful for further development;
- users with little programming knowledge considered the programming language quite easy to understand and felt that, given the developed features, it would be easy to add others by simply editing the existing code (RoR, Bash scripts and the XML description of the Flumotion server).
Figure 2: User streaming interface.
As an example, to add a video surveillance feature, the modifications would be: replicate the existing manager XML code; edit the video acquisition source, the workers' names and the used ports; reuse the video streaming web page, editing the streaming URL; and add a link to the new video surveillance feature page in the global menu.
6 CONCLUSIONS
The proposed application comes with some base features: TV streaming, TV recording and video-call. Nevertheless, its modular structure was designed to easily support other features (e.g., video surveillance services). The architecture is based on a client-server model. The server application runs on a Linux platform, where the signal is acquired, and it is possible to select several sources (e.g., TV card, web-cam, microphone, surveillance cameras); it can be accessed remotely over the Internet. Due to the incorporated access control and authentication mechanisms, the application can be used in a multi-user environment, with an easy and intuitive user interface, while allowing an experienced user to modify any aspect he needs, either by programming or by editing the configurations of the provided features.
Some future work should be considered regarding the video-call functionality. Currently, users have to set up the audio and video streaming using the Flumotion tool and, after creating the stream, share the URL through other means (e.g., e-mail, instant messaging). This limitation may be overcome by incorporating a chat service, allowing users to chat among themselves and providing a means to share the URL for the video-call. Another solution is to implement a video-call based on protocols such as the SIP signaling protocol, widely used for controlling communication sessions such as voice and video calls over the Internet Protocol (IP), or the H.323 standard, which addresses call signaling and control, multimedia transport and control, and bandwidth control for point-to-point and multi-point conferences.
REFERENCES
SOFTWARE END-USER LICENSE AGREEMENT (EULA)
ATTENTION: YOU MAY NEED TO SCROLL DOWN TO THE END OF THIS EULA BEFORE YOU CAN AGREE TO THESE TERMS AND CONTINUE WITH THE SOFTWARE INSTALLATION.
IMPORTANT: THIS END USER LICENSE AGREEMENT ("EULA" or "AGREEMENT") IS A LEGAL AGREEMENT BETWEEN THE PERSON, COMPANY, OR ORGANIZATION THAT HAS LICENSED THIS SOFTWARE ("YOU" OR "LICENSEE") AND FEI SAS, A PART OF THERMO FISHER SCIENTIFIC LOCATED AT 39 RUE D’ARMAGNAC, IMM E2 - QUAI 8.2, 33800 BORDEAUX, FRANCE ("COMPANY"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING THE SOFTWARE, BY INSTALLING AND/OR USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO BE BOUND BY THESE TERMS, OR DO NOT HAVE AUTHORITY TO AGREE TO THESE TERMS, THEN DO NOT INSTALL OR USE THE SOFTWARE AND RETURN THE SOFTWARE TO YOUR PLACE OF PURCHASE.
1. Definitions.
(a) "Software" means one or more versions of Open Inventor® and any extensions, ImageDev and its extensions, or one or more versions of Avizo, AvizoTo2D, Avizo Trueput and any extensions (other than AvizoToGo), or one or more versions of Amira, Amira2D and any extensions, or Visilog and any extensions, or PerGeos and any extensions supplied by Company, and corresponding documentation, associated media, printed materials, and online or electronic documentation. For purposes of this Agreement, Software includes any updates to the Software which you are entitled to receive.
(b) "Licensee Network" means the network of computers owned, leased or otherwise controlled by Licensee, to which access is limited to authorized individuals or computers, such as a local area network, intranet or virtual private network.
(c) "License Key", code provided by Company to Licensee to activate the Software.
(d) "Error Correction" means computer code which corrects an error in the Software but which cannot be executed independently of the Software.
(e) "Software Update", means major (new features) or minor (bug fixes) release of the same software for which you currently have a license.
(f) "Software Upgrade" means: Different software of the Open Inventor product Family, more fully featured, than software for which you currently have license, as well as any purchase of additional license rights (e.g. a migration from a Node-locked License to a Floating License).
(g) "SDK" (Software Development Kit) means a static, non-linkable version of the Software, embedded in an Application Software, only in a binary non-linkable form that is not directly accessible to either the sub users or the end users of the Application Software.
(h) "Runtime" means a static, non-linkable version of the Software, embedded in an Application Software, only in a binary non-linkable form that is not directly accessible to either the sub users or the end users of the Application Software.
(i) "Licensee Application Software" means executable computer program, built using an SDK, and embedding a Runtime, by means of linkage or binding with the user-proprietary code.
(j) "Cloud Service" means an internet-accessible service maintained by Company or a third party contracted by Company by which Company may access certain information relating to maintenance and support of the Software.
2. License Grants
Company grants you the right to use the number of copies of the Software as specified on your contract or invoice, and for which you have paid the applicable license fees, under the following conditions:
(a) Academic License: If Company identifies a Software license as an Academic License, the Licensee must be an academic institution or other qualifying non-profit organization and may use up to the maximum number of copies of the Software that have been validly obtained pursuant to the License. Software provided through an Academic License may only be used for "Academic Use," which means use (i) by an individual employed by (or, with respect to academic institutions, enrolled in a course of study at) an accredited academic institution, organized and operated exclusively for the purpose of education or research, (ii) at the location of such academic institution; and (iii) solely for purposes directly related to teaching, training, degree-granting programs, and research and development that are part of the instructional functions of the institution. Without limiting the foregoing, Academic Licenses may not be used for commercial, professional or productive purposes, for commercial training or any other for-profit purposes.
(b) Node-locked License: a license to the Software limited to use on the single computer owned, leased or otherwise controlled by Licensee on which the Software is initially installed and for which a license key has been issued. You may only install the Software for use on one platform or operating system.
(c) Floating License: a license to the Software limited to use on the Licensee Network on which the Software is initially installed, connected to a server for which a Floating License Key has been issued for a specific maximum number of simultaneous users, or "Network License Seats." Company will provide to Licensee a License Key that will unlock the usage of the Software for a specific maximum number of Network License Seats. Provided that such option is made available by Company or purchased by the Licensee, one or more Network License Seats may be allocated for use on a computer temporarily disconnected from the Licensee network, for remote use for up to 180 days (or 6 months), as long as the allocated seat is unavailable for use on the Licensee Network. Company provides options to use Floating License on a LAN (Local Area Network) or on a WAN (Wide Area Network).
(d) Trial Version: a license of the Software, so identified, to be used only to review, demonstrate and evaluate the Software for a limited time period. The Trial Version may have limited features, may lack the ability for the end-user to save the end product, and will cease operating after a predetermined amount of time due to an internal mechanism within the Trial Version. You may not: (A) install or use more than one copy of a Trial Version of the Software; (B) download the Trial Version of the Software under more than one username; (C) alter the contents of a hard drive, operating system or computer system to enable the use of the Trial Version of the Software after the trial period expires; (D) disclose the results of software performance benchmarks obtained using the Trial Version to any third party without Company’s prior written consent; (E)
use the Trial Version of the Software for a purpose other than the sole purpose of determining whether to purchase a license to a commercial or academic version of the software; or (F) provide, install or use the Trial Version of the Software for any commercial training purpose.
(e) Developer or SDK License: a license of the Software, so identified, to be used for internal development of Licensee’s own application software product created using the Software (“Licensee Application Software”). Licensee is solely responsible for reliability and accuracy of any program output, including Licensee Application Software developed with the Software.
(f) Developer Academic License: If you entered into a specific agreement with Company (e.g. “Open Inventor Academic Program”) that entitles you to a Developer Academic License, the following additional terms apply in addition to the Academic License terms above. (A) Non-commercial Distribution of Licensee Application Software and Runtimes under Developer Academic License. If Licensee qualifies as a Developer Academic License user, all Licensee Application Software developed or otherwise created by the Licensee using the SDK, and which embeds a Runtime, must be distributed free of charge, only within the context of its use for educational or research purposes, and must not generate any commercial revenue or be deployed by a corporation for its in-house use or be used in any other commercial manner. (B) The Developer Academic License does not grant rights to the Licensee to distribute the Software other than in Runtime form. (C) If applicable, Licensee must enter into a commercial licensing agreement with Company prior to distributing the Licensee Application Software for in-house use within a commercial enterprise or for any commercial purpose, including without limitation revenue generation. (D) The Developer Academic License does not grant rights to any Update, Upgrade, Maintenance or Support service.
The Software includes components provided by licensors to Company (“Third Party Licensors”), and may also include Open Source Software (“OSS”) components. Licenses from Third Party Licensors may have enforceable rights in the components included in the Software and may be able to enforce such rights directly against Licensee. Company’s warranty and indemnity obligations do not apply to third party components to the extent that (i) the third party license to Company requires that such software is distributed without warranty and/or (ii) the components are OSS.
4. Permitted Use.
(a) You may make one copy of the Software in machine-readable form solely for backup purposes. You must reproduce on any such copy all copyright notices and any other proprietary legends on the original copy of the Software. You may not sell or transfer any copy of the Software made for backup purposes.
(b) You agree that Company may audit your use of the Software for compliance with these terms at any time, upon reasonable notice. In the event that such audit reveals any use of the Software by you other than in full compliance with the terms of this Agreement, you shall reimburse Company for all reasonable expenses related to such audit in addition to any other liabilities you may incur as a result of such noncompliance.
(c) Your license rights under this EULA are nonexclusive, nontransferable, and non-assignable.
(d) Mandatory Product Activation. Any license rights granted under this Agreement may be limited to the first thirty (30) days after you first install the Software unless you supply information required to activate your licensed copy in the manner described during the setup sequence of the Software. You may need to activate the Software through the use of the Internet or telephone; toll charges or other provider charges may apply. There are technological measures in this Software that are designed to prevent unlicensed or illegal use of the Software. You agree to follow any requirements regarding such technological measures. You may also need to reactivate the Software if you modify your computer hardware, alter the Software, or install the Software on another computer. Product activation may be based on the exchange of information between your computer and Company. None of this information contains personally identifiable information nor can they be used to identify any personal information about you or any characteristics of your computer configuration.
5. Prohibited Actions.
(a) Other than as set forth in Section 2, you may not make or distribute copies of the Software, or electronically transfer the Software from one computer to another or over a network.
(b) You may not alter, merge, modify, adapt or translate the Software, or decompile, reverse engineer, disassemble, or otherwise reduce the Software to a human-perceivable form or modify the Enhanced Compressed Wavelet (“ECW”) file format in any way, including file conversion application converting ECW files to any other file format.
(c) Unless expressly permitted by Company, you may not rent, lease, or sublicense the Software.
(d) Unless expressly permitted by Company, you may not modify the Software or create derivative works based upon the Software.
(e) Licensee may not use the SDK to develop Licensee Application Software that competes with the Software.
In the event that you fail to comply with this EULA, Company may terminate the license and you must destroy all copies of the Software. All other rights of both parties and all other provisions of this EULA will survive such termination.
6. Software Updates.
If this copy of the Software is an update from an earlier version of the Software, before you may install or use the Software Update, you must: i) possess a valid license of an earlier version of the Software to be updated; ii) your Software must be within the Maintenance Period or you must have a current Maintenance contract. You may continue to use each earlier version copy of the Software to which this update copy relates on your computer after you receive this update copy, provided that, (i) the updated copy and the earlier version copy are installed and/or used on the same computer only and the earlier version copy is not installed and/or used on any other computer; (ii) you comply with the terms and conditions of the earlier version’s end user license agreement with respect to the installation and/or use of such earlier version copy; (iii) the earlier version copy or any copies thereof on any computer are not transferred to another computer unless all copies of this update copy on such computer are also transferred to such other computer; and (iv) you acknowledge and agree that any obligation Company may have to support and/or offer support for the earlier version of the Software may be ended upon availability of the update.
7. Software Upgrades.
If this copy of the Software is an upgrade from an earlier version of the Software, you must: (i) possess a valid full license of an earlier version of the Software used to upgrade to this upgrade copy; and (ii) have your License covered by a Maintenance contract, in order to install and/or use this upgrade copy. You may NOT continue to use each earlier version copy of the Software to which this upgrade copy relates. The software upgrade is considered new Software and is subject to the general terms of this Agreement or the End User License Agreement that accompanies the upgrade.
8. **Reservation of Rights.** Title to and ownership of Software, and all proprietary rights or intellectual property rights with respect to the Software, remains exclusively with Company or its licensors. The license does not constitute a sale of the Software or any portion or copy of it. Ownership of the source form of Licensee’s Application Software that makes calls to but does not contain all or any portion of Software remains the property of Licensee.
9. **Confidentiality.** Software is a trade secret and is proprietary to Company. Licensee shall maintain Software in confidence and prevent disclosure of Software using at least the same degree of care it uses for its own similar proprietary information, but in no event less than a reasonable degree of care. Licensee shall not disclose Software or any part thereof to anyone for any purpose, other than to employees or authorized end users for the purpose of exercising the rights expressly granted under this Agreement. The obligation under this Section shall survive any termination of the Agreement.
10. **Warranty.** Company warrants that for a period of thirty (30) days following the date the Software is shipped to Licensee (the “Maintenance Period”), the Software will materially conform to the user manuals and other documentation issued by Company in conjunction with the Software. LICENSEE ACKNOWLEDGES AND AGREES THAT LICENSEE’S SOLE AND EXCLUSIVE REMEDY AND COMPANY’S SOLE AND EXCLUSIVE OBLIGATION FOR ANY BREACH OF THE FOREGOING WARRANTY IS THE MAINTENANCE OBLIGATIONS SET FORTH IN MAINTENANCE SECTION BELOW. EXCEPT FOR THE FOREGOING WARRANTY, COMPANY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY, TITLE, NON-INFRINGEMENT OF THIRD PARTIES’ RIGHTS, AND FITNESS FOR A PARTICULAR USE. WITHOUT LIMITING THE FOREGOING, COMPANY DOES NOT WARRANT THAT THE FUNCTIONS CONTAINED IN SOFTWARE WILL OPERATE IN THE COMBINATION LICENSEE SELECTS OR THAT OPERATION OF SOFTWARE WILL BE UNINTERRUPTED OR ERROR-FREE.
11. **Liability Limitations.** Company AND ITS LICENSORS SHALL NOT BE LIABLE FOR ANY SPECIAL, INDIRECT, PUNITIVE, OR CONSEQUENTIAL DAMAGES RESULTING FROM USE OF SOFTWARE OR FOR THE RESULTS OBTAINED THROUGH THE USE OF THE SOFTWARE, INCLUDING ANY LICENSEE APPLICATION SOFTWARE. COMPANY’S CUMULATIVE LIABILITY FOR DAMAGES HEREUNDER, WHETHER IN AN ACTION IN CONTRACT, WARRANTY, TORT, NEGLIGENCE, STRICT LIABILITY, INDEMNITY, OR OTHERWISE, SHALL IN NO EVENT EXCEED THE AMOUNT OF LICENSE FEES PAID BY THE LICENSEE FOR THE SOFTWARE LICENSED UNDER THIS AGREEMENT. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY LIMITED REMEDY.
12. **Defense.** Company will defend or settle any action brought against Licensee to the extent based on a claim that Software, used within the scope of the license, infringes any U.S. copyright and will pay the cost of any final settlement or judgment attributable to such claim provided Licensee (i) gives notice to Company of such action within 10 days of Licensee being aware that such action has commenced or is threatened, (ii) allows Company to be in a position to control the defense at its discretion in relation to this action, and any settlement negotiations, and (iii) cooperates with Company in the defense or settlement of such action. If Company believes Software is likely to be the subject of an infringement claim, it may elect to obtain for Licensee a license to continue using Software, replace or modify it to make it non-infringing or terminate the Agreement on written notice to the Licensee. Company shall have no obligation to defend (or any other liability) to the extent any claim involves anything other than the current, unaltered Software release if such would have avoided infringement or use of Software in combination with non-Company programs or data. In addition, Company will have no obligations hereunder if Licensee continues using Software although it has been informed by Company of an allegation that Software is infringing the abovementioned copyright. The foregoing states the entire obligation and liability of Company with respect to any infringement by Software of any intellectual property rights or other proprietary rights of Licensee or a third party.
13. **Termination.** This Agreement and the license may be terminated without fee reduction (i) by Licensee without cause on 30 days notice; (ii) by Company, in addition to other remedies, if Licensee is in default and fails to cure within 30 days following notice; (iii) on notice by either party hereto if the other party ceases to do business in the normal course, becomes insolvent, or becomes subject to any bankruptcy, insolvency, or equivalent proceedings. Upon termination for any reason, Licensee shall immediately return Software and all copies to Company and delete all Software and all copies from the Designated Equipment.
14. **Non-Waiver.** The delay or failure of either party to exercise any right provided in the Agreement shall not be deemed a waiver. If any provision is held invalid, all others shall remain in force.
15. **Choice of Law.** This Agreement, interpretation of this Agreement and any claims or disputes arising out of this Agreement shall be governed by the laws of France, exclusive of its conflicts of laws provisions and without regard to the United Nations Convention on Contracts for the International Sale of Goods. Any suit arising out of or relating to this Agreement shall be exclusively brought in the Bordeaux Court, France. Any action against Company under this Agreement must be commenced within one year after such cause of action accrues.
16. **Notice.** All notices that are required under this Agreement will be in writing and will be considered effective upon receipt, provided that there is proof of delivery by a third party or written acknowledgement by the recipient. The notices addressed to Company shall be sent to its address set out above. The notices addressed to Licensee shall be sent to its address set forth in the applicable price quotation.
17. **Government Restricted Rights.** This provision applies to all Software acquired directly or indirectly by or on behalf of the United States Government. The Software is a commercial product, licensed on the open market at market prices, and was developed entirely at private expense and without the use of any U.S. Government funds. If the Software is supplied to the Department of Defense, the U.S. Government acquires only the license rights customarily provided to the public and specified in this Agreement. If the Software is supplied to any unit or
agency of the U.S. Government other than the Department of Defense, the license to the U.S. Government is granted only with restricted rights. Use, duplication, or disclosure by the U.S. Government is subject to the restrictions set forth in the Commercial Computer Software License clause of FAR 52.227-19. Manufacturer is FEI SAS, a part of Thermo Fisher Scientific, 39 rue d’Armagnac, Imm E2 - Quai 8.2, Bordeaux, F-33800, France.
18. Miscellaneous. This Agreement contains the entire understanding of the parties and supersedes all other agreements, oral or written, including purchase orders submitted by Licensee, with respect to the subject matter covered in this Agreement. Any other terms and conditions contained in a Licensee purchase order will not apply. This Agreement may be modified only by a writing executed by Company and Licensee. Licensee may not assign, pledge, or otherwise transfer this agreement, nor any rights or obligations hereunder in whole or in part to any entity. Paragraph headings are for convenience and shall have no effect on interpretation. In the event that it is necessary to undertake legal action to collect any amounts payable hereunder, Company shall be entitled to recover its costs and expenses including, without limitation, reasonable attorneys’ fees.
19. Maintenance. During the Maintenance Period, Company or its authorized licensee or distributor will provide standard Software maintenance services, as applicable. Software maintenance services consist of (a) the provision of Software updates, (b) the provision of error corrections for the Software, and (c) the provision of Hotline support in connection with the Software. Software maintenance services will be provided in accordance with the terms of any Maintenance Contract to those customers who have purchased maintenance services for the applicable Software. Software maintenance services are, and will continue to be, available under this Agreement only to the extent that these services are made available by Company with respect to the Software, or any portion of the Software, to its customer base in general. Any changes or additions to Software, except changes or additions authorized by Company, as applicable, shall immediately terminate any maintenance obligation to Licensee. At the end of the Maintenance Period, standard Software maintenance services may be provided, as available, in accordance with the then-current terms and charges for Maintenance Services. All notices of Software malfunctions shall be in writing with details sufficient to diagnose or reproduce said failure. Licensee will be responsible for the installation of any Software Updates and Software Upgrades. This Maintenance service does not apply to the Developer Academic License.
20. Export Controls. The Software and all related technical information or materials are subject to export controls and are licensable under the U.S. Government export regulations, as well as similar laws and regulations of other countries (Export Laws). You agree to comply fully with all applicable Export Laws to assure that neither the Software, nor any direct products thereof are (1) exported, directly or indirectly, in violation of Export Laws, or (2) are used for any purpose prohibited by Export Laws. The Software and any related technical information or materials may not be downloaded or otherwise exported or re-exported (i) into any country to which the U.S. has embargoed goods; or (ii) to anyone on the U.S. Treasury Department’s List of Specially Designated Nationals or the U.S. Commerce Department’s Table of Denial Orders. By downloading or using the Software, you are agreeing to the foregoing and you are representing and warranting that you are not located in, under the control of, or a national or resident of any such country or on any such list. Each party shall, at its sole cost and expense, obtain and maintain in effect all permits, licenses and other consents necessary to conduct its respective activities hereunder.
21. Use of Collected Data. Company and our agents may monitor the Software and collect data regarding your use of and the performance and operation of the Software, associated equipment, devices and peripherals, and use such data to provide support to users, detect and address threats to the functionality, security, integrity and availability of the Software, detect and address violations of this Agreement, and improve the Software (“Collected Data”). Collected Data shall exclude any personal information and output data generated by the Software, associated equipment, devices and peripherals. We and our agents will only use Collected Data on your behalf to provide the Software as permitted by applicable law. You hereby grant to Company and our agents a worldwide, royalty-free, fully paid, non-exclusive, license to copy, modify, and distribute internally and to you Collected Data in furtherance of the purposes stated in this Agreement. This license ends when Collected Data is no longer stored with Company. In addition, Company shall have a royalty-free, worldwide, transferable, sub- licensable, irrevocable, perpetual license to use or incorporate into the Software any suggestions, ideas, enhancement requests, feedback, recommendations or other information provided by you relating to the features, functionality or operation of the Software.
ACCUJOB - JOB SEARCH AND OPTIMIZATION WEBSITE
Akhil Chaitanya Ghanta¹, Manish CP², Sanjay Muzumdar³, Dr. Swarnalata P⁴
¹Student, ²Student, ³Student, ⁴Professor
School of Computer Science and Engineering,
Vellore Institute of Technology, Vellore, India
Abstract: In the modern era, the Internet has become a resource so important that we cannot imagine our lives without it, and it serves many functions. One of them is that it can be used effectively for the job-search process: conventional methods should be adapted and moved online. The concepts of software engineering have been applied to develop a website that supports the job-search process and makes it easier and less time consuming than conventional methods. The website gives a person the percentage chance of being recruited and tells them whether they are qualified, under-qualified, or over-qualified for the wide range of jobs they have selected from the website. All stages of the software development process, from requirements specification to testing, have been documented and are summarized in this paper.
Index Terms - internet, recruitment, qualification, job.
I. INTRODUCTION
In today's world, everything is moving online, and making services available online is something everyone is working on. During the pandemic, when people were unable to go out, it was very difficult for them to find jobs; it is at such times that websites or applications are needed to provide job seekers with opportunities to find the jobs they are looking for.
With this in mind, we decided to build a website called "AccuJob". It shows job seekers the list of jobs they are eligible for, helping them find a job quickly, and also identifies the skills they currently have and those they should develop so that they become industry ready.
While building the website, the software development life cycle (SDLC) was followed in order to achieve the project's goals in a systematic manner. The SDLC model chosen was Waterfall, and all of its phases were followed as the project demanded. Following the software development process made the path to the application clear and manageable.
Diving into a project without proper knowledge of the requirements only leads to an improperly functioning application.
II. PROBLEM STATEMENT
AccuJob is a web app that aims to address the uncertainty in the regular job-portal market. By uncertainty we mean, for example, that a candidate applies for a job and then has to wait a long time to hear back from the company, or may never hear back at all. A lot of time is wasted that could have been spent on skill development or further job searching. AccuJob addresses this problem with two algorithms: a scoring algorithm and a sorting algorithm. The scoring algorithm scores an applicant based on the qualifications they enter on the website, and the sorting algorithm then ranks applicants by this score whenever they apply for a specific job. This tells users their chance of securing the job; if they feel it is low, they can apply for another job instead.
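The paper does not specify the scoring and sorting algorithms in detail. As a rough sketch only, with hypothetical field names, weights, and thresholds, the scheme described above could look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    skills: set = field(default_factory=set)
    years_experience: int = 0

@dataclass
class Job:
    title: str
    required_skills: set = field(default_factory=set)
    min_experience: int = 0

def score(applicant: Applicant, job: Job) -> float:
    """Scoring algorithm (illustrative): fraction of required skills held,
    discounted when the experience floor is not met. Returns a value in
    [0, 1] that the site could display as a percentage chance."""
    if job.required_skills:
        skill_match = len(applicant.skills & job.required_skills) / len(job.required_skills)
    else:
        skill_match = 1.0
    experience_factor = 1.0 if applicant.years_experience >= job.min_experience else 0.5
    return skill_match * experience_factor

def rank_applicants(applicants: list, job: Job) -> list:
    """Sorting algorithm: rank applicants for a job by descending score."""
    return sorted(applicants, key=lambda a: score(a, job), reverse=True)

def qualification_band(applicant: Applicant, job: Job) -> str:
    """Map a score to the qualified/under-/over-qualified labels the site
    reports. The 0.5 cutoff and 2x-experience rule are assumptions."""
    s = score(applicant, job)
    if s < 0.5:
        return "under-qualified"
    if applicant.years_experience > 2 * max(job.min_experience, 1):
        return "over-qualified"
    return "qualified"
```

For example, an applicant holding every required skill and meeting the experience floor scores 1.0 (a 100% chance), while one with half the skills and too little experience scores 0.25 and is flagged as under-qualified.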
III. OBJECTIVE
In the current world, having a job is necessary for living a comfortable life. Getting a job is not at all an easy task, given the amount of competition there is, especially due to the pandemic situation. This project tells people their chances of being recruited by the company they have applied to.
IV. LITERATURE REVIEW
Internet usage has grown from practically zero to 70% of the population in the last ten years. Although research has lagged behind popular opinion, many people believe that the Internet has improved job-search efficiency for job seekers. Additionally, it appears that effort has been divided between different job-seeking activities as a result of the Internet. However, there is some indication that the unemployed are getting more selective about the positions they finally apply to. The unemployed are now more likely to have looked at adverts and to have contacted an employer directly. [7]
Proposed System
The system's objective is to enable job seekers to search for positions based on their talents, location, and desired role. Job seekers can perform all fundamental tasks, including adding and updating their information as well as retrieving it. To improve their chances of being shortlisted, job seekers can provide an updated version of their resume based on the capabilities the employer needs. The website also assists job seekers in creating a resume that adheres to standards and meets the level employers expect. “The benefits include data centralization, real-time status reporting, user-friendliness, convenience, and protection, among others. This programme lessens the burden of managing and keeping track of the workings of individual departments in a university.” [5] The job seeker's submitted email address will be used to alert them of any updates. The employer can update the roles he needs filled as well as view and evaluate all of the candidates who have applied for open positions. The applicant will be notified through email of any update on their application for the position. The employers' dashboard will show the list of candidates who have applied to the organization with all of their information, including the role applied for and a resume.
Fig. 1. System Overview
V. SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
5.1 Product Perspective
This is a new, self-contained product. The Waterfall model has been chosen for the project. Waterfall is also known as the linear sequential life cycle model: the work is divided into phases, and a phase starts only when the previous phase has been fully completed. Waterfall is appropriate when the following conditions hold:
- The system requirements are known, clear, and fixed.
- The product definition is stable.
- The technology is understood.
- There are no ambiguous requirements.
- The project is short.
- Ample resources with expertise are freely available.
Waterfall was chosen because we are aware of our system requirements, which amount to only a PC and a web-hosting service, and the product definition, a job portal that uses a simple arithmetic algorithm to rank candidates for jobs, is stable.
The technology needed here is a PC, a web-hosting service, and working knowledge of the following languages: JavaScript, PHP, HTML. As of yet there are no ambiguous requirements in the project; should any arise, they will be intimated in further versions of the SRS. The project is a small web application, and thus short.
5.2 Product Functions
Some of the major functions of the product are:
- Under-qualified candidates will get a message that they are not qualified enough for the job and need to apply for other positions; they will be given suggestions according to their skills.
- Over-qualified candidates will also receive a message that they are over-qualified for the job, along with suggestions.
- Qualified candidates will be given ranks according to their skills, which in turn gives them their chances of being recruited and likewise reduces the time consumed by the recruiter by surfacing the best possible options.
- The percentage chance of a person getting recruited is another major attraction of the website.
5.3 User Classes and Characteristics
The majority of users of our website are assumed to be in the 21-25 age group: students who have freshly graduated from college and are now searching for jobs. This group is well aware of technical advancements compared to other age groups. The site can be used by students from any background, whether B.Tech, B.Sc., B.A., or any other field. Fewer people in other age groups are looking for jobs, so they will use the website the least. The user expects to find a job that suits them and for which they are appropriately qualified (neither under- nor over-qualified), so that they can live a peaceful life ahead.
5.4 Operating Environment
This is a web-based system and hence will require operating environment for a client and server GUI. This will be operating in the following:
i) Distributed Database \(\rightarrow\) Use of distributed database helps in
a) Management of data with different levels of transparency.
b) The performance of the website gets improved.
c) It is easy to expand i.e., increasing database size, adding more data.
d) The reliability on the website is increased because of the distributed database.
ii) Client / Server System \(\rightarrow\) We use a client/server system because:
a) All the data is concentrated in a single place, i.e., the server, so there will be data protection and security.
b) It will be easy to upgrade, replace or relocate the nodes in the client server.
iii) Operating System: Windows \(\rightarrow\) We target Windows because the majority of people use Windows laptops, but any other operating system is fine as the site works on all of them.
iv) Database: MongoDB \(\rightarrow\) We are using MongoDB because:
a) Data security is better.
b) The performance is good.
v) Platform: NodeJS and JavaScript \(\rightarrow\) NodeJS with JavaScript is used for Back-End Development, and HTML5, JavaScript and CSS for the Front-End.
5.5 Design and Implementation Constraints
i) The speed of the website can be an issue. A page may take more than 5 to 7 seconds to load if it contains larger graphics (i.e., bigger images or videos).
ii) One more important design constraint may be the viewing mode of the website. The website is actually built for a normal screen size but if it is viewed from a mobile phone, it might be a problem in some cases only.
iii) The UI/UX part maybe an issue.
iv) As this is a basic web page, data security can be an issue. Data can be leaked.
v) The user needs to know English as the website is written in English language only.
vi) Sometimes there may be a delay in the delivery of OTP or any message or any mail to the user because of some issues.
vii) The responsibility of maintaining the website will fall completely on the maintenance team.
5.6 Interfaces
5.6.1 User Interface
As soon as the site opens, the logo of the website appears at the top. Next to it are options such as Jobs, Recruiters, Companies, Tools and Login. Under Jobs you can see all the jobs available in the country from all fields; under Recruiters, all the companies that are currently recruiting. In the Tools section you can change your profile, sign out, or see notifications. The last option is Sign In, through which you can sign into your account. There is also an option to set the user's preferred city and area of job search, such as Engineering, Teaching, or Research. If an image fails to display and the placeholder text "Image" is shown instead, the user should assume that an image belongs there but could not be loaded on that particular system. An error message will pop up if a valid email ID is not entered; the email ID must be a working one.
5.6.2 Hardware interfaces
There are no major hardware requirements. Just the basic ones like a computer or laptop or mobile with internet connectivity will work.
5.6.3 Software interfaces
i) The system that the customer is using must have an HTML editor to view all the code that we have for developing the website. There are many available like Brackets, Visual Studio Code etc.
ii) Some tools are needed to upload files to the website like uploading Resume, CV.
iii) The communication between the database and the web portal consists of operation concerning both reading and modifying the data.
5.6.4 Communication interfaces
Communication between the systems is important, as every page depends on another, but the way communication is achieved is not critical here since it is handled by the underlying operating system and web server. There are only two communication requirements:
Email: The person will be notified via email on every step of the job search process.
Mobile Number: The person will be notified via message on every step of the job search process.
5.7 System Features
5.7.1 Finding Jobs according to your wishes
5.7.1.1 Description and Priority
The stakeholders of this project are the end users, i.e., the employee, the employer, the database administrator, the front-end developer, the back-end developer, and the government. This website allows users to find jobs according to their wishes and tells them their percentage chance of being employed given the qualifications required by the recruiter.
5.7.1.2 Stimulus/Response Sequences.
The qualifications entered by the user are compared with those specified by the recruiter; points are awarded on the basis of matching qualifications, and a percentage is then computed which determines whether the person is under-qualified, over-qualified, or sufficiently qualified.
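A minimal sketch of this comparison in Python (the function names and the 40/90 cut-offs are illustrative assumptions, not the site's actual implementation):

```python
def match_percentage(candidate_skills, required_skills):
    """Percentage of the recruiter's required skills that the candidate holds."""
    required = {s.strip().lower() for s in required_skills}
    held = {s.strip().lower() for s in candidate_skills}
    if not required:
        return 100.0  # nothing required: trivially qualified
    return 100.0 * len(required & held) / len(required)

def classify(percent, lower=40.0, upper=90.0):
    """Bucket the percentage into the three categories described above.

    The lower/upper thresholds are assumed for illustration only.
    """
    if percent < lower:
        return "under-qualified"
    if percent > upper:
        return "over-qualified"
    return "qualified"
```

For example, a candidate holding two of three required skills scores about 66.7% and would be reported as qualified under these assumed thresholds.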
5.7.2 Functional Requirements User Sign Up or Login:
5.7.2.1 Description and Priority:
This is for the user. If the user is first time user then he must create a new account for himself. If he has an account, he can directly login into the account. The priority of this particular requirement is high.
5.7.2.2 Stimulus/Response Sequences:
This can be stimulated by going to the top right of the page i.e. the right side of the navigation bar and click on either Sign Up or Login. On clicking it takes to a new page where the details are to be entered.
5.7.3 Functional Requirements:
The user must complete this step to reach the next page for uploading documents. The user must have a valid email account.
5.7.3.1 Description and Priority:
After logging in, there will be options to enter details such as name, DOB, skills, interests and hobbies, and also an option to upload certificates, which is mandatory; the priority is high.
5.7.3.2 Stimulus/Response Sequences:
The options to enter the details will be clearly visible on any screen, and the user can understand them easily. After the details there is an option to upload certificates, and once everything is filled in, a Submit button, which on clicking displays "uploaded successfully".
5.7.4 Functional Requirements: Retrieve Password
These particular details can be used to find the percentage chances of him getting placed in a company.
5.7.4.1 Description and Priority:
If the user has forgotten his/her password and wants to log in again and reset it, he/she can use the Forgot Password button on the Login page. The priority of this step is medium, as it depends entirely on the user.
5.7.4.2 Stimulus/Response Sequences:
The Forgot Password option will be available in the Login Page and on clicking the button, he/she will be redirected to another page where he has to enter either his email or phone number to which an OTP will be sent and on entering the correct OTP, he/she will be able to reset the password.

Fig. 3. Use Case Diagram
VI. SOFTWARE DESIGN SPECIFICATION (SDS)
6.1 Design Methodology
Some principles to be followed while making the website are:
- The design must be consistent across web pages. For example, each page of the site will have a navigation bar with all the links.
- The website must be responsive or mobile compatible.
- For easy navigation we will have a navbar which can be used to navigate to all the pages of the website.
- To establish effortless communication with the visitors, information will be organized by making good use of headlines and sub-headlines, cutting the waffle, and using bullet points.
6.2 Pseudocodes for core components
SCORING ALGORITHM TO DETERMINE QUALIFICATION SCORE FOR EMPLOYEE
Step 1 - START
Step 2 - Import numpy, mysql.connector
Step 3 - Declare cursor function // This function helps us iterate through the rows //
Step 4 - Import reference database
Step 5 - Import database containing qualifications
Step 6 - Start a WHILE loop
Step 6.1 - In the FOR function, iterate through each row of the employee table containing qualifications.
Step 6.2 - For each iteration, search for matching qualification in the reference database.
Step 6.3 - Allocate the allotted point of the qualification in the reference database to the similar database in the employee table.
Step 6.4 - Keep adding the scores till iteration stops.
Step 6.5 - Assign variable a = (sum of scores)
Step 6.6 - End the WHILE loop
Step 7 - Repeat the same process for all the employees
Step 8 - Create a table which contains all the EmployeeIDs (primary key) and their respective total scores.
Step 9 – STOP
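The scoring pseudocode above can be sketched in Python; here the database rows are assumed to have already been fetched into plain dictionaries rather than read through `mysql.connector`, and the names are illustrative:

```python
def score_employee(qualifications, reference_points):
    """Sum the allotted reference points for each qualification the
    employee holds (Steps 6.1-6.5 of the pseudocode).

    qualifications: list of qualification names for one employee
    reference_points: dict mapping qualification name -> allotted points
    """
    return sum(reference_points.get(q, 0) for q in qualifications)

def score_all(employees, reference_points):
    """Build {employee_id: total_score} for every employee (Steps 7-8)."""
    return {emp_id: score_employee(quals, reference_points)
            for emp_id, quals in employees.items()}
```

A qualification with no entry in the reference data contributes zero points, mirroring the "search for matching qualification" step.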
SORTING ALGORITHM TO SORT THE EMPLOYEE BASED ON THEIR SCORE
Step 1 - START
Step 2 - Import numpy, mysql.connector
Step 3 - Declare a cursor function // This function helps us iterate through the rows //
Step 4 - Import the database created by the scoring algorithm
Step 5 - Read the data of each rows containing the employee ID and their respective score.
Step 6 - Apply QUICKSORT to this data.
Step 7 - Create a new table with the sorted list.
Step 8 - This list determines the applicant's scope of securing a job.
Step 9 - STOP
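The sorting step can be sketched the same way; Python's built-in `sorted` stands in here for the hand-written QUICKSORT named above, which does not change the resulting ranked list:

```python
def rank_applicants(scores):
    """Return (employee_id, score) pairs ordered best-first (Steps 5-7).

    scores: dict mapping employee ID -> total score from the scoring step.
    """
    return sorted(scores.items(), key=lambda pair: pair[1], reverse=True)
```

The position in the returned list is what determines the applicant's reported scope of securing the job.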
6.3 Architecture
The AccuJob system architecture is designed to maintain a database server which stores the information of all our clients and all the processes, a login/logout type interface, a server to run our sorting and scoring algorithms which determines job probability and a visual interface to display the end result of our processes. All this is carried out in synchronicity using transfer protocols, such as TCP/IP, FTP, HTTP etc.
6.4 Subsystem and Components
The user sees the homepage of the website, which offers several options: Jobs, which shows all the job titles or posts available on the website; Recruiters, which shows all the companies that have posted jobs on the site; and, under the Services section, Sample Resume and Interview Questions. The former gives job seekers sample resumes from which they can design their own, and the latter gives a list of interview questions the user can study to prepare for an interview. There are also Login and Signup options; clicking either gives two choices, one for an employee and one for a company.
For Job Seeker:
On clicking the Signup option for employee, the user is directed to the signup page, where the user needs to fill up some basic information like Name (first and last), email ID, phone number, password, address (line 1 and line 2), city, state, ZIP code and country. After filling up the details, user is taken to the profile page.
On clicking the sign-in option, the sign-in page opens, where the user needs to enter an email ID and password. If the password and email ID are correct, the dashboard opens and displays the jobs.
On the profile page the user needs to fill in his/her name, qualifications, skills, area of interest (the field in which the user wants to pursue a career) and preferred job location (city). After this is done, the dashboard opens.
On the dashboard, all available jobs are listed on the basis of the area of interest and preferred location given by the user, along with the name of the company, job title, pay per month, and last date to apply. If the user likes what is shown, he/she can click on the View Details button.
On clicking view details, the details given by the company are available including salary, company logo, company name, last date to apply, job title, about the company, skills required for the jobs and most importantly it shows the percentage chances of user getting recruited for the job. If the user likes everything, he can click on Apply now option.
On clicking Apply Now, a page opens where the user needs to upload his/her resume and submit it.
For Company:
On clicking the Signup option for company, the user is directed to the signup page, where the user needs to fill up some basic information like Company Name, company’s email ID, phone number, password, address (line 1 and line 2), city, state, ZIP code and country. After filling up the details, user is taken to profile page.
On clicking the sign in option, the sign in page opens where the user needs to enter the email ID and password. If the password and email ID is correct then dashboard opens which displays all the candidates.
On the profile page the user needs to fill the name of the company, Job title, available posts, salary, skills required, Job description, logo of the company and the last date to apply. After this is done, dashboard opens.
On the dashboard the Employers can see the list of people who have applied for the job along with their skills and qualifications. If the company is interested, they can click on View details. This page will show name of the candidate, the post he/she has applied for, about the candidate section includes Skills and qualifications of the candidate. It also shows the percentage chances of the candidate getting selected to the company. If the company likes it there is an option download resume from which the company can download the person’s resume. After reading the resume, if the employer likes the candidate, they can call him/her for the interview.
6.5 Database Schema
The MongoDB database has been used for this project. New collections have been created for storing data. They are:
- Employee – the Employee collection holds all the data of a particular employee.
- Employer – the Employer collection holds all the data of a particular employer.
VII. ANALYSIS MODELS
Fig. 4. Activity Diagram
Fig. 5. State Transition Diagram
Fig. 6. Class Diagram
Fig. 7. Entity Relationship Diagram
VIII. IMPLEMENTATION AND TESTING
In this part of the paper we describe the testing methods, the testing process, and the test results obtained for AccuJob. To facilitate testing, Cyclomatic Complexity analysis and the automated testing tool Selenium have been used.
8.1 Routes or Modules
Various algorithms make up the routes of the application; their pseudocode has been provided above, and Cyclomatic Complexity is computed for them as follows.
\[ \text{Cyclomatic complexity} = E - N + 2P \]
Where,
- \( E \) = number of edges in the flow graph
- \( N \) = number of nodes in the flow graph
- \( P \) = number of connected components in the flow graph (1 for a single program)
So cyclomatic complexity = 12 – 10 + 2(1) = 4
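The calculation can be checked directly, with E, N, and P taken from the flow graph described above:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# 12 edges, 10 nodes, one connected component, as in the paper.
print(cyclomatic_complexity(12, 10))  # 4
```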
8.2 Testing
Testing is a stage of software development that is as important as implementation. Testing helps us uncover errors, flaws, and defects in our code and in the implementation of the project. Based on the Cyclomatic Complexity calculated, test cases were generated. Manual testing was carried out on these components first, and the results were noted down. The results of manual testing are summarized in the following table.
<table>
<thead>
<tr>
<th>Test Case ID</th>
<th>Test Objective</th>
<th>Test Data</th>
<th>Expected Results</th>
<th>Actual Results</th>
<th>Test Pass/Fail</th>
</tr>
</thead>
<tbody>
<tr>
<td>TEST CASE 1</td>
<td>LOGIN (Employee)</td>
<td>username: root password: cmpt@123</td>
<td>Employee Dashboard</td>
<td>Display Employee Dashboard</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 2</td>
<td>LOGIN (Employer)</td>
<td>username: apple password: apply@123</td>
<td>Employee Dashboard</td>
<td>Display Employee Dashboard</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 3</td>
<td>SIGN UP (Employee)</td>
<td>FirstName: Rohit LastName: Barry username: rbt PhoneNumber: 867-684-3211 Skills: C++, Python...</td>
<td>Employee Login</td>
<td>Display Employee Login</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 4</td>
<td>SIGN UP (Employer)</td>
<td>CompanyName: Apple username: apple PhoneNumber: 867-684-3211 Skills Required: C++, Python...</td>
<td>Employee Login</td>
<td>Display Employee Login</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 5</td>
<td>Employee Dashboard</td>
<td>View Details of the Company button</td>
<td>Apply Page (Employer)</td>
<td>Display Resume Upload Page (Employees)</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 6</td>
<td>Apply Page (Employee)</td>
<td>Apply Now button</td>
<td>Resume Upload Page (Employees)</td>
<td>Display Resume Upload Page (Employees)</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 7</td>
<td>Resume Upload Page (Employees) before uploading</td>
<td>Upload file option</td>
<td>Resume Upload Page (Employees) after uploading</td>
<td>Display Resume Upload Page (Employees) after uploading</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 8</td>
<td>Resume Upload Page (Employees) after uploading</td>
<td>Submit button</td>
<td>Employee Dashboard (without the details of the company applied)</td>
<td>Display Employee Dashboard (without the details of the company applied)</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 9</td>
<td>Employer Dashboard</td>
<td>View details button</td>
<td>Call for interview page displaying the details</td>
<td>Display Call for interview page displaying the details</td>
<td>Pass</td>
</tr>
</tbody>
</table>
Fig. 10 Test Case Table part 1
<table>
<thead>
<tr>
<th>Test Case ID</th>
<th>Test Objective</th>
<th>Test Data</th>
<th>Expected Results</th>
<th>Actual Results</th>
<th>Test Pass/Fail</th>
</tr>
</thead>
<tbody>
<tr>
<td>TEST CASE 10</td>
<td>Call for interview page displaying the details of the particular employer (before downloading the resume)</td>
<td>Download resume button</td>
<td>Call for interview page displaying the details of the particular employer (after downloading the resume)</td>
<td>Display Resume Page (Employees)</td>
<td>Pass</td>
</tr>
<tr>
<td>TEST CASE 11</td>
<td>Call for interview Page displaying the details of the particular employer (after downloading the resume)</td>
<td>Submit button</td>
<td>Employee Dashboard</td>
<td>Display Employee Dashboard</td>
<td>Pass</td>
</tr>
</tbody>
</table>
Fig. 11 Test Case Table part 2
There are 17 test cases in total; they have been displayed in the tables above. We will look at the program flow diagram for the create-new-profile route to understand how the test cases were derived from it.
Once the user (employee/employer) finishes registration and uploads the relevant skills required for the creation, maintenance, and functioning of the account, we can use the entered information as part of the testing process.
The testing tool used for the testing of AccuJob was the automated testing software, Selenium. It tested various aspects of the website, such as the time duration of each request, errors, failed tests, passed tests, DNS and SSL behavior, etc.
Here are a few screenshots of the testing results obtained.

In Fig. 14 we can see the various tests that the automated software performed and the targets each test was supposed to reach.
The testing results therefore help us understand that the website is reliable and can maintain its efficiency under high-pressure conditions. It is also fairly accurate and error-free.
IX. RESULTS AND DISCUSSION
Fig. 14. Automated Testing Results for Employee Home page
Fig. 15. Index Page
Fig. 16. Home Page
Fig. 17. Employee login page
Fig. 18. Employee Dashboard
The AccuJob team treats this project as a journey: at the start it was just an idea, and then each member identified the skills required, learnt them, and successfully applied them. The team gained in-depth exposure to how a web app is built and how to make it user-friendly and intuitive.
By implementing the things learnt in the subject, the path became easier, as we knew how to proceed and what model to follow (Waterfall model) and how to conduct testing, how to manage documentation, etc.
Another important technical aspect the team learnt in this time was designing and implementing a user friendly and intuitive GUI (Graphic User Interface) which makes the web application attractive as well as functionally fluid.
As taught in the subject, testing and planning were not treated as two separate entities nor were there any specific phases to execute these operations. They were maintained and executed all along the development of the application, at times when the modules were independently created and also when they were joined to function as a whole.
Finally, it is worth saying how the team has experienced Software Engineering. The whole team feels it is an essential subject: anybody can learn how to code, but without knowing how to document things properly, or which model to follow to get the best results in the shortest time, things can become a mess. Software engineering methodologies help prevent that and raise efficiency to an optimum level.
X. ACKNOWLEDGMENT
We profusely thank our professor Dr. Swarnalatha P. for her constant guidance and motivation to help us create this project and also this final paper. Also, completing this project without utilizing the scope and opportunity provided by Vellore Institute of Technology, Vellore would have been impossible, and so, a large amount of appreciation goes to the institute too.
REFERENCES
CSCI 104
Runtime Complexity
Mark Redekopp
David Kempe
Runtime
• It is hard to compare the run time of an algorithm on actual hardware
– Time may vary based on speed of the HW, etc.
• The same program may take 1 sec. on your laptop but 0.5 second on a high performance server
• If we want to compare 2 algorithms that perform the same task we could try to count operations (regardless of how fast the operation can execute on given hardware)...
– But what is an operation?
– How many operations is: i++ ?
– i++ actually requires grabbing the value of i from memory and bringing it to the processor, then adding 1, then putting it back in memory. Should that be 3 operations or 1?
– It's painful to count 'exact' numbers of operations
• Big-O, Big-Ω, and Θ notation allows us to be more general (or "sloppy" as you may prefer)
Complexity Analysis
• To find upper or lower bounds on the complexity, we must consider the set of all possible inputs, I, of size, n
• Derive an expression, T(n), in terms of the input size, n, for the number of operations/steps required to solve the problem for a given input, i
– Some algorithms depend on i and n
• Find(3) in the list shown vs. Find(2)
– Others just depend on n
• Push_back / Append
• Which inputs though?
– Best, worst, or "typical/average" case?
• We will always apply it to the "worst case"
– That's usually what people care about
Note: Running time is not just based on an algorithm, BUT algorithm + input data
Big-O, Big-Ω
- \( T(n) \) is said to be \( O(f(n)) \) if...
- \( T(n) < a*f(n) \) for \( n > n_0 \) (where \( a \) and \( n_0 \) are constants)
- Essentially an upper-bound
- We'll focus on big-O for the worst case
- \( T(n) \) is said to be \( Ω(f(n)) \) if...
- \( T(n) > a*f(n) \) for \( n > n_0 \) (where \( a \) and \( n_0 \) are constants)
- Essentially a lower-bound
- \( T(n) \) is said to be \( Θ(f(n)) \) if...
- \( T(n) \) is both \( O(f(n)) \) AND \( Ω(f(n)) \)
Worst Case and Big-Ω
• What's the lower bound on `List::find(val)`
– Is it $\Omega(1)$ since we might find the given value on the first element?
– Well it could be if we are finding a lower bound on the 'best case'
• Big-Ω does **NOT** have to be synonymous with 'best case'
– Though many times it mistakenly is
• You can have:
– Big-O for the best, average, worst cases
– Big-Ω for the best, average, worst cases
– Big-Θ for the best, average, worst cases
Worst Case and Big-$\Omega$
- The key idea is an algorithm may perform differently for different input cases
- Imagine an algorithm that processes an array of size $n$ but depends on what data is in the array
- Big-$O$ for the worst-case says **ALL** possible inputs are bound by $O(f(n))$
- Every possible combination of data is at MOST bound by $O(f(n))$
- Big-$\Omega$ for the worst-case is attempting to establish a lower bound (at-least) for the worst case (the worst case is just one of the possible input scenarios)
- If we look at the first data combination in the array and it takes $n$ steps then we can say the algorithm is $\Omega(n)$.
- Now we look at the next data combination in the array and the algorithm takes $n^{1.5}$. We can now say worst case is $\Omega(n^{1.5})$.
- To arrive at $\Omega(f(n))$ for the worst-case requires you simply to find **AN** input case (i.e. the worst case) that requires at least $f(n)$ steps
Deriving T(n)
• Derive an expression, T(n), in terms of the input size for the number of operations/steps that are required to solve a problem
• If the first branch is taken => 4 steps
• If the else-if branch is taken => 5 steps
• Worst case => T(n) = 5
```cpp
#include <iostream>
using namespace std;
int main()
{
    int i = 0;      // 1
    int x = 5;      // 1
    if(i < x)       // 1
    {
        x--;        // 1
    }
    else if(i > x)  // 1
    {
        x += 2;     // 1
    }
    return 0;
}
```
Deriving $T(n)$
- Since loops repeat you have to take the sum of the steps that get executed over all iterations
- $T(n) = \sum_{i=0}^{n-1} 5 = 5n$
- Or you can set up a recurrence relationship:
- $T(n) = T(n - 1) + 5$
  $= T(n - 2) + 5 + 5$
  $= \cdots = \sum_{i=0}^{n-1} 5 = 5n$
  $= \sum_{i=0}^{n-1} O(1) = O(n)$
```cpp
#include <iostream>
using namespace std;
int main()
{
for(int i=0; i < N; i++){
int x = 5;
if(i < x){
x--;
}
else if(i > x){
x += 2;
}
}
return 0;
}
```
Common Summations
- \[ \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \theta(n^2) \]
- This is called the arithmetic series
- \[ \sum_{i=1}^{n} \theta(i^p) = \theta(n^{p+1}) \]
- This is a general form of the arithmetic series
- \[ \sum_{i=0}^{n} c^i = \frac{c^{n+1}-1}{c-1} = \theta(c^n) \]
- This is called the geometric series
- \[ \sum_{i=1}^{n} \frac{1}{i} = \theta(\log n) \]
- This is called the harmonic series
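These closed forms are easy to sanity-check numerically; a brute-force sketch (the function names are ours):

```cpp
#include <cassert>

// Brute-force the arithmetic series sum 1 + 2 + ... + n.
long long arithmeticSum(long long n) {
    long long s = 0;
    for (long long i = 1; i <= n; ++i) s += i;
    return s;
}

// Brute-force the geometric series sum c^0 + c^1 + ... + c^n.
long long geometricSum(long long c, long long n) {
    long long s = 0, term = 1;
    for (long long i = 0; i <= n; ++i) { s += term; term *= c; }
    return s;
}
```

`arithmeticSum(100)` matches $100 \cdot 101 / 2 = 5050$, and `geometricSum(2, 10)` matches $(2^{11} - 1)/(2 - 1) = 2047$.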
Skills You Should Gain
• To solve these running time problems try to break the problem into 2 parts:
• FIRST, setup the expression (or recurrence relationship) for the number of operations
• SECOND, solve
– Unwind the recurrence relationship
– Develop a series summation
– Solve the series summation
Loops
• Derive an expression, \( T(n) \), in terms of the input size for the number of operations/steps that are required to solve a problem
\[
T(n) = \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \theta(1) = \sum_{i=0}^{n-1} \theta(n) = \Theta(n^2)
\]
```cpp
#include <iostream>
using namespace std;
const int n = 256;
unsigned char image[n][n];
int main()
{
for(int i=0; i < n; i++){
for(int j=0; j < n; j++){
image[i][j] = 0;
}
}
return 0;
}
```
Matrix Multiply
- Derive an expression, \( T(n) \), in terms of the input size for the number of operations/steps that are required to solve a problem
- \( T(n) = \)
\[
= \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} \theta(1) = \theta(n^3)
\]
```cpp
#include <iostream>
using namespace std;
const int n = 256;
int a[n][n], b[n][n], c[n][n];
int main()
{
for(int i=0; i < n; i++){
for(int j=0; j < n; j++){
c[i][j] = 0;
for(int k=0; k < n; k++){
c[i][j] += a[i][k]*b[k][j];
}
}
}
return 0;
}
```
Sequential Loops
- Is this also $n^3$?
- No!
- 3 for loops, but not nested
- $O(n) + O(n) + O(n) = 3*O(n) = O(n)$
```cpp
#include <iostream>
using namespace std;
const int n = 256;
unsigned char image[n][n];
int main()
{
for(int i=0; i < n; i++)
{
image[0][i] = 5;
}
for(int j=0; j < n; j++)
{
image[1][j] = 5;
}
for(int k=0; k < n; k++)
{
image[2][k] = 5;
}
return 0;
}
```
Counting Steps
• It may seem like you can just look for nested loops and then raise n to that power
– 2 nested for loops => $O(n^2)$
• But be careful!!
• You have to count steps
– Look at the update statement
– Outer loop increments by 1 each time so it will iterate N times
– Inner loop updates by dividing x in half each iteration
– After 1st iteration => $x=n/2$
– After 2nd iteration => $x=n/4$
– After 3rd iteration => $x=n/8$
– Say $k^{th}$ iteration is last => $x = n/2^k = 1$
– Solve for $k$
– $k = \log_2(n)$ iterations
– $O(n\log(n))$
```cpp
#include <iostream>
using namespace std;
const int n = 256;
int main()
{
for(int i=0; i < n; i++){
int y=0;
for(int x=n; x != 1; x=x/2){
y++;
}
cout << y << endl;
}
return 0;
}
```
Analyze This
- Count the steps of this example?
- \( T(n) = T(n-1) + n-1 \)
- \( 0 + 1 + ... + n-2 + n-1 \)
- \( (n-1)n/2 \)
```cpp
#include <iostream>
using namespace std;
const int n = 256;
int a[n];
int main()
{
for(int i=0; i < n; i++){
a[i] = 0;
for(int j=0; j < i; j++){
a[i] += j;
}
}
return 0;
}
```
Analyze This
• Count the steps of this example?
\[ \sum_{i=0}^{\lfloor \lg(n) \rfloor} \sum_{j=0}^{2^i - 1} 1 = \sum_{i=0}^{\lfloor \lg(n) \rfloor} 2^i \]
• Use the geometric sum eqn.
\[ \sum_{i=0}^{n-1} a^i = \frac{1-a^n}{1-a} \]
• So our answer is...
\[ \frac{1-2^{\lfloor \lg(n) \rfloor+1}}{1-2} = 2 \cdot 2^{\lfloor \lg(n) \rfloor} - 1 \le 2n - 1 = O(n) \]
```cpp
for (int i = 0; i <= log2(n); i ++)
for (int j=0; j < (int) pow(2,i); j++)
cout << j;
```
Another Example
- Count steps here...
- Think about how many times if statement will evaluate true
- \( T(n) = \sum_{i=0}^{n-1}(\theta(1) + O(n)) \)
- \( T(n) = \)
```c++
for (int i = 0; i < n; i++)
{
cout << "i: " ;
int m = sqrt(n);
if ( i % m == 0 ){
for (int j=0; j < n; j++)
cout << j << " " ;
}
cout << endl;
}
```
Another Example
- Count steps here...
- Think about how many times if statement will evaluate true
- \( T(n) = \sum_{i=0}^{n-1} (\theta(1) + O(n)) \)
- \( T(n) = \sum_{i=0}^{n-1} \theta(1) + \sum_{k=1}^{\sqrt{n}} \sum_{j=1}^{n} \theta(1) \)
- \( T(n) = \theta(n) + \sum_{k=1}^{\sqrt{n}} \theta(n) \)
- \( T(n) = \theta(n) + \theta(n \cdot \sqrt{n}) \)
- \( T(n) = \theta(n^{3/2}) \)
```cpp
for (int i = 0; i < n; i++)
{
cout << "i: " ;
int m = sqrt(n);
if( i % m == 0){
for (int j=0; j < n; j++)
cout << j << " ";
}
cout << endl;
}
```
What about Recursion
• Assume N items in the linked list
• $T(n) = 1 + T(n-1)$
• $= 1 + 1 + T(n-2)$
• $= 1 + 1 + 1 + T(n-3)$
• $= n = O(n)$
```cpp
void print(Item* head)
{
if(head==NULL) return;
else {
cout << head->val << endl;
print(head->next);
}
}
```
Binary Search
• Assume N items in the data array
• \( T(n) = \)
– \( O(1) \) if base case
– \( O(1) + T(n/2) \)
• \( = 1 + T(n/2) \)
• \( = 1 + 1 + T(n/4) \)
• \( = k + T(n/2^k) \)
• Stop when \( 2^k = n \)
– Implies \( \log_2(n) \) recursions
• \( O(\log_2(n)) \)
```c
int bsearch(int data[],
int start, int end,
int target)
{
  if(start > end)
    return -1;
  int mid = (start+end)/2;
  if(target == data[mid])
    return mid;
  else if(target < data[mid])
    return bsearch(data, start, mid-1, target);
  else
    return bsearch(data, mid+1, end, target);
}
```
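The $\log_2(n)$ bound on the recursion depth can be observed directly by counting calls. A self-contained variant (the call-counter parameter is our addition for illustration):

```cpp
#include <cassert>

// Binary search over data[start..end] (inclusive) that counts its
// recursive calls; the count is bounded by log2(n) + 1 for n elements.
int bsearchCounted(const int data[], int start, int end,
                   int target, int& calls) {
    ++calls;
    if (start > end) return -1;           // empty range: not found
    int mid = start + (end - start) / 2;  // avoids overflow of start+end
    if (target == data[mid]) return mid;
    if (target < data[mid])
        return bsearchCounted(data, start, mid - 1, target, calls);
    return bsearchCounted(data, mid + 1, end, target, calls);
}
```

Searching an 8-element array never takes more than $\log_2(8) + 1 = 4$ calls, hit or miss.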
AMORTIZED RUNTIME
Example
• You love going to Disneyland. You purchase an annual pass for $240. You visit Disneyland once a month for a year. Each time you go you spend $20 on food, etc.
– What is the cost of a visit?
• Your annual pass cost is spread or "amortized" (or averaged) over the duration of its usefulness
• Often times an operation on a data structure will have similar "irregular" costs that we can then amortize over future calls
Amortized Array Resize Run-time
- What is the run-time of insert or push_back:
- If we have to resize?
- $O(n)$
- If we don't have to resize?
- $O(1)$
- Now compute the total cost of a series of insertions using resize by 1 at a time
- Each insert now costs $O(n)$... not good
Amortized Array Resize Run-time
- What if we resize by adding 5 new locations each time
- Start analyzing when the list is full...
- 1 call to insert will cost: 5
- What can I guarantee about the next 4 calls to insert?
- They will cost 1 each because I have room
- After those 4 calls the next insert will cost: 10
- Then 4 more at cost=1
- If the list is size n and full
- Next insert cost = n
- 4 inserts after than = 1 each
- Cost for 5 inserts = n+5
- Runtime = cost / insert = (n+5)/5 = O(n)
Consider a Doubling Size Strategy
• Start when the list is full and at size n
• Next insertion will cost?
– O(n+1)
• How many future insertions will be guaranteed to be cost = 1?
– n-1 insertions
– At a cost of 1 each, I get n-1 total cost
• So for the n insertions my total cost was
– n+1 + n-1 = 2*n
• Amortized runtime is then:
– Cost / insertions
– O(2*n / n) = O(2)
= O(1) = constant!!!
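The roughly $2n$ total cost of the doubling strategy can be confirmed by simulation. A sketch using the cost model above: each insert costs 1, and a resize copies every current element first:

```cpp
#include <cassert>

// Total cost of n push_backs with a doubling resize strategy:
// each insert costs 1; when the array is full, copying the existing
// `size` elements is charged before the insert proceeds.
long long doublingTotalCost(long long n) {
    long long cap = 1, size = 0, cost = 0;
    for (long long i = 0; i < n; ++i) {
        if (size == cap) { cost += size; cap *= 2; }  // resize: copy all
        cost += 1;                                    // the insert itself
        ++size;
    }
    return cost;
}
```

For $n = 1024$ the copies sum to $1 + 2 + \cdots + 512 = 1023$, so the total is $1024 + 1023 = 2047 \approx 2n$ — constant amortized cost per insert.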
Another Example
• Let's say you are writing an algorithm to take a $n$-bit binary combination (3-bit and 4-bit combinations are to the right) and produce the next binary combination
• Assume all the cost in the algorithm is spent changing a bit (define that as 1 unit of work)
• I could give you any combination, what is the worst case run-time? Best-case?
– $O(n)$ => 011 to 100
– $O(1)$ => 000 to 001
Another Example
• Now let's consider the program that generates all the combinations sequentially (in order)
– Starting at 000 => 001 : cost = 1
– Starting at 001 => 010 : cost = 2
– Starting at 010 => 011 : cost = 1
– Starting at 011 => 100 : cost = 3
– Starting at 100 => 101 : cost = 1
– Starting at 101 => 110 : cost = 2
– Starting at 110 => 111 : cost = 1
– Starting at 111 => 000 : cost = 3
– Total = 14 / 8 calls = 1.75
• Repeat for the 4-bit
– 1 + 2 + 1 + 3 + 1 + 2 + 1 + 4 + ...
– Total = 30 / 16 = 1.875
• As n gets larger...Amortized cost per call = 2
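The same amortized constant can be computed by brute force: step an $n$-bit counter through all $2^n$ increments (wrapping back to 0 at the end, as above) and count changed bits via XOR:

```cpp
#include <cassert>

// Total bit flips to step an nbits-wide counter through all 2^nbits
// increments, wrapping from all-ones back to zero at the end.
long long totalBitFlips(int nbits) {
    long long flips = 0;
    long long limit = 1LL << nbits;
    for (long long v = 0; v < limit; ++v) {
        long long diff = v ^ ((v + 1) % limit);  // bits changed this step
        while (diff) { flips += diff & 1; diff >>= 1; }
    }
    return flips;
}
```

`totalBitFlips(3)` gives 14 and `totalBitFlips(4)` gives 30, matching the slide totals; in general the cycle costs $2 \cdot 2^n - 2$ flips, so the amortized cost per increment approaches 2.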
Importance of Complexity
<table>
<thead>
<tr>
<th>N</th>
<th>O(1)</th>
<th>O(log₂n)</th>
<th>O(n)</th>
<th>O(n*log₂n)</th>
<th>O(n²)</th>
<th>O(2ⁿ)</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>20</td>
<td>1</td>
<td>4.3</td>
<td>20</td>
<td>86.4</td>
<td>400</td>
<td>1,048,576</td>
</tr>
<tr>
<td>200</td>
<td>1</td>
<td>7.6</td>
<td>200</td>
<td>1,528.8</td>
<td>40,000</td>
<td>1.60694E+60</td>
</tr>
<tr>
<td>2000</td>
<td>1</td>
<td>11.0</td>
<td>2000</td>
<td>21,931.6</td>
<td>4,000,000</td>
<td>~10^602 (overflow)</td>
</tr>
</tbody>
</table>
Definition of ASMs
Syntax and Semantics
Università di Pisa, Dipartimento di Informatica, boerger@di.unipi.it
Universität Ulm, Abteilung Informatik, alexander.raschke@uni-ulm.de
See Ch. 7 of Modeling Companion
http://modelingbook.informatik.uni-ulm.de
Syntax: ASM program/rule (over given signature $\Sigma$)
Update rule: $f(t_1, \ldots, t_n) := t$ is an ASM program
- for every $n$-ary function symbol $f \in \Sigma$ where $n \geq 0$ and $t_i, t$ are expressions (terms) over $\Sigma$
- meaning: evaluate $(t_1, \ldots, t_n, t)$, use the result $(a_1, \ldots, a_n, a)$ to update the interpretation of $f$ at argument $(a_1, \ldots, a_n)$ to value $a$.
Conditional rule: if $\text{Condition}$ then $P$ else $Q$ is an ASM program
- for each Boolean expression $\text{Condition}$ and ASM programs $P, Q$
- meaning: if $\text{Condition}$ evaluates to $\text{true}$ execute $P$, otherwise $Q$.
Block (Par) rule: $P \text{ par} Q$ is an ASM program
- for any ASM programs $P, Q$
- meaning: execute $P$ and $Q$ in parallel, simultaneously in the given state.
Let rule: let $x = t$ in $P$ is an ASM program
- for each expression (term) \( t \) and ASM program \( P \)
- meaning: evaluate \( t \), assign the computed value to \( x \) and then execute \( P \) with this value for \( x \) (‘call by value’).
NB. The scope of \( x \) is \( P \).
Call (Macro) rule: \( Q(t_1, \ldots, t_n) \) is an ASM program
- for every rule declaration \( Q(x_1, \ldots, x_n) = P \) where
- \( P \) is an ASM program
- \( t_i \) are expressions
- all free variables in \( P \) are among \( x_1, \ldots, x_n \)
- meaning: execute \( Q \) with parameters \((t_1, \ldots, t_n)\) (‘call by name’).
Syntax: choose and forall programs
Forall rule: forall $x$ with $Property$ do $P$ is an ASM program
- for each Boolean-valued expression \textit{Property} and ASM program \( P \)
- meaning: execute simultaneously every \( P(x) \) where \( x \) satisfies the \( \textit{Property}(x) \) (in the given state).
NB. The scope of \( x \) ranges over \textit{Property} and \( P \).
Choose rule: choose $x$ with $Property$ do $P$ is an ASM program
- for each Boolean-valued expression \textit{Property} and ASM program \( P \)
- meaning: choose an \( x \) satisfying \( \textit{Property}(x) \) and execute with it \( P(x) \).
NB. The scope of \( x \) ranges over \textit{Property} and \( P \).
Signature (vocabulary) $\Sigma$ (of an ASM program) is a set of function symbols $f^n$ of arity $n \geq 0$ (comprising all those which occur in the expressions of the ASM program)
– including 0-ary functions (constants) $true, false, undef$ (static)
– possibly including a 0-ary (dynamic) function $self$
– possibly including a (dynamic) unary function $new$
Predicates/Relations are treated as characteristic functions (with values in $\{true, false, undef\}$)
Sometimes $skip$ is used as ASM program which does nothing
An ASM is defined over a signature by a main program (with name of arity 0), a set of rule declarations and a set of initial states.
A 'derived' \( f \) has a fixed definition for each \( f(x) \). For 'controlled' \( f \), each \( f(x) \) can be read and written by and only by the given ASM program \( P \). For 'monitored' resp. 'out' \( f \), \( f(x) \) is read-only resp. write-only for \( P \).\(^1\)
\(^1\) Figure from AsmBook, © 2003 Springer-Verlag Berlin Heidelberg, reused with permission
A domain (superuniverse) $D$ together with an interpretation of each function symbol $f^n$ in $\Sigma$ as a function $f_S : D^n \rightarrow D$ is called a state $S$ (of the given ASM program) with $true$, $false$, $undef$ interpreted by pairwise distinct elements.
Expressions $t$ are evaluated in state $S$ in the usual way, denoted by $\text{eval}(t, S, env)$.
- The environment is an interpretation of all free variables (in the given ASM program) by elements of the superuniverse.
- $S$ and/or $env$ are omitted if they are clear from the context.
Elements of the superuniverse are also called elements of a state.
States (the function interpretation) may change, but the superuniverse does not change (see Reserve set below).
State changes by sets of function updates
- a location (in $S$) is a pair $(f^n, (v_1, \ldots, v_n))$ (memory unit)
- with $f^n \in \Sigma$ and elements $v_i$ (of $S$)
- $f^n_S(v_1, \ldots, v_n)$ is called the content of $l$ in $S$, denoted $S(l)$
- $f$ is called the function symbol of $l$, denoted $fctSymbol(l)$
- an update (in $S$) is a location/value pair $(l, v)$ where
- $l = (f^n, (v_1, \ldots, v_n))$ is a location (of $S$)
- $v$ is an element (in $S$), -- used as value to update the content of $l$
- an update set is a set of updates
- an update set is consistent if it does not contain two updates for the same location (i.e. with different values)
- firing an update set $U$ in state $S$ yields the sequel $S + U$ of $S$ whose content of any location $l$ is defined by:
$$S + U(l) = \begin{cases} v & \text{if there is some } (l, v) \in U \\ S(l) & \text{if there is no } (l, v) \in U \end{cases}$$
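To illustrate the $S + U$ rule, a minimal sketch in C++ (the string-keyed location encoding and integer contents are our simplification, not part of the definition): locations present in $U$ take their new value, all others keep their content from $S$.

```cpp
#include <cassert>
#include <map>
#include <string>

// States and update sets map locations to contents. Using std::map for
// the update set makes it consistent by construction: it can hold only
// one value per location.
using State = std::map<std::string, int>;
using UpdateSet = std::map<std::string, int>;

// Fire U in S: the sequel S + U agrees with U on updated locations
// and with S everywhere else.
State fire(State S, const UpdateSet& U) {
    for (const auto& [loc, val] : U) S[loc] = val;
    return S;
}
```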
The current state $S$ of a given ASM program is denoted by $\text{currstate}$.
- $\text{currstate}$ is viewed as a derived function.
- $\text{currstate}$ is implicitly parameterized by an ASM program or a program executing agent.
The values of $\text{currstate}$ can be seen in two ways:
- structurally: as a family of function tables over $D$: $(D, (f_S)_{f \in \Sigma})$ — what in logic is called an algebra (or Tarski structure)
- elementwise: as the set of all memory units
$$((f, (v_1, \ldots, v_n)), f_S(v_1, \ldots, v_n))$$
over $D$ and $\Sigma$ with their value, i.e. pairs of locations with their content
ASM program semantics: computed update sets
An ASM \( P \)rogram \( Yields \) in a \( S \)tate with a given \( env \)ironment (interpretation of its free variables) an \( U \)pdate set recursively:
\[
Yields(\text{skip}, S, env, \emptyset) \quad -- \text{skip does nothing}
\]
\[
Yields(f(t_1, \ldots, t_n) := t, S, env, \{((f, (v_1, \ldots, v_n)), v)\}) \quad -- \text{assign}
\]
where \( v_i = \text{eval}(t_i, S, env) \) and \( v = \text{eval}(t, S, env) \)
\[
Yields(\text{if Cond then } P \text{ else } Q, S, env, U) \quad \text{if}
\]
\[
Yields(P, S, env, U) \text{ and eval(Cond, S, env) = true}
\]
\[
\text{or } Yields(Q, S, env, U) \text{ and eval(Cond, S, env) = false}
\]
\[
Yields(P \text{ par } Q, S, env, U \cup V) \quad \text{if}
\]
\[
Yields(P, S, env, U) \text{ and } Yields(Q, S, env, V)
\]
Copyright CC BY-NC-SA 4.0
ASM program semantics: let, forall, Call
\[ \text{Yields}(\text{let } x = t \text{ in } P, S, env, U) \text{ if} \]
\[ \text{Yields}(P, S, env[x \mapsto \text{eval}(t, S, env)], U) \quad \text{-- call by value} \]
\[ \text{Yields}(\text{forall } x \text{ with Prop do } P, S, env, \bigcup_{a \in I} U_a) \text{ if} \]
\[ \text{forall } a \in I \text{ Yields}(P, S, env[x \mapsto a], U_a) \]
\[ \text{where } I = \{ a \mid \text{eval}(\text{Prop}(a), S, env[x \mapsto a]) = \text{true} \} \quad \text{-- forall} \]
\[ \text{Yields}(Q(t_1, \ldots, t_n), S, env, U) \text{ if} \]
\[ \text{Yields}(P(x_1/t_1, \ldots, x_n/t_n), S, env, U) \]
\[ \text{where } Q(x_1, \ldots, x_n) = P \text{ is a rule declaration} \quad \text{-- call by name} \]
NB. Up to here, \( U \) is even a function of \( P, S, env \). There is no non-determinism. So one can write \( U = \text{Upd}(P, S, env) \) instead of \( \text{Yields}(P, S, env, U) \).
\[ Yields(\textbf{choose } x \textbf{ with } Prop \textbf{ do } P, S, env, U) \text{ if} \]
\[ \text{forsome } a \text{ with } \text{eval}(Prop(a), S, env[x \mapsto a]) = \text{true}: \; Yields(P, S, env[x \mapsto a], U) \]
\[ Yields(\textbf{choose } x \textbf{ with } Prop \textbf{ do } P, S, env, \emptyset) \quad \text{-- if no choice do nothing} \]
\[ \text{if forall } a: \; \text{eval}(Prop(a), S, env[x \mapsto a]) = \text{false} \]
An ASM with main rule $P$ can make a move (or step) from state $S$ (with given $env$) to the sequel state $S' = S + U$, written $S \Rightarrow_P S'$, if $Yields(P, S, env, U)$ for a consistent set $U$ of updates.
The updates in $U$ are called internal to distinguish them from updates of monitored or shared locations; the sequel is called the next internal state.
A run or execution of $P$ is a finite or infinite sequence $S_0, S_1, \ldots$ of states (of the signature of $P$) such that
- $S_0$ is an initial state,
- for each $n$
- either $S_n \Rightarrow_P S'_n$ and $S_{n+1} = S'_n + U$ with a consistent update set $U$ produced by the environment for monitored or shared locations
- or $P$ cannot make a move in state $S_n$ (i.e. produces an inconsistent update set). In this case $S_n$ is called the last state in the run.
Reserve set and the function `new`
- `new (X)` provides a ‘fresh’ element and makes it an element of $X$
- ‘Fresh’ elements come from a (dynamic) Reserve set which contains elements of a state that are not in the domain or range of any basic function of the state.
- Parallel calls of `new` are assumed to provide different elements.
The effect of `let x = new (X) in P` is often described by:
```
import x do
X(x) := true
P
```
with corresponding `import` rules. See AsmBook for details.
A multi-agent ASM $\mathcal{M}$ is a family
\[(ag(p), pgm(p))_{p \in \text{Process}}\]
of single-agent ASMs consisting of a set of Processes $p$ viewed as
- agents $ag(p)$ which execute step by step (‘sequentially’)
- each its ASM program $pgm(p)$ (of signature $\Sigma_p$)
- interacting with each other via reading/writing in designated (shared or input/output) locations.
$ag : \text{Process} \rightarrow \text{Agent}$, $pgm : \text{Process} \rightarrow \text{AsmRule}$ may be dynamic.
Atomic reads/writes in concurrent ASM runs
- A single agent ASM
- performs in each state $S_n$ of a run both, reads and writes, as one read&write step (one atomic action) resulting in the sequel state $S'_n$
- is synchronized with its environment which (in one atomic step) updates $S'_n$ to the next state $S_{n+1}$ in the run.
- In concurrent ASM runs, different agents
- may perform their read/write actions asynchronously, reading in one state and writing to another state, each agent at its own speed,
- interact via reads/writes of interaction (i.e. in/shared/out) locations.
- Thus we emulate an atomic read&write step of $pgm(p)$ by a program $ConcurStep(pgm(p))$ to perform either directly this atomic read&write step of $pgm(p)$ or three consecutive atomic actions:
- read&SaveGlobalData, LocalWriteStep, WriteBack
which in a concurrent run may happen asynchronously, in different states (at different moments of time).
Multi-Agent ASM: Semantics
A concurrent run of a multi-agent ASM $\mathcal{M}$ is
- a sequence $(S_0, P_0), (S_1, P_1), \ldots$ of states $S_n$, subsets $P_n \subseteq \text{Process}$
- such that each state $S_{n+1}$ is obtained from $S_n$ by applying to it all the updates computed by any process $p \in P_n$
- formally $S_{n+1} = S_n + \bigcup_{p \in P_n} U_p$ where for given environment $Yields(\text{ConcurStep}(\text{pgm}(p)), S_n, env, U_p)$ holds.
The run terminates in state $S_n$ if the updates computed by the agents in $P_n$ are inconsistent.
NB. We define $\text{ConcurStep}(\text{pgm}(p))$ such that each of its possible substeps (which together emulate one read&write step of $\text{pgm}(p)$), when executed by $p \in P_n$, is an atomic single-agent read&write step in $S_n$.
NB. The signature of states is the union of $\Sigma_p$ for all $p \in \text{Process}$.
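A synchronous concurrent step can be sketched as follows (an illustration with assumed names, covering only the case where each scheduled agent directly computes its update set in $S_n$; the union of the agents' update sets is applied, and a clash terminates the run):

```python
# Sketch of S_{n+1} = S_n + union of U_p for p in P_n; names are ours.

def concurrent_step(state, programs):
    """Apply the union of the update sets of all scheduled programs,
    or return None if the combined update set is inconsistent."""
    union = []
    for pgm in programs:
        union.extend(pgm(state))
    seen = {}
    for loc, val in union:
        if loc in seen and seen[loc] != val:
            return None  # inconsistent: the run terminates here
        seen[loc] = val
    new_state = dict(state)
    new_state.update(seen)
    return new_state

inc_x = lambda s: [("x", s["x"] + 1)]
set_y = lambda s: [("y", s["x"])]   # reads x in S_n, writes y
clash = lambda s: [("x", 99)]

s1 = concurrent_step({"x": 0, "y": 0}, [inc_x, set_y])
s2 = concurrent_step({"x": 0, "y": 0}, [inc_x, clash])
```

Note that both agents in `s1` read the same state $S_n$, so `set_y` copies the old value of `x`; in `s2` the two writes to `x` clash and the step is undefined.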
Interactive and local states in concurrent ASM runs
When \( p \in P_n \) contributes to build \( S_{n+1} \) by executing one of the \texttt{ConcurSteps} of \( \text{pgm}(p) \), it is in one of three (resp. two) modes:
- in \textit{interactive} mode \( p \) reads in state \( S_n \) the data needed to perform the step described by the given \( \text{pgm}(p) \). To build \( S_{n+1} \) out of \( S_n \):
- either \( p \) directly computes its update set \( U_p \) in \( S_n \) and applies it to \( S_n \), possibly updating some interaction locations,
- or \( p \) does \texttt{SaveGlobalData} locally and switches to \textit{localEmulation} mode to compute \( U_p \) locally, updating only local locations;
- in \textit{localEmulation} mode \( p \) computes a local copy of \( U_p \), using the previously saved global data, and switches to \textit{writeBack} mode, updating only local locations;
- in \textit{writeBack} mode \( p \) will \texttt{WriteBack} to those (globally visible) interaction locations whose values it has updated locally (by executing an assignment \( f_p(s) := t \) in its preceding mode = \textit{localEmulation}).
NB. In the 2-step version \textit{writeBack} mode is suppressed.
Three-step version of **ConcurStep**\((pgm)\)
**LOCALWriteStep**\((pgm)\) results from replacing in *pgm*
- every in/shared/out function symbol \(f\) by a new local function symbol \(f_p\), where \(p = ag(pgm)\) (used to locally **SAVEGLOBALDATA**)
- adding **INSERT**\((updData(f_p, s, t), GlobalUpd)\) in parallel to each \(f_p(s) := t\) (used to define **WRITEBACK** to shared/out locations)
Two-step version of $\text{ConcurStep}(pgm)$
**choose** $M \in \{\text{Read\&WriteStep}(pgm), \text{ReadStep}(pgm)\}$ **do** $M$
**if** $mode = \text{localEmulation}$ **then**
$\text{LocalEmulation}(pgm)$
$mode := \text{interactive}$
**where**
$\text{Read\&WriteStep}(pgm) =$
**if** $mode = \text{interactive}$ **then** $pgm$
$\text{ReadStep}(pgm) =$
**if** $mode = \text{interactive}$ **then**
$\text{SaveGlobalData}(pgm)$
$mode := \text{localEmulation}$
$\text{LocalEmulation}(pgm) =$
$\text{LocalWriteStep}(pgm) \; \text{seq} \; \text{WriteBack}(pgm)$
**NB.** Turbo ASM operator `seq` guarantees atomic 1-step execution.
Submachines of ConcurStep\( (p) \)
\( \text{SaveGlobalData}(pgm) \) and \( \text{WriteBack}(pgm) \) transfer values between the globally visible interaction functions and their local copies.
- \( \text{SaveGlobalData}(pgm) \) copies the current values of monitored and shared function terms \( f(t) \) of \( pgm \) into a local copy \( f_p(t) \) which is controlled by \( p \) (\( p = \text{self} = ag(pgm) \)):
\[
\text{SaveGlobalData}(pgm) = \forall f \in \text{Monitored} \cup \text{Shared} \, f_p := f
\]
NB. \( f := g \) abbreviates \( \forall \text{args} \, f(\text{args}) := g(\text{args}) \)
- \( \text{WriteBack}(pgm) \) is the inverse copying, of the just updated local values for output and shared function terms, back to the ‘global’ terms in \( pgm \) (NB. \( \text{GlobalUpd} \) is local, controlled by \( p = \text{self} \)):
\[
\text{WriteBack}(pgm) = \\
\forall \text{updData}(f_p, s, t) = ((f_p, \text{args}), \text{val}) \in \text{GlobalUpd} \\
f(\text{args}) := \text{val} \\
\text{GlobalUpd} := \emptyset \quad \text{-- } \text{GlobalUpd \ assumed to be initially empty}
\]
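The interplay of the three phases can be illustrated with a small Python sketch (the class, data model, and names are ours): the agent snapshots the shared locations, computes on the snapshot while another agent interleaves, and finally writes its recorded shared updates back:

```python
# Three-phase emulation sketch: SaveGlobalData / LocalWriteStep / WriteBack.

class Agent:
    def __init__(self, pgm, shared_keys):
        self.pgm = pgm                  # local copies -> list of updates
        self.shared_keys = shared_keys
        self.mode = "interactive"
        self.local = {}
        self.global_upd = []            # plays the role of GlobalUpd

    def read_step(self, shared):        # SaveGlobalData: f_p := f
        self.local = {k: shared[k] for k in self.shared_keys}
        self.mode = "localEmulation"

    def local_write_step(self):         # LocalWriteStep: update only local copies
        for loc, val in self.pgm(self.local):
            self.local[loc] = val
            if loc in self.shared_keys:
                self.global_upd.append((loc, val))  # INSERT(updData, GlobalUpd)
        self.mode = "writeBack"

    def write_back(self, shared):       # WriteBack: copy recorded updates out
        for loc, val in self.global_upd:
            shared[loc] = val
        self.global_upd = []
        self.mode = "interactive"

shared = {"x": 1}
a = Agent(lambda loc: [("x", loc["x"] + 10)], {"x"})
a.read_step(shared)
shared["x"] = 7                         # another agent interleaves here
a.local_write_step()
a.write_back(shared)
```

After the write-back, `shared["x"]` is 11: the agent computed with the value it read earlier, exactly the asynchrony the three-step emulation makes explicit.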
Ambient ASMs
- Syntax extension: `amb exp in P` is an ASM program
- for each expression and ASM program `P`
- Function classification extension:
\[
\text{AmbDependent}(f) \text{ iff } \text{forsome } e, e' \text{ with } e \neq e' \text{ forsome } x \ f(e, x) \neq f(e', x)
\]
Otherwise `f` is called `AmbIndependent`.
- `eval` extension by ambient parameter: see below
- Semantics extension:
to avoid a signature blow up by dynamic ambient nesting, we treat `amb` as a stack where new ambient expressions are `pushed` (passed by value):
\[
\text{Yields}(\text{amb } exp \text{ in } P, S, env, amb, U) \text{ if } \text{Yields}(P, S, env, \text{PUSH}(\text{eval}(exp, S, env, amb), amb), U)
\]
Copyright CC BY–NC-SA 4.0
evaluation function extension by ambient parameter:
Case AmbDependent(f):
\[ \text{eval}(f(t_1, \ldots, t_n), S, \text{env}, \text{amb}) = \]
\[ f_S(\text{amb}, \text{eval}(t_1, S, \text{env}, \text{amb}), \ldots, \text{eval}(t_n, S, \text{env}, \text{amb})) \]
Case AmbIndependent(f):
\[ \text{eval}(f(t_1, \ldots, t_n), S, \text{env}, \text{amb}) = \]
\[ f_S(\text{eval}(t_1, S, \text{env}, \text{amb}), \ldots, \text{eval}(t_n, S, \text{env}, \text{amb})) \]
NB. Since often the interpretation \text{env} of free variables is omitted, the \text{ambient} is called the \text{environment}.
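A minimal interpreter sketch (our encoding, not from the slides) illustrates the two `eval` cases: ambient-dependent functions receive the current ambient stack as an extra argument, while `amb exp in P` pushes the value of `exp` before evaluating the body:

```python
# Terms are tuples: ('const', v) or ('app', f, args); state maps each
# function name to (callable, amb_dependent_flag). All names are ours.

def eval_term(term, state, amb):
    kind = term[0]
    if kind == "const":
        return term[1]
    _, f, args = term
    vals = [eval_term(t, state, amb) for t in args]
    fn, amb_dependent = state[f]
    if amb_dependent:
        return fn(amb, *vals)       # Case AmbDependent(f): extra amb argument
    return fn(*vals)                # Case AmbIndependent(f)

def in_amb(exp, state, amb, body):
    """amb exp in P: evaluate the body with eval(exp) pushed on the stack."""
    return body(state, amb + [eval_term(exp, state, amb)])

# 'owner' depends on the innermost ambient; 'plus' does not.
state = {
    "owner": (lambda amb, x: (amb[-1], x), True),
    "plus": (lambda a, b: a + b, False),
}
result = in_amb(
    ("const", "threadA"), state, [],
    lambda st, amb: eval_term(
        ("app", "owner", [("app", "plus", [("const", 1), ("const", 2)])]),
        st, amb))
```

Here `result` is `("threadA", 3)`: the ambient-independent `plus` ignores the stack, while `owner` reads the innermost pushed ambient.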
References
- E. Börger and A. Raschke: Modeling Companion for Software Practitioners. Springer 2018
http://modelingbook.informatik.uni-ulm.de
It is permitted to (re-) use these slides under the CC-BY-NC-SA licence
https://creativecommons.org/licenses/by-nc-sa/4.0/
i.e. in particular under the condition that
- the original authors are mentioned
- modified slides are made available under the same licence
- the (re-) use is not commercial
VyZX: A Vision for Verifying the ZX Calculus
Adrian Lehmann*
University of Chicago
adrianlehmann@uchicago.edu
Ben Caldwell*
University of Chicago
caldwellb@uchicago.edu
Robert Rand
University of Chicago
rand@uchicago.edu
Optimizing quantum circuits is a key challenge for quantum computing. The PyZX compiler broke new ground by optimizing circuits via the ZX calculus, a powerful graphical alternative to the quantum circuit model. Still, it carries no guarantee of its correctness. To address this, we developed VyZX, a verified ZX-calculus in the Coq proof assistant. VyZX provides two distinct representations of ZX diagrams for ease of programming and proof: A graph-based representation for writing high-level functions on diagrams and a block-based representation for proving ZX diagrams equivalent. Through these two different views, VyZX provides the tools necessary to verify properties and transformations of ZX diagrams. This paper explores the proofs and design choices underlying VyZX and its application and the challenges of verifying a graphical programming language.
1 Introduction
As quantum computers transition from fiction to a feature of our daily lives, there has been a surge of interest in quantum optimizers [21, 23, 25, 16, 1, 29]. The goal of a quantum optimizer is to reduce the number of bottlenecks in a quantum circuit, whether those be two-qubit gates in the near term or T gates in the longer term. Many of these optimizers do some form of model checking [23, 29] or translation validation [25, 16] to gain confidence that their optimizations are correct, out of awareness that bugs in quantum optimizers are both common and costly [16]. Of particular note, the VOQC compiler [14] is fully verified in the Coq proof assistant, guaranteeing that its optimizations preserve the semantics of the original quantum circuit.
Unfortunately, the quantum circuit model has many weaknesses, particularly from an optimization perspective. Quantum circuits come equipped with a variety of different gate sets: A good optimizer for the Clifford+T gate set is not guaranteed to perform well on IBM’s gates or Google’s. They are also rigid: They consist of a large sequence of vertically and horizontally ordered gates, whereas an optimizer only cares about the connections between gates. In this spirit, Kissinger and van de Wetering developed PyZX [16], an optimizer for the ZX calculus [5], a graphical language for quantum computing in which only connectivity matters [8]. Like prior tools PyZX checks for correctness by translation validation, either converting diagrams to their underlying linear maps in NumPy and checking if all elements of the linear map are equal up to a global nonzero scalar, or “optimizing” a circuit concatenated with its adjoint, and checking to see if it returns the identity. Unfortunately, these methods are slow and are not guaranteed to succeed as showing circuit equivalence is known to be QMA-complete in the general case [15].
Drawing inspiration from VOQC and PyZX and the verified classical compiler CompCert [19], we present VyZX, a formalization of the ZX calculus in the Coq [9] proof assistant. VyZX is intended to be a fully verified implementation of the PyZX compiler and a platform for mechanized reasoning about the ZX calculus and related graphical calculi. Given the versatility of the ZX-calculus, VyZX should allow us to tackle correctness issues in a range of domains, including lattice surgery [4], circuit simulation [18], and natural language processing [7].
* Equal contribution
© A. Lehmann, B. Caldwell & R. Rand
This work is licensed under the
Creative Commons Attribution License.
Unfortunately, while “only connectivity matters” is an excellent slogan for a graphical language, it poses significant challenges for formal verification. Computers do not talk in pictures; internally, they impose a rigidity akin to that of the circuit model. Even the standard representations of graphs, like adjacency lists and adjacency matrices, are ill-suited to inductive reasoning of the sort Coq excels in. To address this, we formalize two views of ZX diagrams: block representation, in which wires and nodes are composed horizontally and vertically, and graph representation, which is more faithful to the standard representation of ZX diagrams.
We explore the motivation for creating \( \text{VyZX} \) in Section 2. We lay out the design decisions underlying \( \text{VyZX} \) and discuss their potential in Section 3. There we also cover the inductive definition for ZX diagrams, how we apply semantics to them, and how we prove equivalence of two diagrams. We discuss how to convert from standard quantum circuits to our inductive diagrams (Section 3.5) and how we can view our inductive diagrams as graphs (Section 4). We sketch out a path from our current formalization of the ZX calculus to a full-fledged quantum optimizer that is integrated with the \( \text{VOQC} \) compiler and conclude with the many potential use-cases for \( \text{VyZX} \) in Section 5. All code referenced in this paper can be found at \( \text{https://github.com/inQWIRE/VyZX} \).
2 Verified Optimization and the ZX-Calculus
2.1 Verified Optimization
Quantum circuit optimizers take varied approaches to verify that their optimizations are well behaved. Some compilers, such as the one created by Nam et al. [21], solely rely on unit testing to ensure correctness, though unit testing can only show the presence – not absence – of bugs. Quantinuum’s \( \text{t|ket} \) [24] goes a step further and uses a Hoare logic system to check that certain postconditions, such as “the circuit contains no wire swaps”, hold for various optimization passes. This is a useful system for a quantum optimizer to have, but checking that the postcondition holds does not guarantee that the optimization has returned an equivalent circuit to the input circuit. This gap in how we can verify quantum circuit optimizations has been filled by a few other compilers.
The quantum compilers \( \text{CertiQ} \) [23] and Quartz [29] attempt to alleviate this concern by adding systems to check for circuit equivalence. They use these systems throughout their development process to check optimizations. To check equivalence, they generate some proof obligations as SMT formulas and pass them along to Z3. This goes beyond compilers like \( \text{t|ket} \) as it actually attempts to validate the optimization. The key feature of compilers like Quartz and \( \text{CertiQ} \) is that they automate this validation of optimization passes. While this is a valuable way to make it easy to engineer new optimization passes, it is incomplete. Not every optimization can be verified by an SMT solver, and compilers using such solvers hence include certain optimizations without validation. Such partial coverage undermines the guarantee the validation is meant to provide. If we want to verify the optimizer completely, we need a stronger system that allows us to prove the correctness, not just pass it to an SMT solver.
The flaws of existing compilers inspired the development of the \( \text{VOQC} \) verified compiler [14]. \( \text{VOQC} \) uses \( \text{SQIR} \) [13] and QuantumLib [26] to provide a full-stack verification pipeline. All three libraries above are written in the Coq proof assistant, providing them with strong correctness guarantees. \( \text{VOQC} \) ingests \( \text{SQIR} \) circuits from \( \text{QASM} \) [10], then applies optimization passes that are proven correct at compile time in Coq, and hence do not require any automation as we have seen with \( \text{CertiQ} \). Upon completion \( \text{SQIR} \) circuits are converted back into \( \text{QASM} \). \( \text{SQIR} \) can also handle multiple different gate sets, which are related to a base gate set that is used for proof.
2.2 ZX calculus
In contrast to the circuit model used by VOQC and other optimizers, ZX diagrams are a graphical representation of quantum operations. The ZX calculus [6] uses such diagrams together with a set of rewrite rules to manipulate quantum operations. Fundamentally, ZX diagrams are graphs with green and red nodes\(^1\), called Z and X spiders, with \textit{in} inputs and \textit{out} outputs, along with a rotation angle \(\alpha \in [0, 2\pi)\). If the rotation angle is 0 it can be omitted. The semantics of Z spiders and X spiders are shown in Figure 1.

Figure 1: Z and X spiders with their standard bra-ket semantics.
These spiders are connected through edges. Edges can either be regular edges or Hadamard edges, which are represented as dotted lines in diagrams and implicitly add a Hadamard gate on their path. These Hadamard edges can be treated as syntax for regular edges with three nodes, see Figure 5. For more on the ZX-calculus, we recommend John van de Wetering’s excellent survey of the topic [27].
2.3 ZX Calculus Optimization
Our work is inspired by PyZX's optimizations, which are based on work by Duncan et al. [11] and Kissinger and van de Wetering [17]. Duncan et al. describe how to optimize ZX diagrams with graph-theoretic rules. They begin by outlining a restriction on ZX diagrams, called \textit{graph-like form}. Such diagrams are restricted to having only one kind of node (the Z spider), one kind of edge (the Hadamard edge), no self-loops, and the condition that each node has a unique parent (i.e., either only one input or output). They show that one can freely convert between unrestricted ZX diagrams and graph-like ZX diagrams with equivalent semantics. Using graph-like form, they use graph operations, such as local complementation and pivoting, to reduce the number of Clifford gates (corresponding to nodes containing angles that are multiples of \(\pi/2\)) in their circuits by combining nodes.
Since the algorithm above does not reduce the number of non-Clifford gates in a circuit, Kissinger and van de Wetering [17] devised an optimization strategy for ZX calculus to reduce the T-gate count, \(T\) gates being the most expensive operations for error-corrected quantum computers. To achieve this, they remove any non-Clifford angles from spiders by splitting them into so-called \textit{phase gadgets}, as shown in Figure 2. Then, the resulting Clifford gates are optimized as previously described. Finally, phase gadgets are fused back into the diagram resulting in an optimized diagram. In merging multiple phase gadgets' spiders, the phase gadgets themselves can be merged, creating angles that are multiples of \(\pi/2\) that can be fused into the Clifford part of the diagram, reducing the number of non-Clifford gates. With these optimizations, the ZX-based optimizer PyZX achieves state-of-the-art performance [16] on Maslov’s reversible benchmark suite [20].
VOQC and PyZX both stand out as significant quantum optimizers for their use of formal verification and the ZX-calculus, respectively. With VyZX, we set out to combine these two ideas into one quantum circuit optimizer.
---
\(^1\)We chose accessible shades of green and red for this paper (see https://zxcalculus.com/accessibility.html)
3 VyZX
In designing VyZX, we wanted to make it easy to write recursive or inductive functions over diagrams. In a proof assistant like Coq, inductive structures allow for inductive proofs. Having the ability to use inductive proofs was a core goal for our definition as it would greatly simplify proofs. For inspiration, we looked at diagrams for symmetric monoidal categories as described by Selinger [22]. We reduced the basic requirements for our string diagrams to:
1. The unit object, which is the empty diagram,
2. The single wire,
3. Morphisms, which take \( n \) inputs to \( m \) outputs,
4. Braids, which swap two wires,
5. Sequential composition, which composes two diagrams in sequence, and
6. Tensor products, which arrange two diagrams in parallel.
These core objects give us our base language, consisting of a set of base morphisms, an empty diagram, a way to swap wires around, and the ability to compose diagrams in sequence and parallel. When we wish to apply this to the ZX calculus, we must decide what our language’s signature will be. This signature constitutes the morphisms of our string diagram. For the ZX calculus, a simple signature could include just the Z and X spiders. We also include caps and cups in our signature to make diagrams easier to write, which are standard morphisms for string diagrams. The building blocks for creating string diagrams inductively are given in Figure 3, which we expand upon to build different representations for ZX diagrams.

Figure 3: From left to right, the braid, sequential composition, tensor product, cap, and cup for symmetric monoidal string diagrams.
3.1 Block Representation ZX Diagrams
Our first goal with VyZX is to create diagrams that can be used for proof in Coq. We accomplish this by giving an inductive definition for ZX diagrams. We refer to this representation as block representation ZX diagrams in reference to how they may be stacked together and how stacks can line up with one another. Each ZX diagram holds information about how many inputs and outputs it has, allowing us to define composition in a way that matches the outputs and inputs of two diagrams through a Compose constructor. We also have a Stack operation that places one diagram on top of another. Our base constructors are the Z_Spider, X_Spider, Cap, Cup, Swap, and Empty diagrams. The type of a ZX diagram is given by \( \text{ZX} : \mathbb{N} \rightarrow \mathbb{N} \rightarrow \text{Type} \) and the constructors are given by Figure 4. These eight constructors allow us to write simple recursive functions and inductive proofs over ZX diagrams while allowing us to describe arbitrary diagrams. Graphically, these constructors correspond to the diagrams seen in Figure 3 with the addition of the Z and X spiders shown in Figure 1.
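A hypothetical Python mirror of these eight constructors (the real definition is the Coq inductive type \( \text{ZX} : \mathbb{N} \rightarrow \mathbb{N} \rightarrow \text{Type} \); all names and the runtime arity check below are ours) illustrates how the input/output indices propagate through Stack and Compose:

```python
# Illustrative Python counterpart of the block-representation constructors.
from dataclasses import dataclass

@dataclass
class ZX:
    """A diagram tagged with its number of inputs and outputs."""
    n_in: int
    n_out: int
    kind: str
    args: tuple = ()

def Empty():                    return ZX(0, 0, "Empty")
def Z_Spider(n_in, n_out, a):   return ZX(n_in, n_out, "Z_Spider", (a,))
def X_Spider(n_in, n_out, a):   return ZX(n_in, n_out, "X_Spider", (a,))
def Cap():                      return ZX(0, 2, "Cap")
def Cup():                      return ZX(2, 0, "Cup")
def Swap():                     return ZX(2, 2, "Swap")

def Stack(zx1, zx2):
    # Parallel placement: arities add.
    return ZX(zx1.n_in + zx2.n_in, zx1.n_out + zx2.n_out, "Stack", (zx1, zx2))

def Compose(zx1, zx2):
    # Sequential composition ZX n m -> ZX m o -> ZX n o; Coq enforces this
    # in the type, here we check it at runtime.
    if zx1.n_out != zx2.n_in:
        raise ValueError("Compose: output arity of zx1 must equal input arity of zx2")
    return ZX(zx1.n_in, zx2.n_out, "Compose", (zx1, zx2))

# A Bell-pair-like diagram: a cap fed into two parallel 1->1 Z spiders.
bell = Compose(Cap(), Stack(Z_Spider(1, 1, 0.0), Z_Spider(1, 1, 0.0)))
```

The resulting `bell` has type "0 inputs, 2 outputs", and an arity mismatch in `Compose` is rejected, mimicking the dependent indexing of the Coq definition.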
3.2 Semantics of diagrams
Given our inductive definition for ZX diagrams, we can write a simple function (Algorithm 1) for computing the semantics of a given ZX diagram. We use the matrix definition given in QuantumLib [26] to compute the semantics. In QuantumLib a matrix is simply a function with type $\mathbb{N} \rightarrow \mathbb{N} \rightarrow \mathbb{C}$ that takes in a row and column index and returns the associated complex number. First, we define the $Z$ and $X$ spider semantics, as stated in Section 2.2, letting $\times$ be matrix multiplication\(^2\), $I_{m \times n}$ be the $m$ by $n$ identity matrix, $H$ be the Hadamard matrix, $\otimes$ the Kronecker product, and $H^{\otimes n}$ be the $n$th power of $H$ with the Kronecker product. With spider semantics complete, we define our other base constructors, stacks and composes using the Kronecker and matrix products.
3.3 Proportionality of diagrams
Trivially, two syntactically equal diagrams are equal. To make useful statements, however, we require a notion of semantic equivalence. Intuitively, one might define that as equivalence of matrices produced by $\text{ZX\_semantics}$ (as shown in Algorithm 1). In the ZX calculus, though, we only care about equivalence up to multiplication by a constant factor, as rules will introduce constant factors and we are able to rebuild any constant factor if necessary using ZX constructions [27].
Within VyZX we define a relation called *proportional* and give it the notation $\propto$. We say $\text{zx}_1 \propto \text{zx}_2$ if there is a non-zero complex number $c$ such that $\text{ZX\_semantics} \text{zx}_1 = c \times \text{ZX\_semantics} \text{zx}_2$. We prove that $\propto$ is an equivalence relation as we might expect. We then prove that our composition operators respect proportionality: That is, if $\text{zx}_1 \propto \text{zx}_1'$ and $\text{zx}_2 \propto \text{zx}_2'$ then $\text{Compose} \text{zx}_1 \text{zx}_2 \propto \text{Compose} \text{zx}_1' \text{zx}_2'$ and $\text{Stack} \text{zx}_1 \text{zx}_2 \propto \text{Stack} \text{zx}_1' \text{zx}_2'$. We add this fact to Coq as a *parametric morphism*, allowing us to rewrite using our equivalences even inside a broader diagram. With proportionality defined, we proceed to verify different rewrite rules within the ZX calculus by proving their diagrams to be proportional.
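As an executable illustration (the actual definition is a Coq relation over QuantumLib matrices; the function below is our NumPy sketch), proportionality up to a nonzero scalar can be checked numerically:

```python
# Sketch: a ~ b iff a = c * b for some nonzero complex c.
import numpy as np

def proportional(a, b, tol=1e-9):
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    if a.shape != b.shape:
        return False
    # Fix the candidate scalar c at a largest-magnitude entry of b.
    idx = np.unravel_index(np.argmax(np.abs(b)), b.shape)
    if abs(b[idx]) < tol:
        # b == 0: any nonzero c works iff a == 0 as well.
        return bool(np.allclose(a, 0, atol=tol))
    c = a[idx] / b[idx]
    return bool(abs(c) > tol and np.allclose(a, c * b, atol=tol))
```

For example, `diag(1, i)` and `diag(i, -1)` are proportional (with `c = -i`), while `diag(1, 1)` and `diag(1, 2)` are not.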
\(^2\)We chose to align our notation with QuantumLib rather than mathematical convention
Algorithm 1 ZX Diagram Semantics
```
function Z_SPIDER_SEMANTICS(in, out, α)
    return the 2^out × 2^in matrix with 1 in the top-left entry,
           e^{iα} in the bottom-right entry, and 0 elsewhere
    ⇒ Equivalent to |0⟩⋯|0⟩⟨0|⋯⟨0| + e^{iα}|1⟩⋯|1⟩⟨1|⋯⟨1|

function X_SPIDER_SEMANTICS(in, out, α)
    return H^⊗out × Z_SPIDER_SEMANTICS(in, out, α) × H^⊗in
    ⇒ Equivalent to |+⟩⋯|+⟩⟨+|⋯⟨+| + e^{iα}|−⟩⋯|−⟩⟨−|⋯⟨−|

function ZX_SEMANTICS(zx : ZX in out)
    switch zx do
        case Empty
            return I₁
        case Swap
            return [1, 0, 0, 0; 0, 0, 1, 0; 0, 1, 0, 0; 0, 0, 0, 1]
        case Cap
            return [1, 0, 0, 1]ᵀ
        case Cup
            return [1, 0, 0, 1]
        case Z_Spider in out α
            return Z_SPIDER_SEMANTICS(in, out, α)
        case X_Spider in out α
            return X_SPIDER_SEMANTICS(in, out, α)
        case Stack zx₁ zx₂
            return ZX_SEMANTICS(zx₁) ⊗ ZX_SEMANTICS(zx₂)
        case Compose zx₁ zx₂
            return ZX_SEMANTICS(zx₂) × ZX_SEMANTICS(zx₁)
```
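Algorithm 1 can be prototyped numerically. The sketch below (our NumPy illustration, not VyZX's Coq code; all names are ours) implements the two spider semantics and checks that the 1→1 spiders with angle π are the Pauli Z and X gates:

```python
# NumPy prototype of the spider semantics from Algorithm 1.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron_pow(m, n):
    """n-fold Kronecker power, with the 1x1 identity for n = 0."""
    out = np.eye(1, dtype=complex)
    for _ in range(n):
        out = np.kron(out, m)
    return out

def z_spider(n_in, n_out, alpha):
    """2^out x 2^in matrix: 1 in the top-left entry, e^{i alpha} bottom-right."""
    m = np.zeros((2 ** n_out, 2 ** n_in), dtype=complex)
    m[0, 0] = 1
    m[-1, -1] += np.exp(1j * alpha)   # += so the 0-leg spider gives 1 + e^{ia}
    return m

def x_spider(n_in, n_out, alpha):
    """X spider as H^{⊗out} x Z x H^{⊗in}, following Algorithm 1."""
    return kron_pow(H, n_out) @ z_spider(n_in, n_out, alpha) @ kron_pow(H, n_in)

# Sanity checks: with alpha = pi, the 1->1 spiders are the Pauli gates.
assert np.allclose(z_spider(1, 1, np.pi), [[1, 0], [0, -1]])
assert np.allclose(x_spider(1, 1, np.pi), [[0, 1], [1, 0]])
```

`Stack` then corresponds to `np.kron` and `Compose zx₁ zx₂` to the reversed matrix product, exactly as in the algorithm.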
3.4 Proving the Correctness of the ZX-Calculus
We can now show the rules used to manipulate and simplify VyZX diagrams. For readability, we show most rules in graph representation.
Common gates We translated common gates from the circuit model to the ZX-calculus and proved their semantic correctness, shown in Figure 5.
Figure 5: Quantum gates represented in the ZX calculus. Note that the H node omits its label and is commonly used in ZX diagrams as syntactic sugar for the rotations above.
Stack & Compose distribute Sequential composition and stacking distribute as long as the individual diagrams have compatible dimensions by the rules stated in Section 3.1. Figure 6 shows this property visually. This fact is central to proving statements in block representation as it enables the diagram’s structure to be changed while keeping the components the same.
Bi-Hadamard color changing We define a color-swapped ZX diagram as a ZX diagram with the same structure but with all Z spiders being replaced by X spiders (while keeping the angle) and vice versa; all other constructions such as Cap, Cup, and Swap do not change [8]. Henceforth, we denote the color-swapped version of a ZX diagram $zx$ as $\oplus zx$. For a given spider, one can swap the spider’s color while keeping the angle by composing a stack of Hadamards onto the inputs and outputs. This “bi-Hadamard” construction is shown in Figure 7a. Further, we see this holds for all other non-compositional ZX diagram components (SWAPs, Caps, and Cups) since they do not have color and do not cause rotation. We go on to show that, in fact, the bi-Hadamard rule is true for all color-swapped ZX diagrams, as shown in Figure 7b.
Color swapping Using the previous fact about bi-Hadamard constructions, we prove that if a rule can be applied to a ZX diagram $zx_1$ transforming it into $zx_2$, then it can be applied to the color swapped diagram of $zx_1$ transforming it into the color swapped diagram of $zx_2$. With this fact in mind, we only show one color configuration for any rule, understanding that it applies to the color-swapped version. In practice, this allows us to prove any rule only for one color configuration and get the color-swapped lemma for free. Since many proofs are computationally expensive, this greatly speeds up verification.
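As an illustrative sanity check (ours, outside VyZX), the bi-Hadamard rule can be verified numerically for a single spider by comparing H^{⊗out} ∘ Z ∘ H^{⊗in} against the direct bra-ket semantics of the X spider from Figure 1:

```python
# Numeric spot-check of the bi-Hadamard rule for one 2->3 spider; names are ours.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def kron_pow(m, n):
    out = np.eye(1, dtype=complex)
    for _ in range(n):
        out = np.kron(out, m)
    return out

def ket_pow(v, n):
    out = np.array([1.0 + 0j])
    for _ in range(n):
        out = np.kron(out, v)
    return out

def z_spider(n_in, n_out, alpha):
    m = np.zeros((2 ** n_out, 2 ** n_in), dtype=complex)
    m[0, 0] = 1
    m[-1, -1] += np.exp(1j * alpha)
    return m

def x_spider_direct(n_in, n_out, alpha):
    """|+...+><+...+| + e^{i a}|-...-><-...-| (Figure 1; |+>, |-> are real,
    so the un-conjugated outer product is safe here)."""
    return (np.outer(ket_pow(plus, n_out), ket_pow(plus, n_in))
            + np.exp(1j * alpha)
            * np.outer(ket_pow(minus, n_out), ket_pow(minus, n_in)))

alpha = 0.7
lhs = kron_pow(H, 3) @ z_spider(2, 3, alpha) @ kron_pow(H, 2)
assert np.allclose(lhs, x_spider_direct(2, 3, alpha))
```

The check passes because H maps |0⟩, |1⟩ to |+⟩, |−⟩, so surrounding the Z spider with Hadamards yields exactly the X-spider semantics with the same angle.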
Bi-algebra rule The bialgebra rule, while not intuitive, is crucial for many ZX proofs as it allows for the rearranging of edges between nodes [27]. Unlike the rules covered so far, it requires proof at the matrix level and hence has a computationally expensive proof. Once proven, however, it can be applied repeatedly, and since it rearranges edges it can be used to prove many further facts.
Hopf rule The Hopf rule, like the bialgebra rule, deals with the interaction between Z and X spiders. It says that two edges between an X and a Z spider can be removed. Intuitively, this tells us that no matter how much “information” we know about the X basis, given by our input, we do not get any information about the orthogonal Z basis [27]. In practice, the Hopf rule allows us to disconnect specific nodes in the diagram instead of changing their connection. As with the bi-algebra rule, this rule is proven directly on matrices and requires computation, though since the intermediate matrices are smaller this proof is less computationally expensive.
**Bi-\(\pi\) rule** Any X spider is equal to itself surrounded by Z rotations by \(\pi\). Intuitively this rule is true due to the orthogonal nature of the X and Z bases and the fact that we are performing a full rotation in total. A corollary of this rule is the \(\pi\)-copy rule [27], as shown and derived in the figure on the right. The \(\pi\)-copy rule is not yet implemented, given that it semantically builds on adjacency rather than a block-like construction, since any input or output could be the one with the \(\pi\) spider on it. Generally speaking, rules that modify an arbitrary subset of the inputs and outputs are hard to represent in block representation and are left to be proven in graph representation. Since the \(\pi\)-copy rule is what is usually presented in the literature, the bi-\(\pi\) rule is an interesting case study of a rule tailored to block representation that is equivalent to a more traditional rule.

**Figure 8:** The \(\pi\)-copy rule derivation using the bi-\(\pi\) rule
**Spider fusion/splitting** One of the most important rules is that spiders connected by an arbitrary (non-zero) number of edges can be fused into a single node with the angles added [27]. This rule is shown in Figure 9a. Further, the reverse is true: any spider can be split such that the two new spiders add to the original angle. A corollary of this is that we can split phase-gadgets (as discussed in Section 2.3) off nodes.
Since this rule fundamentally works based on adjacency rather than block representation construction, we have not fully implemented it at the time of writing. We plan to follow with the general fusion rule, using the graph representation described in Section 4. We do, however, have a restricted version proven where two nodes are connected to each other by a single wire, which will form the basis of the general spider fusion proof. Figure 9b shows this restricted version. We implemented this by using the bra-ket semantics of spiders, as shown in Figure 1, instead of the direct matrix semantics shown in Algorithm 1. This allows us to use QuantumLib’s algebraic rewrites of complex matrices to combine the angles easily and shows why having both versions of our semantics is valuable for proof. Thanks to the algebraic rewrite, this rule is computationally very efficient.

(a) The spider fusion rule: Two connected nodes fused into a new node with added angles.
(b) A restricted version of spider fusion currently supported by VyZX
Figure 9: Spider fusion rule
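The restricted fusion rule of Figure 9b can be checked directly on the matrix semantics: a 1-input, 1-output Z spider denotes diag(1, e^{iα}), so sequential composition multiplies the diagonal entries and the angles add. A minimal Python sketch (helper names are ours, not VyZX's):

```python
import cmath

def z_rot(alpha):
    # matrix of a 1-input, 1-output Z spider: diag(1, e^{i alpha})
    return [[1, 0], [0, cmath.exp(1j * alpha)]]

def compose(first, second):
    # run `first`, then `second` (i.e., the matrix product second @ first)
    return [[sum(second[i][k] * first[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

a, b = 0.7, 1.9
seq = compose(z_rot(a), z_rot(b))
fused = z_rot(a + b)
assert all(abs(seq[i][j] - fused[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The Coq version reaches the same conclusion symbolically, via QuantumLib's rewrite e^{ia}·e^{ib} = e^{i(a+b)}, which is why no matrix computation is needed there.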
### 3.5 SQIR Ingestion
We want VyZX to be able to ingest arbitrary circuits. To achieve this, VyZX reads in circuits written in SQIR, an intermediate representation for the VOQC compiler embedded in Coq. By choosing SQIR, we maintain interoperability with another verified compiler. SQIR represents circuits with \(q\) qubits as compositions of arbitrary \(x, y, z\) rotations acting on a qubit \(n\) and CNOTs between arbitrary qubits \(n, m\), so this transformation and its verification are not trivial.
Circuit ingestion works by using arbitrary swaps, meaning that instead of only having swap gates that swap two adjacent qubits, we have swap gates able to swap any two qubits\(^3\). To construct an arbitrary swap, we first build a construction to shift qubit 1 to \(n\), whereby all other qubits are shifted up. We also build the inverse (i.e., shifting qubit \(n\) to 1). Figure 10a shows this construction. With this we can easily construct our arbitrary SWAP gate (shift 1 to \(n\), then \(n-1\) to 1). The discussion below will be in block representation, which defines swaps (see Section 3.1) of two adjacent qubits as the composition of 3 CNOTs.
Using arbitrary swaps and shifts, we can now interpret any wire crossing. Hence, in block representation we can construct a CNOT acting on two qubits \(n, c\) by swapping one qubit next to the other, applying the CNOT, and swapping back, as shown in Figure 10c. To convert such a circuit, the ingestion function turns the rotations \(x, y, z\) into the construction shown in Figure 10b. Composition of SQIR terms is likewise represented by composition in our block representation IR.
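At the level of wire permutations, the shift-and-swap construction can be sketched in a few lines of Python (helper names are ours; we track only where each wire ends up, not the full unitaries): composing adjacent transpositions one way shifts wire 1 down to position n, composing them the other way shifts position n-1 back up to 1, and the composite is exactly the transposition exchanging wires 1 and n.

```python
def adjacent_swap(n, i):
    # transposition of wire positions i and i+1 on n wires (0-indexed)
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return p

def then(p, q):
    # apply permutation p, then q: wire x ends up at position q[p[x]]
    return [q[p[x]] for x in range(len(p))]

def shift_first_to_last(n):
    # bubble the first wire down to the last position via adjacent swaps
    p = list(range(n))
    for i in range(n - 1):
        p = then(p, adjacent_swap(n, i))
    return p

def shift_penultimate_to_first(n):
    # bubble the wire now at the next-to-last position back up to the front
    p = list(range(n))
    for i in range(n - 3, -1, -1):
        p = then(p, adjacent_swap(n, i))
    return p

def arbitrary_swap(n):
    # "shift 1 to n, then n-1 to 1": the composite is the transposition (1 n)
    return then(shift_first_to_last(n), shift_penultimate_to_first(n))

# e.g. on 4 wires the first and last wire trade places, the middle stay put:
# arbitrary_swap(4) == [3, 1, 2, 0]
```

In the verified development, of course, each adjacent swap is itself the 3-CNOT block, so the same bookkeeping is carried out at the level of diagram semantics.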
4 Graph Representation
As discussed in Section 3.1, we can view diagrams in both block representation and graph representation. In graph representation, we construct ZX diagrams solely based on node adjacency. This representation brings some interesting properties that will allow us to optimize diagrams or prove further rules (as mentioned in Section 3.4). We can see that any graph representation diagram can be deformed arbitrarily, as long as inputs and outputs are kept in order. Figure 11 illustrates this: Here, we see two ZX diagrams equal up to deformation.
\(^3\)For the implementation we chose to have an IR with first-class arbitrary swaps that will then be translated into our base block representation IR (preserving semantics) using the following constructions.
One of our future goals is to add a graph representation for ZX diagrams, allowing us to act on diagrams. Creating a verified semantics for graph representation remains a work in progress since, fundamentally, this requires topologically-sorted graph traversal, which is hard to implement in proof assistants. Converting graph representation diagrams into block representation is also challenging; if that is overcome, we could give graph representation semantics in terms of the corresponding block representation diagrams. Converting *to* graph representation is thus a central problem, and we have devised a method for it, outlined in the following.
Before converting into graph representation, we convert ZX diagrams into a restricted yet just as universal form; this form is similar to Duncan et al.'s [11] graph-like form (as described in Section 2.3), differing only in the existence of self-loops and of two or more possible links between spiders. The restrictions are as follows:
1. Only Hadamard edges
2. Only one type of spider (Z spiders)
3. Every spider has one input and zero to two outputs or vice-versa
Restrictions 1 & 2 make our graph more conventional (and akin to graph-like diagrams) by having plain nodes and edges. Restriction 3, however, is in place for ease of proving. To divide the proofs into logical steps, separate intermediate representations build up the three restrictions one at a time; these IRs are mostly transparent, though developers can choose to use the less restricted forms.
Given our restricted form, our algorithm proceeds as follows: a procedure numbers all ZX diagram components (i.e., the constructors) with a unique integer. It then numbers the edges of components as follows: each component with $n$ inputs and $m$ outputs produces a pair of lists of sizes $n$ and $m$, where every position in a list indicates the closest fundamental component (spider or cap/cup). A non-stacking/composing component with $n$ inputs, $m$ outputs, and node number $x$ returns lists of sizes $n$ and $m$ with all entries equal to $x$.
When stacking two diagrams, the procedure concatenates respective lists, and when sequencing diagrams, it carries forward the outer lists. Figure 12 shows an example of this process. It is important to note that we treat caps and cups like spiders at this stage. Once the edge numbering is complete, we
**Algorithm 2** block representation to graph representation conversion
```plaintext
function NUMBERNODES(zx)
    assignFreshNumber(zx)
    if zx = Stack zx1 zx2 OR zx = Compose zx1 zx2 then
        NumberNodes(zx1)
        NumberNodes(zx2)
function NUMBEREDGES(zx)
    if zx = Stack zx1 zx2 then
        (in1, out1) = NumberEdges(zx1)
        (in2, out2) = NumberEdges(zx2)
        return (in1 ++ in2), (out1 ++ out2)
    else if zx = Compose zx1 zx2 then
        (in1, _) = NumberEdges(zx1)
        (_, out2) = NumberEdges(zx2)
        return in1, out2
    else
        return NodeNumber(zx), NodeNumber(zx)
function CREATEEDGES(zx)
    if zx = Compose zx1 zx2 then
        (_, out1) = GetEdgeNumbers(zx1)
        (in2, _) = GetEdgeNumbers(zx2)
        for (u, v) ∈ zip(out1, in2) do
            AddEdge(u, v)
        end for
    if zx = Compose zx1 zx2 OR zx = Stack zx1 zx2 then
        CreateEdges(zx1)
        CreateEdges(zx2)
end function
```
will traverse the diagram once more, and at every Compose we match the output edge numbers of the left diagram with the input edge numbers of the right diagram, marking each such pair as an edge. Looking at our example in Figure 12, we see wires labeled (1, 2), (3, 2), and (3, 4) bridging the main composition. We use the information from the algorithm to infer which inputs/outputs of the diagram are connected to which node by looking at the outermost annotation: in our example, input 1 is connected to node 1, input 2 to node 3, output 1 to node 2, and output 2 to node 4. **Algorithm 2** shows a pseudocode description of these processes.
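A simplified executable rendering of this numbering may help. The miniature AST and names are ours; unlike Algorithm 2, this sketch numbers only the fundamental components and records cross-composition edges inside the numbering pass instead of in a separate CreateEdges traversal.

```python
from dataclasses import dataclass
from itertools import count

# Hypothetical miniature AST -- just enough for the numbering pass.
@dataclass
class Spider:
    n_in: int
    n_out: int
    nid: int = -1   # filled in by number_nodes

@dataclass
class Stack:
    top: object
    bottom: object

@dataclass
class Compose:
    first: object
    second: object

def number_nodes(zx, counter):
    # give every fundamental component a unique integer
    if isinstance(zx, Spider):
        zx.nid = next(counter)
    elif isinstance(zx, Stack):
        number_nodes(zx.top, counter)
        number_nodes(zx.bottom, counter)
    else:
        number_nodes(zx.first, counter)
        number_nodes(zx.second, counter)

def number_edges(zx, edges):
    # return (input labels, output labels); record cross-composition edges
    if isinstance(zx, Spider):
        return [zx.nid] * zx.n_in, [zx.nid] * zx.n_out
    if isinstance(zx, Stack):
        i1, o1 = number_edges(zx.top, edges)
        i2, o2 = number_edges(zx.bottom, edges)
        return i1 + i2, o1 + o2
    i1, o1 = number_edges(zx.first, edges)
    i2, o2 = number_edges(zx.second, edges)
    edges.extend(zip(o1, i2))   # match left outputs with right inputs
    return i1, o2

# A small example (not the paper's Figure 12):
diagram = Compose(Stack(Spider(1, 2), Spider(1, 1)),
                  Stack(Spider(2, 1), Spider(1, 1)))
number_nodes(diagram, count())
edges = []
ins, outs = number_edges(diagram, edges)
# edges == [(0, 2), (0, 2), (1, 3)]; ins == [0, 1]; outs == [2, 3]
```

The outermost pair (ins, outs) plays the role of the "outermost annotation" above: it says which node each global input and output attaches to.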
## 5 Future Directions
### 5.1 Circuit extraction
Once we have converted our block representation diagrams to graph representation diagrams, we will extract these graph representation diagrams to SQIR circuits. To accomplish this, we plan to follow Backens' extraction work [3]. Our system should allow us to define a notion of *gflow*. This graph-theoretical property is sufficient for extracting ZX diagrams into circuits, which will be a valuable tool to verify that optimizations do not break extractability. Circuit extraction will complete the core of VyZX, as we will then be able to ingest circuits, write functions over them, and extract the circuits back to SQIR.
Once we have verified optimizations and extraction, we will be able to pursue several interesting projects, including optimization and simulation. As we share base libraries with VOQC, the natural next step is to integrate VyZX with VOQC properly.
5.2 VOQC integration
Once we complete graph representation conversion and circuit extraction, we plan to build an optimization pass using PyZX-like optimizations, as described in Section 2.3. Instead of building a standalone optimizer, we plan on integrating our work into VOQC. Since we can ingest from (and later extract to) SQIR, we have a common IR that will allow us to have a wholly verified pipeline and interoperate with ease. It will be interesting to see whether non-ZX optimizations combined with ZX optimizations yield a benefit. VOQC has the advantage of a complete interface that allows for pass selection, allowing us to expose the ZX-based optimizations to users. Furthermore, integration into VOQC will allow us to benchmark our optimizer against other state-of-the-art optimizers like PyZX [16] and Quartz [28].
5.3 The ZH and the ZW Calculi
VyZX’s design is focused exclusively on the ZX calculus, but the principles described here could easily be applied to other similar calculi such as the ZW calculus [12] or ZH calculus [2], which have broad applications to quantum communication and the description of quantum oracles, respectively. In fact, block representation diagrams let us easily translate between various graphical calculi. The translation can be verified in Coq by a simple inductive proof over the signature of the calculus and our shared string diagram constructions. It may be of interest to develop additional optimizations based on these different calculi, and a small extension to VyZX could allow such optimizations to be verified. With the different IRs we have right now, we are confident that extensions of the calculus are easy to integrate.
6 Conclusion
VyZX is a formal verification framework for the ZX-calculus. Creating VyZX came with several challenges that are unique to verifying a graphical language. Finding a way to easily assign semantics to a graph is inherently difficult due to traditional graph structures not being idiomatic in proof assistants. Block representation provides an inductive description of a graph that allows for easy proof while preserving the expressiveness that graphs provide. As block representation made it challenging to write diagrams, we developed a graph representation that can act as a way to implement programs over ZX diagrams. With these two views in place, we proved several core ZX diagram equivalences and added circuit ingestion from SQIR. With all these tools in place, we believe VyZX has a future as a basis for writing programs over ZX diagrams. In particular, our next step is to build a verified circuit optimizer in the style of PyZX. The core of VyZX will continue to be improved as we approach new problems in implementing programs such as a verified circuit simulator. We are confident that with VyZX’s evolution, it will provide a robust foundation for future work on the verification of graphical quantum calculi and their applications.
References
IT Ecosystems: A new Paradigm for Engineering Complex Adaptive Software Systems
Andreas Rausch, Jörg P. Müller, Dirk Niebuhr, Sebastian Herold
Technische Universität Clausthal
Email: firstname.lastname@tu-clausthal.de
Ursula Goltz
Technische Universität Braunschweig
Email: goltz@ips.cs.tu-bs.de
Abstract—Today’s software-intensive systems are among the most complex artifacts created by humankind. This is due to ever increasing requirements and functionality of the software on the one hand, and to rising structural complexity with respect to size, interconnectedness, and distribution on the other hand. Engineering and controlling these systems pushes existing software engineering approaches to (and beyond) their limits [1].
This paper describes the concept of IT ecosystems as a new approach for addressing this challenge from the perspective of software engineering. The concept and approaches described were developed in a large interdisciplinary research project (www.it-oekosysteme.org); we present first results including a validation scenario of a smart airport, which has been devised and implemented in the project, aiming at a comprehensive approach to IT ecosystems engineering.
I. INTRODUCTION
Software now pervades all areas of work and society. Public administration, management, organization and production companies as well as day-to-day personal life are no longer conceivable without the use of software. Software-controlled devices can be found in every household.
The continuous increase in size and functionality of software-intensive systems [1] have now made them among the most complex man-made systems [2]. The reasons for the steady increase in their complexity are twofold: On the one hand, the set of requirements imposed on software-intensive systems becomes larger and larger; the extrinsic complexity increases. This includes, for example, features, depth of functionality, adaptability, and variability. On the other hand, the structures of software-intensive systems, e.g., in terms of size, scope, distribution and networking of the system, are themselves becoming more complex; this leads to an increase in the intrinsic complexity of the system.
The expectations in software-intensive systems have been growing and continue to do so with their steadily increasing penetration into people’s private, social, and professional lives. Buyers and users of these systems expect:
• A significantly higher flexibility, adaptability, intuitive usability and timely response to changes in both the software system itself as well as in the processes for the expected life cycle and demands.
• A high degree of reliability (Dependability [3]) of the software system and the surrounding development, operation, and administration processes.
In the long run, the continuously growing complexity of software-intensive systems and the rising user expectations have led to a situation where the classical methods and techniques of computer science reach their limits. As an analogy, consider the field of classical engineering: there, a single (even large) building can still be planned, explained, and implemented centrally; however, the planning, establishment, and development of a city must be performed using very different methods and models. Similarly, computer science is facing a paradigm shift in the mechanisms required to develop and control software-intensive systems.
To react to this challenge, in this paper we put forward the proposal to interpret software-intensive systems as part of a larger IT ecosystem, thus leading a step in the direction of such necessary paradigm shift. In the NTH School for IT Ecosystems (www.it-oekosysteme.org), we are involved in a comprehensive research program on concepts, architectures, platforms and tools to enable and support this paradigm shift.
The main contributions of the paper are the definition of a conceptual model of an IT ecosystem, and the specification of two scenarios: a general system scenario addressing some generic important properties of IT ecosystems, and a specific validation scenario, a smart airport. The paper is structured as follows: Basic characteristics of IT ecosystems are defined in Section II. Section III discusses the conceptual core components of the IT ecosystems paradigm in more detail. Two validation scenarios are defined and discussed in Section IV. The paper ends with a conclusion and outlook to future research in Section V.
II. CHARACTERISTICS OF IT ECOSYSTEMS
IT ecosystems are a class of systems that obey certain characteristics and fulfill certain requirements. In analogy to the concept of an ecosystem in biology, IT ecosystems achieve reliability by means of some higher-level regulation system, through which they maintain equilibrium between the forces applied by the participating individuals. The balance between controllability of the whole ecosystem and the autonomy of the system participants is the key characteristic of an IT ecosystem. When this balance is disturbed, the IT ecosystem breaks and is no longer manageable. For an IT ecosystem to remain active and continuously evolve, we must understand this balance and the mechanisms necessary to achieve and maintain it.
A number of key research questions need to be addressed in this context: What kind of systems can be regarded as useful IT ecosystems? How can you recognize an IT ecosystem, i.e., how can you decide for a given system whether it is (or: should be conceived as) an IT ecosystem? Obtaining systematic and scientifically-based answers to these questions is ultimately a goal of NTH School for IT Ecosystems.
IT ecosystems are complex adaptive Systems of Autonomous Systems, i.e., complex system compounds consisting of interacting autonomous individual systems, which are adaptive as a whole based on local adaptiveness (see the outer ring in Figure 1). This means that not every large system can be considered an IT ecosystem: the complexity of the interactions within the compound, and the adaptivity that results from the autonomy of the individual systems, are central. The paradigm must also take the different life cycles of the individual systems into account.
Herein lies an important difference from the traditional understanding of hierarchical systems: A hierarchical system consists of subsystems, whose interactions tend to be globally predictable, controllable, and designable. An IT ecosystem is composed of individual autonomous systems whose behavior and interactions change over time. These changes are usually not centrally planned, but arise from independent processes and decisions within and outside the IT ecosystem.
In addition, IT ecosystems are mixed human-machine artifacts: humans in the IT ecosystem (see the inner ring in Figure 1) interact with the individual systems; this way, humans become an integral, active part of the IT ecosystem, and their requirements, goals, and behavior must be considered by modeling them as autonomous system components. Humans act as users, administrators, and operators within the IT ecosystem. The complex and multifaceted interaction between people and the individual systems of an IT ecosystem is a key aspect; only by including it can a holistic approach be established. The requirements, needs, and expectations humans place on the individual systems of an IT ecosystem are subject to special dynamics. Thus, the individual systems need to change continuously to meet changing demands and adapt to changing human behavior; conversely, changing expectations of humans will create new demands and needs.
III. IT ECOSYSTEM AS A PARADIGM OF AUTONOMY AND CONTROLLABILITY
As previously discussed, an IT ecosystem is made up of autonomous individual systems designed to interact with each other. Except for these interactions, the individual systems are considered as closed systems that can be created with the classical methods of software development and validation. However, in doing so, adaptivity, evolution and autonomy must already be considered. The individual systems themselves may consist of subsystems or components, or are used as sensors, actuators, or the interface to a physical environment.
The system compound as a whole can no longer be described and controlled by using classical methods. In addition to the complexity caused by the size of the system compound and its adaptability due to the autonomy of individual systems and their different life cycles (see Section II), humans are considered as a part of the IT ecosystem, too. The resulting IT ecosystem can be described and understood only by taking a holistic view. This is a necessary condition for the controllability of the overall system. However, this holistic approach leads to a very complex system with a high degree of autonomy, which in turn makes it difficult to control.
This leads us to a dilemma: In order to control the system, we need to look at it holistically; doing so increases the degree of autonomy, which in turn reduces controllability. To solve this dilemma, we must turn to the notions of autonomy and controllability in the IT ecosystem.
We distinguish three levels of autonomy in an IT ecosystem (see the middle ring in Figure 1). It should be noted that the higher the degree of autonomy, the stronger the human is involved in this autonomy:
1) **Adaptation** is an ability of an IT ecosystem: They provide mechanisms to ensure their short-term autonomy. By adaptation, we mean the property of the compound system and its autonomous individual systems to reconfigure and reorganize themselves in order to fulfill context-sensitive tasks in the system. Adaptation is therefore the short-term and often pre-planned capability of individual components and their interaction to adapt. Here, primarily the functionality is concerned, as shown in Figure 1 in the middle ring; adaptation is often achieved by modifying component configurations – parameters are set which alters the functional behavior of system components.
Individual systems may consist of (semi-)autonomous components, which we refer to as agents [4]. Adaptivity is necessary because the tasks to be performed vary greatly, and may be in conflict with other tasks, or because availability or functionality of the agents changes. These tasks are performed in a (semi-)autonomous manner through coordination or cooperation between the agents [5]. During task performance, plans are made and plan status and viability must be constantly checked. If deviations are detected, it may be necessary to reschedule [6].
2) **Modification** is the ability of an IT ecosystem to provide short- and medium-term structural adaptability, which is grounded in the autonomy of the individual systems. IT ecosystems are open and dynamic systems: new components and individual systems may enter the system, possibly with unknown interface structure and behavior. Already known components and individual systems may change their behavior or leave the IT ecosystem. Thus, through modification, functionality and structure can be adapted to new requirements and constraints, similar to what is proposed in autonomic computing [7] or organic computing [8], respectively. Subsystems are dissolved and new components are added. In short, modification is extended adaptation, since it includes both functional and structural changes, as shown in the middle ring of Figure 1. Humans, too, can enter or leave the IT ecosystem, as may other physical objects carried by humans, such as hardware components.
3) **Evolution** is the ability of an IT ecosystem to develop under changing external conditions in the medium- to long-term, and to sustainably reveal autonomous behavior. It includes the fundamental long-term development of the IT ecosystem in all its aspects, in particular through change and adaptation of monitoring, configuration, and control mechanisms, including structural and functional aspects. Evolution in terms of IT ecosystems, thus includes adaptation and modification. Evolution extends these concepts by the capability of changing the rules in the
IT ecosystem itself (see Figure 1, middle ring). Therefore, implementing evolution as manual, computer-supported, or (partially) automated further development of the IT ecosystem poses the biggest challenge with respect to long-term control and controllability. Evolution will be triggered by sustainable changes in environmental conditions or by fundamental changes in the expectations of users and operators of the IT ecosystem. It can be driven by human operators and users, but it also needs to be partly or fully automated in some cases. Evolution can mean either that the management, control and regulatory mechanisms are altered, or that individual components or entire systems are replaced or modified.
These three levels of autonomy are required from all the participants in the IT ecosystem. However, at the same time care needs to be taken that the IT ecosystem as a whole remains under control and thus ensures its superordinated goal and function. For this to be achieved, the participating autonomous systems and components accept a set of general rules, conceivable e.g., as traffic rules in the context of a smart city, to ensure the proper functioning of the entire system.
A key aspect in the explanation of the above effects and interrelationships is the question of the existence of a balanced equilibrium between autonomy and controllability of the IT ecosystem as shown in Figure 2. The IT ecosystems approach is based on:
- the existence of equilibrium states, i.e., states that ensure a smooth functioning of the overall system (e.g., the flow of traffic in a city)
- forces that aim at disturbing the balance (such as a rear-end collision on a major road junction), and
- centralized or decentralized mechanisms to maintain or re-establish the balance (such as a central traffic control system or communications and distributed intelligence of vehicular navigation systems).
If equilibrium states can be established permanently, we have achieved the goal of providing desirable autonomy, while at the same time ensuring controllability.
To ensure controllability, IT ecosystems must feature a set of concepts outlined in Figure 2:
1) Communities of autonomous systems and individual players form themselves dynamically. An essential feature of these communities is a set of common and jointly accepted functional objectives. Individual systems and components can simultaneously be members of several communities. These communities may change over time and dissolve, and new ones can be created. This is part of the adaptation of functionality in the IT ecosystem (cf. Figure 1).
2) Structures required for organizing and implementing the functional objectives of the community form dynamically. These structures define roles, responsibilities, communication channels, and interaction mechanisms in the communities. Like the communities themselves, organization structures can also change, thus leading to a modification of the structures in the IT ecosystem (see Figure 1).
3) Commonly accepted rules govern the behaviour and interactions of communities and their organizational structures. Control within IT ecosystems (in the sense of ensuring adherence to these rules) can be realized by different means. An important approach in this context are electronic institutions [9]. The rules in the IT ecosystem can be changed through the concept of evolution. As shown in Figure 3, institutions should ensure the management, control, and regulation mechanisms (the basic rules within the organizational structure of communities) in the IT ecosystem. These mechanisms can be explicit, e.g., centralized or federated via dedicated components, or implicit, e.g., realized by market mechanisms or the local incentive and preference structures of individual systems or components, in order to achieve a specific system behavior.
The concept of equilibrium in IT ecosystem enables us to provide mechanisms for control, monitoring, and regulation, and to ensure rule compliance via electronic institutions. In case these rules are violated, the adaptation, modification, and evolution mechanisms provided by the IT ecosystem can re-establish the balance. Based on these mechanisms, equilibrium concepts are defined and approaches to detection, prevention and treatment of disorders in the IT ecosystem are described and implemented.
IV. IT ECOSYSTEMS SCENARIOS
The above properties allow an assessment of the applicability and usefulness of the IT ecosystem metaphor for different system scenarios. However, these criteria are not entirely clear-cut. There is a gray area here: consider, for example, a large automated high-bay warehouse that includes autonomous vehicles transporting goods as well as merchandise orders and deliveries. Is such a system an IT ecosystem or not?
Answering this question requires a detailed analysis of system scenarios. In the following, we therefore illustrate and study the notion of IT ecosystems by means of two scenarios: a generic application system / infrastructure scenario (Section IV-A), and a specific instance of the generic scenario describing a smart airport (Section IV-B).
A. System scenario: Application System + System Infrastructure
A compound system includes an application system that uses functions of an underlying infrastructure system. Application system and infrastructure system are developed and/or operated largely independently from each other, by different organizations. Changes in the infrastructure system lead to disruptions, faults, and subsequent need for action in the application system. Conversely, changes in the application system will place new demands on and require adjustments to the infrastructure.
Consider, for example, a PC-application environment based on the MS Windows operating system in a large company. Here, it is required to provide a number of individual servers, such as mail server, calendar management, web server, database server, or workflow services, in order to support corporate functions and processes. This system is thus a compound system. Humans are part of the system and interact (as customers or employees) with the individual systems. Furthermore, the system is capable of adaptation: it performs load balancing and coordinates user requests in order to deal with hot spots. The system will undergo modification: employees can join or leave the system, e.g., with mobile devices. The system must integrate new types of services to provide the enterprise with new functionality.
Finally, evolution also takes place in this system, e.g., via automatic mechanisms for updating individual systems. The evolution and emergence of new individual systems can produce requirements for adapting, modifying, or evolving other component systems and the infrastructure. For example, the integration of a high-definition video conferencing system requires the support of real-time transport protocols by the operating system as well as an upgrade of the corporate network to Gigabit LAN.
Required guarantees can be provided in this system, e.g., via access rights (the access of certain users is restricted to parts of the system) or via service level contracts and corresponding service level enforcement mechanisms that assure users of certain functions in a certain quality. In a company such as Deutsche Börse, mechanisms are established, e.g., to recognize and regulate irregular behavior such as panic selling. Since this system scenario has all the aforementioned characteristics, such systems are IT ecosystems.
B. Validation Scenario: Smart Airport
The second scenario, which we propose to validate approaches for IT ecosystems, is much more specific. It describes an exemplary sequence of events on a usual day at an airport like Frankfurt Airport\(^1\). We assume that an IT ecosystem is established at this airport, consisting of several IT components and subsystems. We will accompany Bob, Anna, and Chris during a journey to show the benefits they would gain from an IT ecosystem. We have developed a demonstrator which will enable the scenario presented here to show the impact of our research results. Figure 4 illustrates the systems being part of the overall IT ecosystem application scenario.

In the scenario the protagonists Bob, Anna and Chris use small devices called SmartFolks. SmartFolks can be imagined as devices with some computing power like PDAs. The SmartFolks themselves represent their owners within the IT ecosystem and act as an interface to the IT ecosystem.
1) (Journey to the Airport). While the first protagonist named Anna is leaving her home, her SmartFolk reminds her as she closes the door that she forgot some things. Due to sensors in the drawer of her desk the SmartFolk detects that she left her identity card there and reasons that both her passport and travel documents are there too. The sensor system is able to work with all kinds of objects Anna has defined in her reminder list. On the way to her car she remembers that she wanted to buy some sunglasses. After a quick look at her wristwatch she decides to catch up on it at the airport and adds the glasses to the SmartFolk’s shopping list.
2) (Parking at the Airport). The flight itinerary is available on Anna’s SmartFolk. As she is on her way to the airport the SmartFolk guides her to a parking lot conveniently located to her departure terminal. The airport system takes care that not all SmartFolk users are transferred to the same free parking lot and that they will have free access route. Anna chooses a different parking lot than the suggested one; the system recognizes the discrepancy and asks Anna to give reasons for that. Anna gives the feedback that she chose a parking lot in the shadow as it is a very sunny day.
3) (Traffic Accident). Chris is also driving to the airport while a traffic accident occurs near his current location, blocking the entrance to one of the parking garages. Observation systems, e.g., SmartCameras, integrated in the car and in the airport infrastructure notice the accident and send a distress signal to the Traffic Management Center (TMC). The information is broadcasted and spread amongst other system components. After the TMC has received and processed the message, it reacts by adjusting and redirecting traffic. Chris, located near the accident, follows the new directions stated by his navigation system and arrives at a different parking garage.
4) (Orientation). Upon arrival at the airport, the SmartFolk leads Anna to a nearby SmartBase. SmartBases are displayless and interfaceless sources of information spread across the airport. Compared to classical InfoKiosk or PointOfSale systems, SmartBases need much less and simpler components leading to lower costs, less energy usage and more resilience against vandalism. The user interface for accessing the information is provided by Anna’s SmartFolk which communicates wirelessly with a SmartBase. Not all SmartBases are connected to a backbone network, some use some form of energy harvesting instead. The SmartBases hold a plethora of information: Duty formalities, real estate offers, classifieds, flight&train schedules, etc. While Anna is accessing information relevant to her, the SmartFolk also downloads additional bits of information. This “parasitic” information will be automatically uploaded to other SmartBases as Anna passes them. After some time the SmartFolk will silently delete the “parasitic” information based on expiry criteria.
5) (Transportation Request). At an entrance of the airport, Anna requests transportation using her SmartFolk and waits for an autonomous transportation vehicle (SmartTransport), to bring her to the designated check-in desk. However, at the same time, several large groups of travelers arrive at the train and bus station near Anna’s entrance and are moving towards her position. She does not know that, at this moment, most SmartTransports are at a location far away from this entrance,
and, by coincidence, the majority also report a low battery power level and need to visit a recharge station. Noticing the growing crowd of travelers at her location, Anna is surprised that, after a short while, a sufficient number of SmartTransports arrives to cope with the waiting passengers.
6) (Shopping during Waiting Time). Anna has noted on her shopping list that she needs sunglasses. While she is at the airport, the SmartFolk compares the entries on her shopping list with proposals made by shops near Anna. The sensors detect that Anna is either on the escalator or on the moving walkway. The SmartFolk offers two possibilities for the next steps: either go shopping and then eat something, or the other way around. Both possibilities are suggested via video, and Anna can choose the option according to her preferences. The feedback from interactions like these with several SmartFolk users is evaluated statistically. Bob, another SmartFolk user at the airport, never reacts to the advertising of duty-free shops. Upon an interactive request he responds that, being on business trips, he has no time to go shopping. Because this is also mentioned by other people, the SmartFolk developers integrate a new rule into the system: for traveling businessmen, the way to duty-free shops is not considered.
7) (Waiting Time, Goods Transport). While Anna is still waiting for the check-in, she observes the autonomous transport and delivery of goods to a nearby airport shop. Several transport vehicles have to pass a narrow opening along their way concurrently causing a small congestion. The vehicles organize and coordinate themselves, so the waiting time is spread evenly among them.
8) (Check-in). Now Anna joins the queue for the check-in desk, but a tourist party blocks her way. Fortunately, she arrived early and is therefore not in a hurry.
9) (Baggage Drop-off). At the check-in desk Anna asks herself how her baggage will be transported over the airport. This is done by an autonomous transportation service. SmartTransports of different sizes perform this task by self-organization. The baggage items must be carried between different locations in the airport like check-in desks, baggage security check stations, start and landing zones of airplanes, etc. Additionally, there are observation systems (e.g., SmartCameras, sensors, RFID readers) placed around the area, which gather and provide information (e.g. the current traffic volume), changing requirements or arising disturbances. This information is used by the SmartTransports (in terms of self-organization and interaction) to optimize transportation.
10) (Waiting Time). After checking in, Anna is bored waiting for her flight. She walks around the airport hall and passes some info points placed around the airport. One of them displays ideas for improving the check-in devices and provides the possibility to add one's own ideas. Watching some clips by other passengers, Anna gets a better idea: with the help of her handbag, Anna reenacts putting luggage on a conveyor below the check-in counter instead of lifting it. In the past she was often annoyed by this issue. With her SmartFolk Anna films her action and then sends the clip to the info point. After a specific period of time, the developers of the check-in devices download the passengers' ideas from the info points and thus gain proposals for improvement.
11) (Passport Check). Now Anna decides to go to the gate of her flight. To reach this area she has to pass the passport check, where she holds her passport beneath a small device. The turnstile before her is released, and Anna passes the check point. In a queue beside her, Anna notices another traveler having some problems; after his third illegal try, an alarm sounds and a security man comes along.
12) (Waiting Time). After passing the security check Anna has to wait an hour until boarding. In order to use the waiting time meaningfully, she decides to search for more information concerning her travel destination. The SmartFolk recommends sights and presents photos along her travel route. Pictures are partly from public sources (e.g., www.flickr.com) and partly from passengers currently arriving from there. As participants do not want to share their private photos, intimate pictures are not sent to Anna. With this information Anna gets a good overview of the sights she definitely wants to see.
13) (Boarding). After some time of waiting, Anna boards the airplane. Due to the dimensions of the airport, she has to take another SmartTransport from the gate to her plane. As previously stated in step 3, the airport contains a TMC for traffic management and control inside the airport (the norms and additional traffic rules must be defined by the TMC, which can be considered an "Organization"). After Anna's airplane has taken off, a broken autonomous vehicle or an obstacle blocking the first established route is detected by the SmartCameras installed on the bus and around the airport.
14) (Departure, Travel Time, Returning). During Anna’s journey the airport system is enhanced whereas the system architecture and the application itself are maintained. Amongst others, an update to the rule base is installed: No advertisements for duty-free shops are displayed to traveling salesmen except this person is inside the shop or has enough time (see Step 6). While Anna and Bob are traveling, Chris returns from his journey, where he bought a newly developed SmartFolk. Now he is curious whether the developers did a good job and whether the new device smoothly integrates with the airport IT ecosystem.
15) (Catastrophe). A catastrophe exercise was conducted and filmed by the security cameras. The participants were interviewed afterwards as to whether the existing system acted as they expected. One criticized aspect was that participants who wanted to rescue victims were evacuated first and afterwards had to go in again. After the analysis, the application was enhanced according to the participants' needs. The new version of the SmartFolk is extended by an evacuation application. In case of a catastrophe, only the evacuation application is available. This application provides two configurations: the Evacuation and the Helper configuration. A SmartFolk user now has the possibility to choose between two rescue-relevant configurations: she can select the Evacuation configuration if she wants to save her own life, and the Helper configuration if she decides to save the lives of as many people as possible. Then a catastrophe occurs at the airport: a plane crashes into the waiting hall of Terminal A and a fire breaks out. All software agents located at the airport are informed; the SmartFolk provides the evacuation application. Chris is close to the waiting hall of Terminal A. His new SmartFolk offers him the two configuration possibilities provided by the evacuation application: the first is to get information regarding his own evacuation, the second is to help injured persons. Chris decides to help injured people and is directed to the first patient.
16) (At the train station). In the meantime, Bob wants to catch his connecting train as fast as possible. The system detects who has to depart first and ensures, by means of verification, the minimal property that nobody misses his train. While the crowd around Bob starts moving, the SmartFolk calms Bob down and informs him that he still has enough time until his train arrives.
17) (Return Journey). As Bob’s train enters the station his SmartFolk recognizes the new context and shifts its environment profile from “silent” to “mobile”, i.e., the vibration alarm is activated and the volume of the ring tone is increased.
The presented scenario can be seen as an IT ecosystem. Adaptation in the abovementioned sense happens, for example, when the TMC directs traffic, or when Anna's SmartFolk connects to the airport's IT system after arrival. The available set of components running on a SmartFolk is modified in the case of accidents (only the evacuation application is available). Furthermore, evolution happens when the basic rule for non-shopping businessmen is added.
Note that the proposed approach of IT ecosystems uses this airport scenario as an example only and does not compete with approaches dealing with airports or single aspects of these complex systems [10]–[16]. On the contrary, because this domain has been investigated very well, the situations in which certain levels of autonomy and controllability are required are well understood, but not yet supported by a systematic (and domain-independent) engineering approach.
V. CONCLUSION AND OUTLOOK
In this paper, we have defined a new approach towards complex, software-intensive systems: IT ecosystems. Our approach combines the systems-of-systems view as defined, e.g., by [17] with research on ultra-large scale systems [1], but extends these approaches by the use of the multiagent systems metaphor [6] in order to express autonomy and decentralized control. Also, by acknowledging the role of the human as part of the IT ecosystem, our approach opens up new research avenues linking control theory and software engineering with human-machine interaction and psychology. Last but not least, our model borrows notions of balance and equilibria from biologically inspired ecosystems research. The main contributions of the paper are a conceptual model of an IT ecosystem, and the specification of two scenarios: a general system scenario addressing some generic important properties of IT ecosystems, and a specific validation scenario, a smart airport.
A current limitation of our work lies in the somewhat restricted suitability of the currently considered airport scenario as regards its IT ecosystem characteristics. In the terminology of Maier [17], this scenario can be best classified as belonging to the simplest class of complex systems of systems, so-called directed systems, in which an overarching system purpose exists and the complex system is built and controlled to this purpose, with very limited long-term evolution. So far, we have not yet investigated more open, complex, and evolutionary system types, such as collaborative or virtual systems, and corresponding scenarios, which remain areas of future work. Still, as the scenario description shows, even the smart airport contains considerable complexity worth investigating, mainly introduced by including humans in different roles as part of this IT ecosystem. Future work will apply the concepts developed in this paper to other domains, including urban traffic management [18] and social network engineering.
VI. ACKNOWLEDGMENTS
This publication was supported by the NTH Focused Research School for IT Ecosystems. Many of the thoughts included in this article originate from discussions with professors and scientific staff in the NTH Focused Research School for IT Ecosystems. Our special thanks go to Ingrid Schmees and Sandra Lange for their assistance in the editorial revision of the article.
REFERENCES
Structured Synchronous Reactive Programming for Game Development
Case Study: On Rewriting Pingus from C++ to Céu
Francisco Sant’Anna
Department of Computer Science
Rio de Janeiro State University (UERJ)
Rio de Janeiro, Brazil
francisco@ime.uerj.br
Abstract—We present a qualitative case study of rewriting the video game Pingus from C++ to the structured synchronous reactive language Céu. Céu supports reactive control-flow primitives that eliminate callbacks and let programmers write code in direct and sequential style. Structured reactivity helps describing complex control-flow relationships in the game logic more concisely. We show gains in productivity for four behaviors in Pingus through a qualitative analysis of the proposed implementations in Céu in comparison to the originals in C++. We also categorize the behaviors in recurrent control-flow patterns that likely apply to most games.
Keywords—Control Flow; Event-Driven Programming; Game Logic; Synchronous Reactive Programming
I. INTRODUCTION
Pingus is an open-source puzzle-platform video game based on Lemmings. The objective of the game is to guide a group of penguins through a number of obstacles towards a designated exit (Figure 1). Pingus is developed in standard object-oriented C++, “the lingua franca of game development” [14]. The codebase is about 40,000 lines of code (loc), divided into the engine, level editor, auxiliary libraries, and the game logic itself.
According to Tim Sweeney (of Unreal Engine fame), about half the complexity in game development resides in simulation (aka game logic), which however only accounts for 10% of the CPU budget [24]. The high development costs contrasting with the low impact on performance call for alternatives with productivity in mind, especially considering that it is the game logic that varies the most between projects. Sweeney states that “we will gladly sacrifice 10% of our performance for 10% higher productivity”.
Object-oriented games rely on the observer pattern [14] to handle events from the environment (e.g., key presses and timers) and also as a notification mechanism between entities in the game logic. The observers are short-lived callbacks that have to execute as fast as possible to keep the game reactive to incoming events in real time. For this reason, callbacks cannot use long-lasting locals and loops, which are elementary capabilities of classical structured programming [2, 12, 18]. In this sense, callbacks actually disrupt structured programming, becoming “our generation’s goto”.2
In this work, we advocate structured synchronous reactive programming (SSRP) as a more productive alternative for game logic development. We present a qualitative case study of rewriting Pingus from C++ to Céu. Céu [20] is an Esterel-based [5] programming language that originally targets embedded soft real-time systems. It aims to offer a concurrent, safe, and expressive alternative to C. SSRP lets developers write code in direct style, recovering from the inversion of control imposed by event-driven execution [2, 12, 18]. Céu supports logical parallelism with a resource-efficient implementation in terms of memory and CPU usage. The runtime is single threaded and does not rely on garbage collection for memory management [19]. Existing work in the context of embedded sensor networks evaluates the expressiveness of Céu in comparison to event-driven code in C and attests a reduction in source code size (around 25%) with a small increase in memory usage (around 5–10%) and comparable CPU responsiveness [19].
1Official Pingus repository: github.com/Pingus/pingus/
2“Callbacks as our Generations’ goto Statement”: tirania.org/blog/archive/2013/Aug-15.html
II. AN OVERVIEW OF CÉU
Céu is a synchronous reactive language in which programs evolve in a sequence of discrete reactions to external events, with the characteristics that follow:
- Reactive: code only executes in reactions to events.
- Synchronous: reactions run atomically and to completion on each line of execution, i.e., there’s no implicit preemption or real parallelism.
Céu is designed for control-intensive applications, supporting concurrent lines of execution, known as trails, and instantaneous broadcast communication through events. Céu provides an await statement that blocks the current running trail, allowing the program to execute its other trails; when all trails are blocked, the reaction terminates and control returns to the environment to process upcoming events. Listing 1 shows a compact reference of Céu.
Listing 2 shows a simple example in the context of embedded systems that blinks an LED every 500ms in the background and awaits either a button click or 1 hour to terminate.
Reactions in Céu are guaranteed to be computed in bounded time [23], ensuring that games progress with time.
III. THE PINGUS CODEBASE AND REWRITING PROCESS
In Pingus, the game logic accounts for almost half the size of the codebase: 18,173 of 39,362 locs (46%) spread across 272 files. However, about half of the game logic relates to non-reactive code, such as dealing with configurations and options, saved games and serialization, maps and level descriptions, string formatting, collision detection, graph algorithms, etc. This part remained unchanged in our rewrite and relies on the seamless integration between Céu and C/C++ [19]: the type systems are compatible and the integration happens at the source code level. This enables accessing data and calling C/C++ from Céu and vice-versa. Therefore, we only rewrote 9,186 locs spread across 126 files. In order to only consider relevant code in the analysis, we then removed all headers, declarations, trivial getters & setters, and other innocuous statements, resulting in 4,135 condensed locs spread across 70 implementation files originally written in C++. We did the same with the implementation in Céu, resulting in 3,697 condensed locs.
Figure 2 summarizes the effective game logic codebase in the two implementations.
Although the analysis in this work is qualitative, the rows with lower ratio numbers in Figure 2 do correlate with the parts of the game logic that we consider more susceptible to SSRP. For instance, the Pingu behavior (row 4, ratio 0.80) contains complex animations that are affected by timers, game rules, and user interaction. In contrast, the Option screen (row 9, ratio 0.97) is a simple UI grid with trivial mouse interactions.
The rewriting process consisted of identifying sets of callbacks in C++ implementing control flow in the game and translating them to Céu using appropriate structured constructs. As an example, a double mouse click is characterized by a first click, followed by a maximum amount of time, followed by a second click. This behavior depends on different events (clicks and timers) which have to occur in a particular order. In C++, the implementation involves callbacks crossing reactions to successive events which manipulate state variables explicitly. As a general rewriting rule, we identify control-flow behaviors in the C++ codebase by looking for class state members with identifiers resembling verbs, statuses, and counters (e.g., pressed, particle_thrown, mode, and delay_count). Good chances are that such variables encode some form of control-flow progression that crosses multiple callback invocations. Not all state follows these conventions, but they helped finding classes that are heavy on control flow quickly at the beginning of the process.
IV. CONTROL-FLOW PATTERNS & CASE STUDIES
During the rewriting process, we have identified four abstract cause/effect control-flow patterns which likely apply to most games:
1) Finite State Machines: Event occurrences lead to transitions between states and trigger actions comprising the behavior of a game entity.
2) Continuation Passing: The completion of a long-lasting activity in the game may carry a continuation, i.e., some action to execute next.
3) Dispatching Hierarchies: Entities form a dispatching hierarchy in which a container that receives a stimulus automatically forwards it to its managed children.
4) Lifespan Hierarchies: Entities form a lifespan hierarchy in which a terminating container entity automatically destroys its managed children.
We describe representative game behaviors in detail, distributed over the four patterns, and analyze their implementations in C++ and Céu.
A. Finite State Machines
Event occurrences lead to transitions between states and trigger actions comprising the behavior of a game entity.
1) Case Study: Detecting Double-Clicks in the Armageddon Button: In Pingus, a double click in the Armageddon button at the bottom right of the screen literally explodes all penguins.
Listing 3 shows the C++ implementation for the class ArmageddonButton with methods for rendering the button and handling mouse and timer events. The code in the figure focuses on the double click detection and hides unrelated parts with <...>. The methods update and on_click are examples of short-lived callbacks, which are pieces of code that execute atomically in reaction to external input events. The callback on_click reacts to mouse clicks detected by the base class RectComponent, while the callback update continuously reacts to the passage of time, frame by frame. The class first initializes the variable pressed to track the first click. It also initializes the variable press_time to count the time since the first click. If another click occurs within 1 second, the class signals the double click to the application. Otherwise, the pressed and press_time state variables are reset. Figure 3 illustrates how we can model the double-click behavior in C++ as a state machine. The circles represent the state of the variable pressed, and the arrows represent the callbacks manipulating it. Note in Listing 3 how the accesses to the state variables are spread across the entire class: the distance between the initialization of pressed and the last access to it is over 40 lines in the original file. Arguably, this dispersion of code across methods makes the
3 We used SLOCCount to count only non-blank, non-comment lines in the codebase: www.dwheeler.com/sloccount
4 Effective codebase: github.com/an000/p/tree/master
5 Double click animation: github.com/an000/p/#1
Table 3. CÉU/C++ Comparison
<table>
<thead>
<tr>
<th>Path</th>
<th>CÉU</th>
<th>C++</th>
<th>CÉU/C++ Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>game/</td>
<td>2064</td>
<td>2268</td>
<td>0.91</td>
</tr>
<tr>
<td>./</td>
<td>710</td>
<td>679</td>
<td>1.05</td>
</tr>
<tr>
<td>objs/</td>
<td>470</td>
<td>478</td>
<td>0.98</td>
</tr>
<tr>
<td>pingu/</td>
<td>884</td>
<td>1111</td>
<td>0.80</td>
</tr>
<tr>
<td>./</td>
<td>343</td>
<td>458</td>
<td>0.75</td>
</tr>
<tr>
<td>actions/</td>
<td>541</td>
<td>653</td>
<td>0.83</td>
</tr>
<tr>
<td>worldmap/</td>
<td>468</td>
<td>493</td>
<td>0.95</td>
</tr>
<tr>
<td>screens/</td>
<td>1109</td>
<td>1328</td>
<td>0.84</td>
</tr>
<tr>
<td>option/</td>
<td>347</td>
<td>357</td>
<td>0.97</td>
</tr>
<tr>
<td>others/</td>
<td>762</td>
<td>971</td>
<td>0.78</td>
</tr>
<tr>
<td>misc/</td>
<td>56</td>
<td>46</td>
<td>1.22</td>
</tr>
<tr>
<td>misc</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Figure 2. The Pingus codebase directory tree.
Listing 3. C++: Detecting double-clicks in the Armageddon button.
```cpp
 1  ArmageddonButton::ArmageddonButton(<...>) :
 2      RectComponent(<...>),
 3      pressed(false),   // button is not initially pressed
 4      press_time(0)     // how long since 1st click?
 5  {
 6      <...>
 7  }
 8
 9  void ArmageddonButton::draw (<...>) {
10      <...>
11  }
12
13  void ArmageddonButton::update (float delta) {
14      <...>
15      if (pressed) {
16          press_time += delta;
17          if (press_time > 1.0f) {
18              pressed = false;  // give up, 1st click was too long ago
19              press_time = 0;
20          }
21      } else {
22          <...>
23          press_time = 0;
24      }
25  }
26
27  void ArmageddonButton::on_click (<...>) {
28      if (pressed) {
29          send_armageddon_event();
30          <...>
31      } else {
32          pressed = true;
33      }
34  }
```
Listing 4. CÉU: Detecting double-clicks in the Armageddon button.
```ceu
 1  do
 2      var RectComponent but = <...>;
 3      <...>
 4      loop do
 5          await but.on_click;
 6          par/or do
 7              await 1s;
 8          with
 9              await but.on_click;
10              break;
11          end
12      end
13      <...>
14      emit game.armageddon;
15  end
```
Figure 3. State machine for detecting double-clicks in the Armageddon button.
level of the class, in which only instance members remain active between invocations. In particular, locals and loops cannot persist across invocations.
2) Summary & Pattern Uses in Pingus: In comparison to explicit state machines, the structured constructs of CÉU introduce some advantages as follows:
- They encode all states with direct sequential code, eliminating callbacks and shared state variables for control-flow purposes.
- They handle all states (and only them) in the same contiguous block, improving code encapsulation.
Object-oriented games also adopt the state pattern to model state machines with subclasses describing each possible state [14]. However, this approach is not fundamentally different from Pingus’ use of switch or if branches to decode state.
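This equivalence can be sketched in a few lines of C++ (hypothetical names, not taken from the Pingus codebase): both variants re-decode the current state on every callback, one through virtual dispatch and the other through an explicit branch over a state variable.

```cpp
#include <cassert>

// Hypothetical sketch: the GoF state pattern still decodes the current
// state on every callback, just through virtual dispatch instead of a
// switch over an explicit state variable.
struct State {
    virtual ~State() = default;
    virtual int next(int clicks) = 0; // handle one click, return click count
};

struct Idle  : State { int next(int clicks) override { return clicks + 1; } };
struct Armed : State { int next(int)        override { return 0; } };

// The branch-based version encodes exactly the same machine.
int next_switch(int state, int clicks) {
    switch (state) {
        case 0:  return clicks + 1; // idle: remember the first click
        default: return 0;          // armed: fire and reset
    }
}
```

Either way, the machine's structure lives in data (a state object or an integer) rather than in the control flow of the program, which is the root of the dispersion discussed above.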
In Pingus, the player may assign actions to specific penguins, as illustrated in Figure 4. The Bomber action explodes the clicked penguin, throwing particles around and also destroying the terrain under its radius. We model the explosion animation as a sequential state machine with effects associated with specific frames, such as playing a sound and throwing the particles. Pingus supports 15 other actions in the game. Five of them implement at least one state machine and are considerably smaller in CÉU in terms of locs (Figure 5). For the other 11 actions without state machines, the reduction in locs is negligible. This asymmetry illustrates the gains in expressiveness when describing state machines in direct style.
Among all 65 implementation files in CÉU, we found 29 cases in 25 files that use structured mechanisms to replace state machines. They typically manifest as await statements in sequence (e.g., ln 5,9 in Listing 4).
B. Continuation Passing
The completion of a long-lasting activity in the game may carry a continuation, i.e., some action to execute next.
1) Transition from Story to Credits to Worldmap Screen: The campaign world map has clickable blue dots at the two extremes of the map road that show the introductory and closing stories, respectively. For introductory stories, the game returns to the world map after showing the story pages. For closing stories, the game also shows a Credits screen before returning to the world map. From the click on the story dot until the return to the world map, the game animates the story with timer-based scrolling and also reacts to user input to advance it.
In C++, the class StoryDot in Listing 5 (ln 1–12) first reads the level file (ln 5) to check whether it is a closing story and should, after termination, show the Credits screen. The boolean variable show_credits (ln 2,5,10) is passed to the class StoryScreen (ln 10) and represents the screen continuation, i.e., what to do after showing the story. The class StoryScreen (not shown) then forwards the continuation even further to the auxiliary class StoryScreenComp (ln 16–40). When the method next_text has no story pages left to display (ln 32–38), it decides where to go next, depending on the continuation flag show_credits (ln 33).
In CÉU, the loop of Listing 6 controls the flow between the screens as a direct sequence of statements. We first invoke the Worldmap (ln 2), which shows the map and lets the player interact with it (e.g., walking around) until a dot is clicked. If the player selects a story dot (ln 4–9), we invoke the Story and await its termination (ln 5). After showing the story, we check the returned values (ln 6) to perhaps show the Credits screen (ln 8). The enclosing loop restores the Worldmap and repeats the process.
Figure 6 illustrates the continuation-passing style of C++ and the direct style of CÉU for screen transitions:
1) Main Loop → Worldmap:
- C++ uses an explicit stack to push the Worldmap screen (not shown in Listing 5).
- CÉU invokes the Worldmap screen expecting a return value (Listing 6, ln 2).
2) Worldmap (blue dot click) → Story:
- C++ pushes the Story screen passing the continuation flag (Listing 5, ln 10).
- CÉU stores the Worldmap return value and invokes the Story screen (Listing 6, ln 2,5).
3) Story → Credits:
- C++ replaces the current Story screen with the Credits screen (Listing 5, ln 34).
---
Figure 4. Assigning the Bomber action to a pingu.
Figure 5. CÉU vs. C++ locs for the actions implementing state machines.
<table>
<thead>
<tr>
<th>Action</th>
<th>CÉU</th>
<th>C++</th>
<th>Explicit State</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bomber</td>
<td>23</td>
<td>50</td>
<td>4 state variables</td>
</tr>
<tr>
<td>Bridger</td>
<td>75</td>
<td>100</td>
<td>2 state variables</td>
</tr>
<tr>
<td>Drown</td>
<td>6</td>
<td>15</td>
<td>1 state variable</td>
</tr>
<tr>
<td>Exiter</td>
<td>7</td>
<td>22</td>
<td>2 state variables</td>
</tr>
<tr>
<td>Splashed</td>
<td>6</td>
<td>19</td>
<td>2 state variables</td>
</tr>
</tbody>
</table>
---
Bomber action animation: github.com/an000/p/#2
Figure 6. Continuation (C++) vs Direct (CÉU) Styles.
Listing 5. C++: Transition from Story to Credits and Worldmap screen.
```cpp
 1  StoryDot::StoryDot(FileReader& reader) :
 2      show_credits(false), // do not show by default
 3      <...>
 4  {
 5      reader.read("credits", show_credits); // from file
 6      <...>
 7  }
 8
 9  void StoryDot::on_click() {
10      push(<StoryScreen>(show_credits));
11      <...>
12  }
13
14  // class separator //
15
16  StoryScreenComp::StoryScreenComp (<...>) :
17      show_credits(show_credits),
18      <...>
19  {
20      <...>
21  }
22
23  <...> // draw and update page
24
25  void StoryScreenComp::next_text() {
26      if (!displayed) {
27          <...>
28      } else {
29          <...>
30          if (!pages.empty()) {
31              <...>
32          } else {
33              if (show_credits) {
34                  replace(<Credits>(<...>));
35              } else {
36                  pop();
37              }
38          }
39      }
40  }
```
Listing 6. CÉU: Transition from Story to Credits and Worldmap screen.
```ceu
 1  loop do
 2      var int ret = await Worldmap();
 3      if ret==CREDITS or ret==BACK then
 4          <...>
 5          var bool is_click = await Story();
 6          if is_click and ret==CREDITS then
 7              <...>
 8              await Credits();
 9          end
10      else
11          <...>
12      end
13  end
```
- CÉU invokes the Credits screen after the await Story returns (Listing 6, ln 8).
4) Credits → Worldmap:
- C++ pops the Credits screen, going back to the Worldmap screen (not shown in Listing 5).
- CÉU uses an enclosing loop to restart the process (Listing 6, ln 1–13).
In contrast with C++, the screens in CÉU are decoupled from each other and only the Main Loop touches them: the Worldmap has no references to Story, which has no references to Credits. Changing the screen arrangements is a matter of adjusting the main loop only.
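The C++ side of this flow can be condensed into a toy sketch (hypothetical names and a flattened control flow, not the actual Pingus code): the show_credits flag is the continuation, and the Story step itself decides what to push, replace, or pop next.

```cpp
#include <cassert>
#include <stack>
#include <string>
#include <vector>

// Hypothetical sketch of the continuation-passing style: the continuation
// (show_credits) travels with the Story screen, which decides what follows.
std::vector<std::string> run_story(bool show_credits) {
    std::vector<std::string> visited;
    std::stack<std::string> screens;

    screens.push("Worldmap");          // main loop pushes the Worldmap
    visited.push_back(screens.top());

    screens.push("Story");             // blue dot clicked
    visited.push_back(screens.top());

    if (show_credits) {                // continuation carried by the flag
        screens.pop();
        screens.push("Credits");       // replace(): Story -> Credits
        visited.push_back(screens.top());
    }
    screens.pop();                     // pop(): back to the Worldmap
    visited.push_back(screens.top());
    return visited;
}
```

In the CÉU version the same four transitions are plain sequential statements in one block, so no screen needs to know about the others.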
2) Summary & Pattern Uses in Pingus: The direct style of CÉU has some advantages in comparison to the continuation-passing style of C++:
- It uses structured control flow (i.e., sequences and loops) instead of explicit data structures (e.g., stacks) and continuation variables (e.g., boolean flags).
- The activities in sequence are decoupled and do not hold references to one another.
- A single parent class describes the flow between the activities in a self-contained block of code.
Continuation passing typically controls the overall structure of games, such as screen transitions in menus and level progressions. In Pingus, CÉU adopts the direct-style technique in five cases involving screen transitions: the main menu, the level menu, the level set menu, the world map loop, and the gameplay loop. It also uses the same technique for the loop that switches between pingu actions during gameplay (e.g., walking to falling and back to walking).
C. Dispatching Hierarchies
Entities form a dispatching hierarchy in which a container that receives a stimulus automatically forwards it to its managed children.
1) Case Study: Bomber Action draw and update Dispatching: In C++, the class Bomber in Listing 7 declares a sprite member (ln 3) to handle its animation frames. The Sprite class is part of the game engine and knows how to update and render itself. However, the Bomber still has to respond to update and draw requests from the game and forward them to the sprite (ln 11–13 and 15–18). To understand how the update callback flows from the original environment stimulus to the game down to the sprite, we need to follow a long chain of 7 method dispatches (Figure 7):
1) ScreenManager::display in the main game loop calls ScreenManager::update when starting a new frame.
2) ScreenManager::update calls screen->update for the active game screen (i.e., a GameSession instance, considering the screen in which the Bomber appears).
3) GameSession::update calls world->update.
4) World::update calls objs->update for each object in the world.
5) PinguHolder::update calls pingu->update for each pingu alive.
6) Pingu::update calls action->update for the active pingu action.
7) Bomber::update calls sprite.update. Sprite::update finally updates the animation frame.
Each dispatching step in the chain is indeed necessary considering the typical OO game architecture employed in Pingus:
- With a single assignment to screen, one can easily deactivate the current screen and redirect all dispatches to a new screen (step 2).
- The World class manages and dispatches events to all game entities with a common interface WorldObj, such as the pingus and traps (step 4).
- Since it is common to iterate only over the pingus (vs. all world objects), the container PinguHolder manages all pingus (step 5).
- Since a single pingu can change its actions during lifetime, the action member decouples them with another level of indirection (step 6).
- Sprites are part of the game engine and are reusable everywhere (e.g., UI buttons, world objects, etc.), so it is also convenient to decouple them from actions (step 7).
Like update, the draw callback also flows through a similar dispatching hierarchy until reaching the Sprite class.
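The seven-step chain can be condensed into a toy sketch (hypothetical, heavily simplified classes; the real Pingus classes are polymorphic and hold collections): every layer merely forwards update until the Sprite finally does the work.

```cpp
#include <cassert>

// Hypothetical condensed version of the update dispatching chain:
// each layer only forwards the call; only the Sprite has real behavior.
struct Sprite      { int frame = 0; void update(float) { frame++; } };
struct Bomber      { Sprite sprite;    void update(float d) { sprite.update(d); } };
struct Pingu       { Bomber action;    void update(float d) { action.update(d); } };
struct PinguHolder { Pingu pingu;      void update(float d) { pingu.update(d); } };
struct World       { PinguHolder objs; void update(float d) { objs.update(d); } };
struct GameSession { World world;      void update(float d) { world.update(d); } };
struct ScreenManager {
    GameSession screen;
    void update(float d) { screen.update(d); } // one frame tick
};
```

Even in this stripped-down form, reaching the animation requires threading the call through every intermediate class.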
In CÉU, the Bomber abstraction presented in Listing 9
spawns a Sprite animation instance in its body (ln 3). However, the Sprite abstraction in CÉU can react directly to external update and draw events, bypassing the program hierarchy entirely. Events in CÉU are broadcast to the entire application in lexical order, i.e., an abstraction that appears first in the source code (e.g., ln 3) reacts before another one that appears second (e.g., hidden in ln 4). This rule preserves determinism and also conforms to the program's static hierarchy. While (and only while) the bomber abstraction is alive, the sprite animation remains alive and reacts to the update and draw events (see also Section IV-D on Lifespan Hierarchies). The radical decoupling between the program hierarchy and reactions to events eliminates dispatching chains entirely.
2) Summary & Pattern Uses in Pingus: Passive entities subjected to hierarchies require a dispatching architecture that makes the reasoning about the program harder:
- The full dispatching chain may go through dozens of files.
- The dispatching chain may interleave between classes specific to the game and also classes from the game engine (possibly third-party classes).
In C++, the update subsystem touches 39 files with around 100 lines of code just to forward update methods through the dispatching hierarchy. The drawing subsystem touches 50 files with around 300 lines of code. The implementation in C++ also relies on a dispatching hierarchy for resize callbacks, touching 12 files with around 100 lines of code. Most of this code is eliminated in CÉU, since abstractions can react directly to the environment, not depending on hierarchies spread across multiple files.
Note that dispatching hierarchies cross game engine code, suggesting that most games also rely heavily on this control-flow pattern. In the case of the Pingus engine, we rewrote 9 of its files with a reduction from 515 to 173 locs, mostly due to dispatching code removal (not listed in Figure 2, since it’s engine code).
D. Lifespan Hierarchies
Entities form a lifespan hierarchy in which a terminating container entity automatically destroys its managed children.
1) Case Study: Dynamic Pingus Lifecycle: A pingu is a dynamic entity created periodically and destroyed under certain conditions, such as falling from a high altitude.⁸
In C++, the class PinguHolder in Listing 10 is a container that holds all alive pingus. The method PinguHolder::create_pingu (ln 1–6) is called periodically to create a new Pingu and add it to the pingus collection (ln 3–4). The method PinguHolder::update (ln 8–18) checks the state of all pingus on every frame, removing those with the dead status (ln 12–14). Note that if the programmer disregards the call to remove, a dead pingu would remain in the collection and still update on every frame (ln 11). Since the draw behavior for a dead pingu is innocuous, the death could go unnoticed when testing it, but the program would keep consuming memory and CPU time. This problem is known as the *lapsed listener* [14] and
8 Death of pingu animation: github.com/an000/p/#5
also occurs in languages with garbage collection: a container typically holds a strong reference to a child (sometimes the only reference to it), and the runtime cannot magically detect it as garbage. Hence, entities with dynamic lifespan always require explicit matching add and remove calls associated to a container (ln 4,13).
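The hazard can be illustrated with a toy container (hypothetical names, not the actual Pingus code): when the explicit remove step is skipped, the dead pingu lingers in the collection and keeps ticking on every frame.

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch of the lapsed-listener hazard: the container holds
// the reference, so forgetting remove() keeps dead pingus "alive" forever.
struct Pingu {
    bool dead = false;
    int  updates = 0;
    void update() { updates++; } // still runs even when dead
};

struct PinguHolder {
    std::vector<std::shared_ptr<Pingu>> pingus;

    void add(std::shared_ptr<Pingu> p) { pingus.push_back(std::move(p)); }

    void update(bool forget_remove) {
        for (auto& p : pingus) p->update(); // dead ones still tick
        if (!forget_remove)                 // the explicit bookkeeping step
            pingus.erase(std::remove_if(pingus.begin(), pingus.end(),
                [](const std::shared_ptr<Pingu>& p) { return p->dead; }),
                pingus.end());
    }
};
```

In CÉU, by contrast, terminating the instance's execution block both stops its reactions and removes it from its pool, so this matching add/remove pair disappears.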
CÉU supports pool declarations to hold dynamic abstraction instances. In addition, the spawn statement supports a pool identifier to associate a new instance with a pool. The game screen in Listing 11 spawns a new Pingu on every invocation of Pingu_Spawn (ln 4–7). The spawn statement (ln 6) specifies the pool declared at the top-level block of the game screen (ln 3), attaching the scope of the pingu to that block. Since pools are also subject to lexical scope, the lifespan of all dynamically allocated pingus is constrained to the game screen (Figure 8). Lexical scopes handle memory and event dispatching automatically for static instances (Listing 9, ln 3) and also for pools. However, the lifespan of a dynamic instance does not necessarily have to match the lifespan of its associated pool. In CÉU, when the execution block of a dynamic instance terminates, which characterizes its natural termination, the instance is automatically removed from its pool (Figure 8, instances 1 and 3). Therefore, dynamic instances do not require any extra bookkeeping related to containers or explicit deallocation. To remove a pingu from the game in CÉU, we just need to terminate its execution block according to the appropriate conditions: the escape statement (Listing 11, ln 17) aborts the execution block of the Pingu instance, removing it from its associated pool automatically. Hence, a dynamic instance that terminates naturally leaves no traces in the program.
2) Summary & Pattern Uses in Pingus: Lexical lifespan for static instances and natural termination for dynamic instances provide some advantages in comparison to lifespan hierarchies through containers:
- Lexical scope makes an abstraction lifespan explicit in the source code. All entities in a game have an associated lexical lifespan.
- The memory for static instances is known at compile time.
- Natural termination makes an instance innocuous and, hence, susceptible to immediate reclamation.
- Instances (static or dynamic) never require explicit manipulation of pointers/references.
The implementation in CÉU has over 200 static instantiations spread across all 65 files. For dynamic entities, it defines 23 pools in 10 files, with almost 96 instantiations across 37 files. Pools are used to hold explosion particles, levels and level sets loaded from files, gameplay & worldmap objects, and also UI widgets.
V. RELATED WORK
The control-flow patterns presented in this paper closely relate to the GoF behavioral patterns [10], which are discussed in the context of video games in previous work [3, 14, 17]. The original Pingus in C++ uses variations of the patterns state (Sections IV-A and IV-B), visitor (Sections IV-C and IV-D), and observer (to handle events in general) as implementation techniques to achieve the desired higher-level control-flow patterns described in the paper. CÉU overcomes the need of behavioral patterns with support, at the language level, for structured control-flow mechanisms and event-based communication via broadcast.
A number of domain-specific languages, frameworks, and techniques have been proposed for particular subsystems of the game logic, such as animations [8, 15, 16], game state and screen progression [13, 25], and behavior and AI modeling [1, 11]. In Pingus, the adoption of CÉU is not restricted to a specific subsystem. We employed CÉU at the very core of the game for event dispatching (Section IV-C) and memory management of entities (Section IV-D), eliminating parts of the original game engine. We also implemented all entity animations and behaviors (Section IV-A), and screen transitions (Section IV-B) using the available control mechanisms of CÉU. Furthermore, CÉU is a superset of C targeting reactive systems in general, not only games, and has also been successfully adopted in other domains, such as wireless sensor networks [6, 19] and multimedia systems [22].
Functional reactive programming (FRP) [9] contrasts with structured synchronous reactive programming (SSRP) as a complementary programming style for reactive applications. We believe that FRP is more suitable for data-intensive applications, while SSRP, for control-intensive applications. On the one hand, FRP uses declarative formulas to specify continuous functions over time, such as for physics or data constraints among entities. On the other hand, describing a sequence of steps or control-flow dependencies in FRP requires to encode explicit state machines so that functions can switch behavior depending on the current state. FRP has been successfully used to implement a 3D first person shooting game from scratch, but with some performance

VI. CONCLUSION
We advocate *Structured Synchronous Reactive Programming* as a productive alternative for game logic development. We use the video game *Pingus* as a qualitative case study. We compare the implementation of four game behaviors in C++ and CÉU and discuss how structured reactive mechanisms can eliminate callbacks and let programmers write code in direct style. Ultimately, we rewrote about 1/4 of the whole codebase (9,186 of 39,362 lines of code), which comprises the core of the game logic that is amenable to structured reactive programming.
We categorize the behaviors in four recurrent control-flow patterns: *State machines* are the workhorses of the game logic, appearing in animations, AI behaviors, and input handling. CÉU can encode states implicitly with sequential statements, eliminating shared state variables and improving code encapsulation. *Continuation passing* controls the overall structure of the game, such as screen transitions and level progressions. Similarly to state machines, CÉU describes the flow of the game as sequential statements in self-contained blocks, eliminating explicit data structures and continuation variables. *Dispatching hierarchies* disseminate input events through the game entities and serve as a broadcast communication mechanism. Event broadcasting is at the core of the semantics of CÉU, allowing entities to react directly to inputs and bypass the program hierarchy entirely. *Lifespan hierarchies* manage the memory and visibility of game entities through class fields and containers. In CÉU, all entities have an associated lexical scope, similarly to local variables with automatic memory management.
Overall, we consider that most difficulties in implementing control-flow behavior in game logic are not inherent to this domain, but rather a result of accidental complexity due to the lack of structured abstractions and an appropriate concurrency model to develop event-based applications.
**REFERENCES**
Workspace
Intro
Workspaces were added to Tiki4 and further improved in Tiki5. In Tiki10 a GUI was added for some basic features.
Workspaces are a large project which may or may not impact Tiki as a whole. Previous efforts like AulaWiki built a parallel structure with highly useful features for those who need workspaces. The main issue with it is that the rest of the community lived on without any of the changes. This can be seen in two ways: as proof that optional features should not affect those who do not use them, or as a lack of collaboration on the project, leading to little-used, brittle code.
Many people already use Tiki to collaborate on projects as small teams without workspaces or AulaWiki, be it through implicit trust that others will not play in their sandbox or by creating multiple instances of Tiki. Both of these solutions currently have scalability issues. How can we improve the experience without changing everything?
This roadmap changes the fundamental question asked in development from:
*What can I build to solve the workspace issue?*
to:
*What can we improve in order to get closer to workspaces?* and *By improving $X$, what will those who have no interest in workspaces gain from it?*
Workspaces should not be about building another pile of code, but assembling some of the many existing functionalities. If it means improving every single piece along the way, that's what it should be. While there are deadlines in play, the project does not end with the summer and it would be better to have tangible improvements on existing functionalities, making workspaces closer to reality, than a new unstable feature.
The remainder of this roadmap will attempt to explain the incremental improvements required to Tiki in order to achieve workspaces.
GUI
Development of the Graphical User Interface (GUI) for workspaces started during TikiFestBarcelona3. A quick demo of the progress achieved by then can be seen here:
http://blip.tv/xavi-de-pedro/tutorial-on-the-new-workspace-gui-for-tiki10-august-2012-6323818
Issues to resolve
Workspace users only want to see what is relevant to their workspace
Information overload is always a problem. Even on small collaboration sites, the recent changes can quickly grow to a level where the list is too long for anyone to bother looking at. Category trees become so large that no one can find what they are looking for. Information has to be filtered. It has nothing to do with access rights; it's a matter of personal choice.
**Perspectives** allow the user to select which point of view he desires to have on Tiki. By changing the perspective, the user selects which workspace he is working on right now and which information he finds relevant.
By themselves, the perspectives don't do much. They are implemented by overriding the global preferences, thus creating multiple sets of global preferences. Actual preferences need to be implemented to filter the visible data and multiple components need to be updated to take into account the perspective's preferences. However, perspectives are entirely transparent. Only the preferences need to be considered.
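As an illustration only (hypothetical names, not Tiki's actual API), the override semantics amount to a sparse per-perspective preference set consulted before the global one:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: a perspective is a sparse set of preferences
// layered over the global set; lookup falls back when no override exists.
using Prefs = std::map<std::string, std::string>;

std::string get_pref(const Prefs& global, const Prefs& perspective,
                     const std::string& name) {
    auto it = perspective.find(name);
    return it != perspective.end() ? it->second : global.at(name);
}
```

Switching perspectives then means swapping which override map is consulted, with everything else untouched.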
One such new preference that would be required is **category jail**, similar to the UNIX concept limiting what the user is allowed to view in the filesystem by hiding the higher sections of the filesystem tree from them. Adapted to categories, a category jail would allow to limit the visible portion of the category tree to a certain category.
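As an illustrative sketch (hypothetical structures, not Tiki code), a category jail amounts to re-rooting the visible tree at the jail category, much like chroot re-roots the filesystem:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of a category jail: only the subtree rooted at the
// jail category remains visible to the user.
struct Category {
    std::string name;
    std::vector<Category> children;
};

// Find the jail category; the returned node becomes the visible root.
const Category* find_jail(const Category& root, const std::string& jail) {
    if (root.name == jail) return &root;
    for (const auto& c : root.children)
        if (const Category* hit = find_jail(c, jail)) return hit;
    return nullptr;
}
```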
A crucial component of the UI (that is not handled through preferences) is the modules. Module visibility is typically restricted by groups. Limiting the visibility of modules could either be done by changing the **active groups** or adding a **perspective filter** on the module. Both can be done without significant changes. In the first case, the perspective would contain a preference listing the relevant groups in the perspective and the list would serve as a mask on the user's group list when selecting the modules to display. The second one would add a generic parameter to modules and only display when the perspective is active.
### Some contexts require privacy and to hide the work of a workgroup
Tiki has very fine-grained global & object permissions. In regards to category permissions, Tiki is currently (3.x) limited in terms of flexibility. The lack of fine-grained permissions makes the impact of granting permissions hard to understand. For this reason, a revamp of the **category permission system** is underway. The complete granularity will be available to categories.
This issue resolved, rethinking how large amounts of categories should be handled becomes possible. At this time, one of the major limitations of categories is that anyone with edit rights on a page (or the equivalent for other objects) can change categories on the page. In a world where categories are used for categorization, this makes perfect sense. However, when changing categories grants or revokes permissions, **category security** is required.
Ultimately, who is allowed to add an object to a category or remove one from it is specific to the category itself, not to the object. The permissions that apply to the categories must be sorted out and deployed in the code base.
Because **perspectives** may point to views in which the user has no rights, visibility permissions on the perspectives may also be desired.
Another issue relevant to security is **permission auditing**, which is to be able to view which permissions apply to an object, and from where they were granted. The revamp of the permission system will make this task easier, but interfaces to audit the permissions and compare them may be required.
Administrators want to delegate management of workspaces
In large organizations, creating workspaces will be a day-to-day task. Creating a new workspace has to be quick and easy. Through data channels, workspace templates can be created. Effectively, this would allow configuring a new workspace based on a local configuration. The workspace template could create a set of categories, a perspective, and new groups, and set up all the category permissions required to get the workspace up and running. Data channels are not the only option to create new workspaces; however, creating a dedicated interface for this task is likely to be time consuming and should be postponed to later releases.
In fact, because data channels rely on groups to determine "execution" rights, it may be possible to keep the global administrator entirely out of the loop.
Afterwards, all that would be required is to add the users to the specific groups. Considering the data channel has a parameter to specify the workspace leader, the leader could then be responsible for adding the members. Adding and removing members from groups currently requires administrator privileges (tiki_p_adminusers) at the instance level. In an emergent group context (aka Organic groups), this is unfortunate. To resolve this, finer-grained permissions on groups would be required. Effectively, this would mean treating groups as objects themselves and granting permissions on them.
Workspaces need to have a life of their own
Especially for large workgroups and emergent contexts, it's important to reduce the effort required from the group leader(s) and to let them delegate tasks within the workspace. It should be possible to delegate simple tasks (like approving new members or suspending troublemakers) to moderators. While it would be possible to simply grant member administration rights to these people, in many cases this would be too broad. The introduction of group transitions would allow specifying paths between two groups and granting rights on the transition to a group of moderators.
These transitions would simplify the management of workspaces and define community-wide policies on how to handle workspace management. By reducing the complexity of group management, you allow less technical people in the organization to lead a workspace.
Similarly, the concept of transitions may be adapted to category transitions, which would enable workflow-like patterns for document management. These may be required when coordinating specifications between multiple independent entities where an approval process is needed. Just like normal categories, categories that are part of a transition set would have permissions assigned to them (limiting the ability to edit, for example), as well as permissions on the transitions themselves.
A sample use case would be the approval of engineering documents.
Workspace leaders may need to customize the configurations locally
**Perspectives** allow overriding any preference. Because they are created through profiles, the burden of selecting which preferences are appropriate is left to the administrator. A perspective management UI would narrow down the set of available preferences. Just as the administrator does not really want to handle the day-to-day management of the workspaces, he may want to delegate some of the perspective's configuration to the workspace leaders. Such configuration may include enabling or disabling the forums, changing the theme or any other relevant setting.
To do this efficiently, the **Preferences** feature would be required. Essentially, the type of field and the validation rules for each preference have to be defined at a higher level, to be able to generate dynamic interfaces from a list of preferences without having to duplicate and maintain large amounts of code.
Features
The previous sections introduced the features and how they fit in the global picture. This section details each of the features, their impact on the rest of the project, the dependencies among them and their current state. Ideally, each of these is seen as a separate development effort and provides a worthwhile improvement to Tiki as a whole.
Perspectives
Perspectives were introduced in trunk on July 19th. They are in three parts:
1. Application of the preference overrides in tiki-setup.php
2. Creation of new perspectives from profiles
3. Perspective switching module
These three components provide a usable base to work from. The only change required for workspaces is the introduction of a **tiki_p_view_perspective** permission currently pending the merge of **perms-take2** in trunk. The introduction of the permission would only affect the perspective switch module and the companion tiki-switch_perspective.php file. No changes are required in tiki-setup.php or in the profile. Adding permissions on the perspective would be handled through the standard object permission handling in profiles.
Outside of workspaces, perspectives would be useful for creating micro-sites with a different visual appearance (with Site Identity preferences, not just Theme Control Center), or for an Administrator perspective that would let the site's administrator switch between a normal view of the site and a view containing more administrator controls.
In the future, a better interface to manage perspectives may be desired. Ideally, the perspective itself would define which preferences can be updated within it and a customized interface could be built through **Preferences**. Combined with the idea of delegating configuration, two additional permissions could be introduced: **tiki_p_edit_perspective** and **tiki_p_edit_perspective_full**.
Breakdown
Category jail
The category jail preference can theoretically be used without perspectives; however, its use is limited. It could be used to restrict the visible categories to a limited set of content-related categories and hide the permission management ones. In the context of workspaces, however, the category jail is a pivotal concept and enables the real purpose of perspectives.
Rather than a strict jail, it can be deployed as a suggestion: by default, only the jailed categories would be displayed, but an option could allow showing all categories instead.
The jail could also force newly created objects to belong to the category. Whether this respects the user's permissions has to be evaluated.
Technically, the jail is implemented as a preference containing a category ID. Any object or category listing should restrict the displayed items to that category and its child nodes. To speed up the lookup, the complete list of sub-categories could be stored in preferences as well.
The exact work required to achieve the category jail has yet to be analyzed. The creation of the preference is trivial; however, multiple components will need to be updated.
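As a sketch of the lookup described above, the jail category can be expanded once into its full set of sub-categories and the result cached (e.g. in a preference), so that object listings only need a set-membership test. The flat parent-pointer layout and the function name below are illustrative assumptions, not Tiki's actual schema:

```python
def expand_jail(jail_id, parent_of):
    """Return the jail category plus all of its descendants.

    parent_of maps each category ID to its parent's ID, mirroring a
    flat (category_id, parent_id) table. The resulting closure is what
    would be cached in a preference so listings avoid recursive lookups.
    """
    children = {}
    for cat, parent in parent_of.items():
        children.setdefault(parent, []).append(cat)
    closure = set()
    stack = [jail_id]
    while stack:
        cat = stack.pop()
        closure.add(cat)
        stack.extend(children.get(cat, []))
    return closure

# Example tree: 1 is the root, 2 and 3 are children of 1, 4 is a child of 2.
tree = {2: 1, 3: 1, 4: 2}
assert expand_jail(1, tree) == {1, 2, 3, 4}
assert expand_jail(2, tree) == {2, 4}
```

Storing the computed closure is what makes the per-listing filter cheap: each displayed item only needs one membership check against the cached set.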
Breakdown
- Category jail - 4.0
- Categorize.php/tpl to enable category jail on all objects **done**
- Update modules listing objects, like recent changes **ready to work on** (see Deployment of Category Jail)
Updating all modules is a significant task. It might be a good idea to flag jail-aware modules in the module documentation. Enabling the category jail per module may also be an option.
Reference:
Updating all modules
Perspective filter
Optional: Alternative to Active groups
The perspective filter is a local modification in the module execution path. There are already multiple module arguments that can be used to filter the visible modules, including pages, sections, themes and many others. Adding an additional argument and its handling is a localized change that requires little work.
Instead of filtering on the perspective, it may also be possible to filter on categories and reuse the category jail filter. However, filtering directly on the perspective may make it easier to grasp.
**Breakdown**
- Perspective filter - 4.0 **done**
**Active groups (Cancelled)**
**Category permission system**
The *perms-take2* branch modifies how category permissions are resolved. It provides the full permission granularity on categories and provides more efficient ways of resolving permissions.
These changes to the category permissions impact the permission management user interfaces. The current category permission interface is no longer suitable. The object permission interface is closer to what is needed, but must be adapted to display category permissions correctly when setting permissions on categories.
At this time, to remain efficient when fetching permissions, no inheritance applies to category permissions. Only the permissions applied directly on the category are considered. To accommodate this, the interface must allow bulk modification of the permissions to child categories. An alternative would be to update the database schema to allow efficient lookup of parent categories. This can be done with a parent category table that needs to be updated and kept consistent with the hierarchy or by entirely modifying the category structure to nested sets.
**Breakdown**
- Permission lookup - 4.0 **done**
- Update of the permission interface - 4.0 **in progress** jonnyb, luciash
- Category permission inheritance - 6.0 **pending category structure redesign** Inheritance can wait for 6.0, but copying perms from parent to children, or at least from one category to multiple others, cannot - this is a fundamental requirement of category functionality and has been done in 5.0
**Related wishes**
- WYSIWYCA for all permissions : feature_check in Table: users_permissions
- Mass assignment of permissions, especially for wiki pages
- Better/Easier reporting of item/object permissions which override category and group permissions
- When creating a page, how to inherit permissions from source page?
- Item/Object perms: copy permissions from another object. (especially for wiki pages and categories)
- Permissions: when assigning permissions to item, an option to start with current general permissions
- Add green & yellow permission keys on tiki-listpages.php
**Category security**
One of the current limitations of the category system in Tiki is that only the administrator can modify the category tree. Workspaces need to allow for emergent category creation to suit the needs of the workgroup. Moreover, the visibility of categories is handled globally, which is well suited for small trees but becomes unmanageable on larger trees. Filtering is required.
The solution here is to have category-specific permissions that affect the categories themselves, and not the objects contained in them. Possible permissions are:
- tiki_p_view_category to indicate if the category and its subcategories are visible to the user
- tiki_p_add_object to indicate if the user is allowed to add objects in the category
- tiki_p_remove_object to indicate if the user is allowed to remove objects from the category
- tiki_p_create_category to create new categories
Additionally, an object specific permission may be desired
- tiki_p_modify_object_categories to lock the categories on an object altogether, regardless of the permissions on the categories.
This change mostly impacts the categorize php/tpl component, which allows changing the categories on an object. It will need to be updated so that the visibility rules apply, but also so that the permissions on each category apply. For example, if a user is not allowed to remove a category, adding or changing other categories would not affect the one category he is not allowed to modify.
Permissions like tiki_p_create_category would also need to be scoped to the category that grants it. For example, if a user has the permission on Workspaces > Chemistry > Autumn 09, he would only be allowed to create categories under it. Because there is no inheritance in phase 1, only the category itself has to be considered.
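Under the phase-1 rules above (no inheritance), a check such as tiki_p_create_category reduces to a direct lookup on the one category the permission was granted on. The data layout below is a hypothetical sketch for illustration, not the perms-take2 implementation:

```python
def category_allows(grants, user_groups, permission, category_id):
    """True if any of the user's groups holds the permission directly
    on the category. No parent lookup is done: in phase 1, only the
    category itself is considered."""
    return any((group, permission, category_id) in grants
               for group in user_groups)

# grants mirrors direct (group, permission, category) assignments.
# Suppose tiki_p_create_category was granted on 'Autumn 09' (id 9),
# a child of 'Chemistry' (id 3): it applies on 9 only, not on 3.
grants = {("Leaders", "tiki_p_create_category", 9)}
assert category_allows(grants, ["Leaders"], "tiki_p_create_category", 9)
assert not category_allows(grants, ["Leaders"], "tiki_p_create_category", 3)
```

The absence of any tree walk here is exactly why phase 1 stays efficient, and why inheritance is deferred to the category structure redesign.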
**Breakdown**
- Visibility - 4.0 **done**
- Object management - 4.0 **done**
- Category creation - 4.0 or 5.0 **ready to work on**
**Permission auditing**
As a companion to effective permission management, the ability to quickly view who has which permission on which object increases confidence in the system.
Permission auditing can be seen either as a dashboard for administrators and workspace leaders or as additional information presented on the object's permission page. Within the permission page, displaying permissions from different levels, such as the category level or the global level, would give the permission assigner a baseline from which to make educated decisions.
It must answer the following questions:
- Which of global, category or object permissions apply?
- If category permissions, which categories provide permissions?
- Are permissions more open or more restrictive than the parent level?
- Compared to another object, are the permissions more open or more restrictive?
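The "more open or more restrictive" question above can be answered with plain set comparison between the permission sets of two levels. The permission names below are illustrative:

```python
def compare_perms(child, parent):
    """Classify the `child` permission set relative to `parent`,
    the way an auditing dashboard might report it."""
    if child == parent:
        return "same"
    if child > parent:          # proper superset: grants strictly more
        return "more open"
    if child < parent:          # proper subset: grants strictly less
        return "more restrictive"
    return "mixed"              # neither contains the other

assert compare_perms({"view", "edit"}, {"view"}) == "more open"
assert compare_perms({"view"}, {"view", "edit"}) == "more restrictive"
assert compare_perms({"view"}, {"edit"}) == "mixed"
```

The "mixed" outcome is the interesting one for auditors: it flags object permissions that both add and remove rights relative to the parent level, which is the hardest case to reason about by eye.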
Data channels
Workspaces are a combination of multiple features providing a focused workgroup experience, not a feature in themselves. While it may be possible to create dedicated interfaces to set up the groups, categories, permissions, perspectives, transitions and all sorts of objects, using profiles greatly simplifies the task and provides a good starting point to prepare workspaces and test the improvements to be made.
A certain amount of work is required from the administrator to set up the data channel and profiles initially, but once the template is defined, it can be instantiated easily and repeatedly. Typical workspace templates can be distributed through profiles.tiki.org.
Profiles have mostly been tested and are stable. Data channels, on the other hand, are in their infancy and have a single use case at this time. Some modifications may be required to get them to work correctly with all data types, and some conflicts may occur in the handlers when they are called multiple times.
A convenient interface to call data channels and input data would be useful.
Breakdown
- Create templates as profiles - 4.0 ready to work on
- Data channel UI - 4.0 done PluginDataChannel
Reference:
http://profiles.tiki.org/Data+Channels
Organic / Emergent groups
As part of the workspaces, groups will primarily be created through data channels and follow a standard template. However, for larger workspaces, one may want to create subgroups to grant special permissions to a few people or simply to identify them (participants in a workshop, experts, ...).
Without emergent groups, the attention of the administrator is required to create new groups. Granting rights to create new groups opens the possibility of groups whose names imply a much wider scope than intended. Consider someone creating a group called "Experts" for their workspace. Groups are defined globally, so the "Experts" group would be available globally, even though it was created for a subfield and a team of 10 people in a 2000-person organization. Just as category creation would be limited to the category on which the permission was granted, it could be possible to impose a prefix on the group name based on the category the right was granted on.
Generally, it's important for a group leader to be able to manage users in his workspace. By treating groups as objects, permissions can be assigned on groups for other groups. For example, a lead could be allowed to add and remove members from a group he controls.
Permissions on groups
- tiki_p_add_member to allow someone to add group members
- tiki_p_remove_member to allow someone to remove group members
- tiki_p_join_group to allow someone to join or leave a group on his own
- tiki_p_remove_group to allow someone to destroy the group
Permissions on categories
- tiki_p_create_group to create groups under the category, although the group is not categorized itself
Breakdown
- Member management - 4.0 **done**
- Self join (modify current implementation) - 4.0 **done**
- Creation and removal - 5.0
Questions
- It should not be possible to have group permissions on the Anonymous group
- A link is needed somewhere to assign permissions to a group (this already works at: tiki-objectpermissions.php?objectId=Registered&objectName=Registered&objectType=group&permType=group)
- Do these group permissions apply globally/in a category as well, or just on the specific group?
See also:
Organic Groups
Group transitions
(As reference, see graphic above)
Adding and removing group members to promote them is a tedious and error-prone task. By introducing transitions between groups, which can be triggered based on a permission, group administration can be delegated in a safe way while reducing the complexity of the task.
This feature is useful for self-managing groups within workspaces, but it could also be used to simplify the code in the Tiki registration process and its multiple approval and validation steps. Correcting blocked validation states would simply be a group change for the administrator, and it would allow installations to customize their validation process.
This feature would require the introduction of a new permission to trigger transitions, a table to store the transitions and some user interface modifications to allow triggering of transitions.
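The pieces listed above (a permission to trigger transitions plus a table storing them) can be sketched as follows. The table layout and the tiki_p_trigger_transition name are assumptions for illustration:

```python
# Each row mirrors the proposed transitions table:
# (from_group, to_group, permission required to trigger it).
TRANSITIONS = [
    ("Pending", "Members", "tiki_p_trigger_transition"),
    ("Members", "Suspended", "tiki_p_trigger_transition"),
]

def trigger(memberships, actor_perms, member, src, dst):
    """Move `member` from src to dst if such a transition is defined
    and the acting moderator holds the permission on it. Returns True
    on success, False if the transition is undefined or unauthorized."""
    for from_g, to_g, perm in TRANSITIONS:
        if from_g == src and to_g == dst and perm in actor_perms:
            memberships[member] = dst
            return True
    return False

users = {"alice": "Pending"}
assert trigger(users, {"tiki_p_trigger_transition"}, "alice", "Pending", "Members")
assert users["alice"] == "Members"
# A moderator without the permission cannot suspend anyone.
assert not trigger(users, set(), "alice", "Members", "Suspended")
```

Because only predefined paths can ever be taken, moderators can safely be given this single permission without being able to place users in arbitrary groups.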
Breakdown
- Support for transitions - 4.0 **done**
- Deployment of transitions in the registration process - 4.0/5.0
Category transitions
Category transitions are an extension of group transitions: the same concept applied to categories, allowing groups of users to trigger transitions on objects. Effectively, this lets them change the applicable categories on an object even when category security would not usually allow them to change those categories.
Depending on how it is implemented, it could share the implementation with group transitions. This could evolve into an Approval Workflow.
Breakdown
- Category transitions - 5.0 done
Dynamic preferences
Preferences are not a workspace feature by themselves, but they would enable the workspace leader to customize the perspective's configuration. However, the scope of this feature is much larger and long term. Globally, this feature would allow:
- The simplification of the current administration panel's code.
- The creation of restricted administration panels with a subset of options, to create lesser administrators. For example, a wiki farm administrator may want to allow Tiki administrators to enable only stable features.
- The creation of a configuration search, for those times when you know what you are searching for but don't know where it was arbitrarily placed in the administration panel.
- The creation of a perspective configuration panel.
- To build the preferences in a profile editor.
The downside is that Preferences are a massive task. The type of field and the validation rules for each preference have to be defined. A prior initiative called Magic aimed to do this; however, the complexity of the code and the departure of the effort's lead caused the initiative to be abandoned. A CSV file contains some of the required information, but it is now outdated.
Breakdown
- Define general structure and interface - 5.0 in progress in trunk for 4.0
- Document features / import data - 5.0 mostly done
Category structure redesign
In order to support inheritance in category permissions, which would simplify administration, modifications are required to the category structure. Without such modifications, it is not possible to fetch permissions efficiently. One alternative is to create a table containing the relationship from each category to all of its parents, allowing categories to be joined with it to obtain all parents in a single query. The other is to change the structure entirely and use nested sets instead.
The conversion to nested sets is a much larger effort; however, it would allow for much more flexibility in the future. A similar change would also be required in structures.
**Breakdown**
- Conversion of the category structure - 6.0
---
**Short term notes**
Jotting things down to remember to do
- "Default category assigned to uncategorized objects edited by a user with this default group:" in tiki-admingroups.php should be reworded and transferred to perspectives
- http://demo.tiki.org/trunk/tiki-searchresults.php?highlight=cgcom&boolean=on&search=Go shows a result when logged as admin but not as anonymous. Yet, this file gallery has view perms for Anon.
**Related links**
- Workspace Helper
- Workspace Ideas
**alias**
- Workspaces RoadMap
- Workspace RoadMap
- Workspaces
Load Balancing in Beowulf Clusters
Chandramohan Rangaswamy
Department of Electrical and Computer Engineering
University of Illinois at Chicago
July 07, 2001
1 Abstract
Beowulf[1] clusters are growing in popularity as a means of building cheap and powerful distributed systems. Distributed systems demand a new set of tools for running at optimal efficiency. One such tool is load balancing. It is well known that distributing the load in a distributed system, so that all the computers in the cluster run at more or less the same load, increases the efficiency of the cluster. A heavily skewed cluster has one or two workstations running all the tasks in the system while the other workstations remain idle. Jobs submitted to the heavily loaded workstations take longer to complete. Distributing submitted jobs across all the systems in the cluster reduces the time required to execute jobs and also increases the throughput of the cluster. A load balancing tool thus forms an invaluable part of any distributed system. Developing a load balancing tool for Beowulf clusters is the goal of this project.
2 Beowulf Cluster computing
Cluster computing consists of running a network of computers as one distributed computer. Clusters can be formed using a Network of Workstations (NOW) or a Cluster of Workstations (COW). In a Network of Workstations, the workstations are loosely coupled; a computer lab can simply be used as a cluster. A Cluster of Workstations is more tightly coupled, in that the systems are more or less dedicated to being used as one distributed system. The term Beowulf cluster is in a way a misnomer, since a NOW can also be run as a “cluster”. Many of the Beowulf clusters presently available are pure clusters, in that dedicated machines are used to form the cluster. The main criterion for a cluster to be a Beowulf is that it runs the Linux operating system. Although there are now Windows NT clusters and Solaris clusters, only Linux clusters count as Beowulf.
A typical Beowulf cluster consists of a server connected to a network of workstations. Only the server has access to external networks. No special network topology is used for interconnection; all the systems are connected together through Ethernet. All the components in the Beowulf cluster are Commercial Off The Shelf (COTS) systems. All the systems except the server are bare-bone machines: they have no monitor, keyboard or mouse connected to them, and normal interaction is through the network using rsh. For maintenance purposes, a KVM (Keyboard-Video-Mouse) kit is connected to the host workstation whenever maintenance work is to be done on it. In some severely cash-strapped clusters, not even a video card is installed; in that case all maintenance work is done through the serial port, since a Linux terminal can run over a serial line. Only a network card is connected to the motherboard.
The operating system used is, of course, Linux. Only the bare essentials are installed, usually development packages and applications. Normally the X Window System is not installed unless the cluster is a Network of Workstations, such as a computer lab. Usually, the software on all the nodes of the cluster is exactly the same (unless the system administrator really wants a nightmare). The home directory (/home) is usually located on the server only, which also acts as an NFS server. This makes the nodes' software easily replaceable: if any one workstation goes down or crashes, it can easily be brought back up. A backup of the entire cluster need not be made; only the server requires backup. The other nodes can be restored easily just by installing the OS again, though a backup of the nodes' network settings makes the job even easier.
Beowulf systems are fairly new, having been around for only a few years. Hence, special tools for running these clusters are either unavailable or limited in their abilities. For example, the load balancing tools available for Linux clusters are Condor[2], GNU Queue[3], MOSIX[4] and PANTS[5]. Each of these tools has its advantages and disadvantages. They have drawbacks, however, in that they are either not transparent (transparent meaning working without the user's knowledge) or excessively large. Before taking a look at the workings of each system, an explanation of what is expected of a load balancing system is necessary.
The first and foremost task of a load balancing system is distributing the tasks in a cluster across all the systems, so that all the systems are loaded equally. The load of a machine is defined as the combination of the number of tasks running on the system and the intensity with which those tasks use system resources. Lightweight tasks like a shell are not at all resource intensive. For example, the bash shell has a relatively small memory footprint (about 1.3 MB) and very low CPU usage (0.0). On the other hand, the X Window System server running the new KDE 2.0 window manager has a large memory footprint (about 76 MB), and the GNU C compiler has a footprint varying from 9 MB to 30 MB and is very CPU intensive. A load balancing system should take all these variations in resource usage into account when allocating tasks to systems on the cluster.
A load balancing system should not itself be resource intensive, and should be fast enough that there is no visible performance penalty on account of running the load balancing program.
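The notion of load described above, task count weighted by each task's resource intensity, can be expressed as a simple score. The weights and the two-resource model below are arbitrary assumptions for illustration, not a measured metric:

```python
def node_load(tasks, cpu_weight=1.0, mem_weight=0.01):
    """Score a node by summing each task's CPU share (percent) and
    memory footprint (MB), so a heavyweight task like a compiler
    counts far more than an idle shell. Weights are arbitrary."""
    return sum(cpu_weight * cpu + mem_weight * mem for cpu, mem in tasks)

# (cpu %, memory MB): an idle shell vs. a busy compiler run.
shell = (0.0, 1.3)
gcc = (90.0, 30.0)
assert node_load([shell]) < node_load([shell, gcc])
assert round(node_load([gcc]), 2) == 90.3
```

A real implementation would sample these figures from the OS (e.g. the process table) rather than receive them as arguments, but the ranking it produces is the same.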
3 General Techniques for Load Balancing
The problem of load balancing can be approached from different viewpoints and hence different solutions exist. These solutions fall into two techniques.
- Scheduling
- Kernel space scheduling
- User space scheduling
- Process migration
(Process migration in user space is difficult to the point of being impossible. Hence process migration is performed only in kernel space)
3.1 Scheduling
In the scheduling solution, whenever a new job is submitted to the cluster, it is sent to the node that is optimally suited to take the load. On a cluster of identical and hence equally powerful machines, the optimal node is simply the one with the lowest load. In clusters with dissimilar nodes, one or two machines might be much more powerful than the others, so the machine with the optimal load may not be the machine with the lowest load.
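For a cluster of dissimilar nodes, the choice of "optimal" node described above can be sketched as minimizing load normalized by node capacity, so a powerful but moderately loaded server can still beat a lightly loaded workstation. The capacity numbers are illustrative assumptions:

```python
def pick_node(nodes):
    """nodes: {name: (current_load, relative_capacity)}.
    Choose the node whose normalized load is lowest. On a homogeneous
    cluster (all capacities equal) this degenerates to 'lowest load'."""
    return min(nodes, key=lambda n: nodes[n][0] / nodes[n][1])

cluster = {
    "node1": (2.0, 1.0),   # ordinary workstation, busy
    "node2": (3.0, 4.0),   # 4x faster server, moderately busy
    "node3": (1.5, 1.0),   # ordinary workstation, lighter load
}
# node2 wins: 3.0/4.0 = 0.75 beats 1.5 and 2.0.
assert pick_node(cluster) == "node2"
```

This is the decision a scheduler (user space or kernel space) would make at job-creation time; as the text goes on to note, the choice is never revisited afterwards.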
3.1.1 User space scheduling
User space scheduling is the easiest form of load balancing, both in theory and in implementation. In user space scheduling, jobs are submitted to the cluster through the load balancing program, which spawns the job on the node with the optimal load. Signals delivered to the program might be forwarded through a stub (proxy), or may not be delivered at all. Most user space scheduling programs do not support programs that fork new processes or that use Interprocess Communication (IPC). To support IPC and fork, programs may have to be recompiled or relinked with new libraries that implement these system calls over the network.
3.1.2 Kernel space scheduling
Kernel space scheduling changes the kernel scheduling code so that jobs are scheduled on the node with the optimal load. Scheduling is completely transparent to the user, and programs that make use of IPC, forks, execs and spawns are also supported. Whenever a job is submitted to a node, the kernel of that node checks whether the current node is the optimal one for executing the job; otherwise it finds the appropriate node and submits the task to that node's kernel for execution. Note that programs started on a particular node run to completion on that node only. Changes to the kernel code are required both in the task allocator and in the signal delivery mechanisms: the task allocator is changed to check the load condition whenever it creates a new task, and the signal delivery mechanisms are changed so that signals are delivered over the network.
Kernel scheduling, although much better than user space scheduling, is much more complicated to implement, as changes have to be made in many parts of the kernel. The code for kernel scheduling also has to change whenever the kernel source code changes. The Linux kernel is updated roughly every six months; although no major changes take place in the kernel scheduler, kernel scheduling is tightly bound to the kernel source code, so patches have to be released whenever a new kernel is released.
Another problem with kernel scheduling is that whenever a program is created, the kernel has to determine the cluster's load. This results in delays even when creating programs that may not change the load of the node very much.
The scheduling technique has another inherent flaw: load balancing is not continuous, but happens only when a new program is created.
<table>
<thead>
<tr>
<th>Operation</th>
<th>Memory</th>
<th>CPU Usage (%)</th>
<th>Load Average</th>
</tr>
</thead>
<tbody>
<tr>
<td>Idle (no image)</td>
<td>15MB</td>
<td>0.1</td>
<td>0.04</td>
</tr>
<tr>
<td>New Image (256x256x400ppi)</td>
<td>29MB</td>
<td>0.1-5.3</td>
<td>0.1</td>
</tr>
<tr>
<td>Script-Fu: Line Nova</td>
<td>39MB-109MB</td>
<td>0.1-82.9</td>
<td>1.09-1.72</td>
</tr>
</tbody>
</table>
Table 1: Gimp Run
Once a program is created, changes in its load cannot be accommodated, so the nodes can become unbalanced even while the load balancing program is running. Allocation of a task depends only on the load at the moment the task is created, but the load on the node may increase after the task has been allocated. A program waiting for user input might use very little memory and CPU time; once input arrives, its memory and CPU usage might change drastically. For example, the data in Table 1 was recorded while running the GIMP program.
As can be seen, the resources utilized by the program vary over time. If another resource-intensive process is created before the Script-Fu script runs, both processes will contend for resources, raising the load on the system and reducing throughput; yet during this time there is no way to transfer one of the processes to another system. The Script-Fu script took about 4 minutes to complete when run without any other CPU-intensive programs, but about 7.13 minutes when run alongside kfract, the KDE fractal generator.
These deficiencies are overcome by the Process Migration technique.
### 3.2 Process Migration
Processes that have already been created and are running on a particular node can be moved to another node, where they continue to run. This is known as process migration. Migrating a process, while tedious and cumbersome, provides finer control over the load in a cluster. This is particularly important in clusters with a single point of entry, such as Beowulf clusters: with a single entry point, all processes tend to be created on a single machine unless users watch the load themselves and start their programs on another node using rsh. Moving processes to a different machine reduces the load on the entry machine transparently.
Load balancing programs that use process migration can continuously monitor the load on the cluster and react to changes. Although process migration suits load balancing better than the other techniques, few programs use it (MOSIX[4] being a notable exception). The reasons include the tedious nature of coding process migration and the unavailability of kernel sources for Unix systems other than FreeBSD and Linux.
A more detailed explanation of process migration is provided in the Proposed Solution section.
4 Currently Available Systems
4.1 Condor
The Condor project[2] started at the University of Wisconsin, Madison in 1988. It grew out of the Remote-Unix project developed under the direction of Professors D. Dewitt, R. Finkel, and M. Solomon, and is directed by Professor M. Livny. (Source: http://www.cs.wisc.edu/condor/background.html)
The Condor system is currently available for both Unix and NT. Condor is essentially a batch processing system: submitted jobs are placed in a queue and then run on different machines according to the queuing mechanism, scheduling policy, priority scheme, and resource classifications, after which users are informed of the results of execution. Machines in Condor's pool can be dedicated compute servers or non-dedicated machines; that is, machines sitting idle in a computer lab can easily be included in the pool.
Condor offers many features, an important one being checkpointing and migration. Checkpointing is the collection of important execution state of a process at regular intervals or at selected times, so that the process can be resumed from the checkpoint on another machine. This helps in case of failures such as crashes, removal of a machine from the pool, or a machine being shut down. Since run-time state is captured, checkpointed processes can also be moved from one machine to another. In Condor, checkpointing and migration require relinking of programs (some other Condor features also require relinking with Condor libraries). Checkpointing and migration are also useful for enforcing priorities: Condor gives priority to machine owners rather than to the programs it runs.
Condor also supports remote system calls over the network. Another feature of Condor is the **ClassAds** mechanism, in which systems advertise their resources and users advertise their requirements. The Condor system matches each user with a system whose advertised resources fulfill the user's requirements. For example, if a user requires 96 MB of RAM to run a task, Condor might assign that task to a system announcing more than 96 MB of free memory.
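The matchmaking idea can be illustrated with a deliberately simplified sketch. Real ClassAds are full expression trees evaluated symmetrically between job and machine; the flat requirement dictionaries and function names below are assumptions made purely for illustration.

```python
# Minimal sketch of ClassAd-style matchmaking (hypothetical simplification;
# real Condor ClassAds are expression trees, not flat numeric dicts).

def matches(requirements, advertisement):
    """True if every numeric requirement is met by the machine's ad."""
    return all(advertisement.get(key, 0) >= needed
               for key, needed in requirements.items())

def match_job(requirements, machines):
    """Return the names of machines whose ads satisfy the job."""
    return [name for name, ad in machines.items()
            if matches(requirements, ad)]

if __name__ == "__main__":
    pool = {
        "alpha": {"free_mem_mb": 128, "mips": 400},
        "beta":  {"free_mem_mb": 64,  "mips": 600},
    }
    # A job asking for 96 MB of free memory matches only "alpha".
    print(match_job({"free_mem_mb": 96}, pool))
```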
Programs that are to be checkpointed and migrated have to be relinked with Condor libraries and have severe limitations imposed on them:
1. Processes cannot fork, exec or spawn.
2. Interprocess communication is not possible.
3. Processes can open sockets, but the communication has to be brief.
4. Some of the signals, alarms, timers and sleeping are not allowed.
5. Multiple kernel-threads are not supported. User level threads can be used.
6. Processes cannot use mmap or munmap. (Memory mapped files)
7. File locks are not retained between checkpoints.
8. Files can only be opened read-only.
9. Jobs must be statically linked; dynamically linked jobs cannot be checkpointed.
The typical mini-HOWTO for running a job in Condor is:
1. Choose a run-time environment (a **Universe**, e.g., Vanilla or Standard)
2. Create and submit a description file for the program.
3. Submit the job using **condor_submit**
The run-time environment of a Condor system is called a **Universe**. There are five universes to choose from: Standard, Vanilla, PVM, MPI, and Globus; the typical ones are Vanilla and Standard. The Vanilla universe is used to run programs that cannot be relinked. Such programs cannot be checkpointed and hence cannot be migrated, so they must run to completion on the system where they started; otherwise, the job has to be restarted from the beginning on another machine. If the machine running the program is required by its owner, Condor either suspends the task or restarts the job from the beginning on another machine. This also requires that all data be accessible from different machines through NFS or AFS.
The Standard universe is used for programs that can be relinked. These programs are checkpointed and hence carry the limitations mentioned above. Programs are relinked using the `condor_compile` program.
The description file contains commands and keywords directing the queuing of jobs; a ClassAd job is created from it. The job itself is submitted through the `condor_submit` program.
The program can only be installed by root. It starts a variety of daemon processes, all running either as root or as user `condor`. The main daemon, `condor_master`, makes sure that all of Condor's daemons are running on all machines in the pool: it restarts any daemon that crashes and removes a machine from the available pool if that machine is down. One machine must be designated the central manager, and Condor has to be started on it first.
The Condor system provides a great many options, which makes it very versatile. But it is a user space program and hence limited in its reach. Most computation-hungry programs are not available in source or object form, and even when they are, a system administrator has to spend a great deal of time maintaining the system, relinking all the programs against the Condor libraries. Relinked programs must then meet the requirements for checkpointing, and the list of checkpointing conditions effectively excludes many programs, since any non-trivial program will have at least some file reads and writes.
Programs that are not relinked (running in the Vanilla universe), on the other hand, cannot be migrated, which means restarting an application from scratch on another machine. While this is tolerable for programs with short run times, doing it for programs that run for days (physics codes, VHDL compilers, simulators) is not feasible and wastes resources.
4.2 GNU Queue
GNU Queue is a user space network load-balancing scheduler. It acts as a proxy for the programs it runs on other machines; as the developer's website puts it, the effect is similar to telnetting into a machine and running the application there, except that the machine is selected by the Queue system. The queue program runs a daemon on every machine in the cluster. Whenever an application is submitted, queue queries all the cluster's queued daemons for their load status and, based on this data, runs the process on the machine with the least load. Queue itself runs as a proxy for the program executing on the remote host: any signals destined for the program are delivered to the proxy, which forwards them.
The Queue system is much simpler than Condor and has a smaller footprint in both CPU usage and memory. Installation and operation are also simpler: installation can be done either by a user or by the system administrator (user installation being less secure), and follows the standard GNU procedure (configure; make; make install). The source code is available under the GNU license and hence can be modified by users.
To execute a program with queue, the queued daemon must be installed on all hosts listed in the host access control list. After this installation,
```
queue -h hostname --program
```
runs the program on hostname. If the hostname is omitted, queue runs the program on the machine with the least load, or on the best machine according to Queue's profile file. The profile file lets one set various options controlling queue's operation, such as the minimum free space required or the acceptable load average. Queue supports MPI, and PVM support is forthcoming.
### 4.3 MOSIX
MOSIX is developed by Prof. Amnon Barak of the Institute of Computer Science at The Hebrew University in Jerusalem, Israel. He and his team have built MOSIX for many cluster OSes, including Linux and NT; in fact, the Linux implementation is the seventh implementation of the MOSIX system. MOSIX is the most complete implementation of a load balancing system for a Beowulf cluster: it turns the cluster into almost a single system with multiple processors (like an SMP system).
MOSIX is a kernel-level system with preemptive process migration: processes are migrated to less loaded nodes as soon as the load on the current node exceeds a threshold value. It also implements a file system of its own for its operation, a cluster-wide shared filesystem called MFS.
Installation of MOSIX is a task for the system administrator alone, as it involves kernel-level changes. The MOSIX distribution site provides kernel patches, which must be applied to the kernel source before compiling a new kernel; this kernel has to be installed on every host in the cluster. An installation tool has been developed to automate some of these tasks.
MOSIX can be configured for operating in a Multi-user, time sharing environment in four different configurations.
**Single-pool**: All the servers and workstations belong to a single cluster and are shared alike.

**Server-pool**: Only the servers are shared; workstations work as single units.

**Adaptive-pool**: All servers are shared; workstations join or leave the cluster independently.

**Half-duplex pool**: Servers are shared; processes from workstations can be run on the server pool cluster.
These configurations need not be static; they can be changed dynamically. A queuing system is also implemented to run processes in batches.
Because the MOSIX system works at the kernel level, it is completely transparent to the user. It does, however, have some disadvantages. It does not implement checkpointing, and it uses the `copy_from_user` and `copy_to_user` primitives over the network to communicate between user space and kernel space: if a kernel on one machine needs to communicate with a program running on another host (for example, to deliver a signal), it has to move data between user space and kernel space across the network. This happens often in system calls, and doing it over a network hurts performance. Even though MOSIX pre-fetches the data it needs, the scheme still generates a lot of network traffic and, worse, reduces performance.
Another problem with MOSIX is that the kernel changes frequently, so the system has to be updated for each kernel version. Installing a new kernel therefore means downloading the kernel source, downloading the MOSIX package, applying the patch, compiling the kernel, and installing it. If the system were restructured as a kernel module, this problem might be solved.
4.4 PANTS
PANTS is a process migration tool; the name stands for PANTS Application Node Transparency System. PANTS uses checkpointing to migrate processes from one host to another. The initial version was developed by Jeffrey Moyer at Worcester Polytechnic Institute (WPI). The first version used preemptive process migration, but since that implementation was architecture dependent, it was removed in PANTS v2.0[6]. The first version relied on EPCKPT[7], a patch to the Linux kernel that adds process checkpointing and restart capability; PANTS used it to stop a running program, package it, transport it to an available node in the cluster, and restart it remotely.
A multicast-based system is used to communicate with all the nodes. A node that can accept load may initiate a transfer of programs; this is known as a receiver-initiated transfer. Similarly, a node that needs to shed load can initiate a transfer; this is known as a sender-initiated transfer. PANTS uses leader-based multicasting with sender-initiated transfer.
PANTS v2.0 is, in a way, a complete rewrite of PANTS, developed by Kevin Dickson, Chuck Homic, and S. Bryan Villamin. Version 2 does not implement process migration: load sharing happens only when a process is being created. For this, the execve system call is patched and replaced; the replacement talks to the PANTSD daemon when a process is created. This allows load sharing but no process migration. However, since the patch is simple enough to apply on all architectures, the PANTS program is now independent of architecture. PANTS does not support IPC as of now.
5 Proposed Solution
A load balancing system based on process migration is proposed here, since process migration allows more rigorous balancing. The system is to be developed as a kernel module, which makes it far less dependent on the kernel source and more easily accepted for inclusion in distributed kernel sources.
Load balancing systems use many different types of communication networks for communicating with other nodes. Some of the types are
• Broadcast
• Multicast
• Neighborcast
5.0.1 Broadcast
Broadcast-type systems broadcast all their requirements to the whole network. This guarantees that there is no central point of failure, but it increases network traffic heavily. Once a sender (the node that needs to transfer a process away) sends out a transfer request, it goes to every system on the network, and all the receivers (the nodes that could receive the transferred process) respond with a positive or negative acknowledgment. Without any optimization, this system generates \((n - 1)^2\) requests and acknowledgments for one transfer. Beyond that, a good arbitration policy that generates no further traffic is needed to select which node finally receives the process.
5.0.2 Multicast
Multicast systems have two multicast groups: a sender group and a receiver group. Both groups are dynamic: a heavily loaded node joins the sender group, while lightly loaded nodes join the receiver group. This reduces the traffic generated by a transfer request, since a request is sent only to the receiver group and only receiver-group nodes acknowledge it. A good arbitration policy is still needed, but any arbitration policy in this setup is guaranteed to generate additional traffic. Hence a setup with a multicast leader is normally used, in which all requests and acknowledgments are routed through a separate node. This introduces a central point of failure but reduces the bandwidth requirement.
5.0.3 Neighborcast
"Neighborcast" means transmitting only to one's neighbors: a sender has only two potential receivers, its left and right neighbors (edge nodes have only one). Whenever a process is to be transferred, the node selects either its left or right neighbor depending on their loads. The neighborcast system produces a less tightly balanced cluster, since it takes longer for all the nodes to reach equilibrium. But it reduces network traffic, needs no arbitration policy, and, most of all, has no central point of failure. The solution is also inherently scalable: each node cares only about its neighbors, yet since every neighbor is covered, the system still converges to a balanced cluster. The traffic generated by broadcast and multicast systems grows rapidly as the number of nodes increases, whereas here it grows only linearly.
The neighborcast system is selected for implementing this project.
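The traffic difference can be made concrete with a back-of-the-envelope count of messages per transfer, taking the unoptimized \((n - 1)^2\) figure quoted above for broadcast at face value; the multicast count here is an assumption (one request to the receiver group plus one acknowledgment per member).

```python
# Rough per-transfer message counts for the three schemes discussed above.

def broadcast_messages(n):
    # Unoptimized figure from the text: (n - 1)^2 requests
    # and acknowledgments for one transfer in an n-node cluster.
    return (n - 1) ** 2

def multicast_messages(receivers):
    # Assumption: one request to the receiver group plus one
    # acknowledgment from each of its members.
    return 2 * receivers

def neighborcast_messages():
    # One request plus one acknowledgment to a single chosen
    # neighbor, independent of cluster size.
    return 2
```

Because the neighborcast count is constant per transfer, total traffic grows only with the number of transfers, i.e., linearly in cluster activity rather than with the square of the cluster size.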
All load balancing systems should implement a transfer policy, a selection policy, an information policy, and a location policy.
5.1 Location Policy
The location is the node to which the selected process goes: either the left or the right neighbor. Transfer requests could be propagated by the nodes, i.e., if a node is unable to accept a request, it passes the request on to its own neighbor, and so on. While such propagation could reach the least loaded node, it is not used here, as it can lead to more hops and also to race conditions: if two nodes send out transfer requests and both land at the same node, which request should be chosen, and what happens to the request that was denied? To avoid such problems, a neighboring node may not reject a request in most cases.
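The location decision itself is tiny: pick the less-loaded of at most two neighbors. A sketch, assuming nodes are arranged on a line (so edge nodes have a single neighbor, as described above); the function name and list representation are illustrative.

```python
# Sketch of the location policy: a sender considers only its left and
# right neighbors and picks whichever reports the lower load.

def choose_neighbor(loads, me):
    """`loads` is a list of load averages indexed by node position on a
    line of nodes; `me` is the sender's index. Returns the index of the
    less-loaded neighbor (edge nodes have only one candidate)."""
    candidates = [i for i in (me - 1, me + 1) if 0 <= i < len(loads)]
    return min(candidates, key=lambda i: loads[i])
```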
5.2 Transfer Policy
The transfer policy determines how a transfer begins. The most logical policy here is sender-initiated transfer; the other possibilities are receiver-initiated and joint-initiated. When the load on the whole cluster is low or heavily skewed, receivers can initiate transfers; this has the advantage that the already heavily loaded sender spends fewer CPU cycles negotiating a transfer. In the joint-initiated policy, both the sender and the receiver can initiate a transfer. In this system, sender-initiated transfer is used.
5.3 Information Policy
The information policy becomes the most important policy when the neighborcast system is used. A transfer is initiated only when the load on a node exceeds a threshold value. This threshold depends on the overall cluster load and is a function of it, so the way the cluster load is collected becomes important. The cluster load is collected as a moving average of the loads of the individual nodes, a process that starts with the first machine to enter the cluster. What information should be collected? Candidates include disk utilization, CPU time, and so on.
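The moving average and threshold test above can be sketched as follows. The exponential weighting and the fixed-multiple threshold are assumptions; the text leaves the exact function of the overall load open.

```python
# Sketch of the information policy: track cluster load as a moving
# average and trigger a transfer when local load exceeds a threshold
# derived from it.

def update_cluster_load(avg, sample, alpha=0.2):
    """Exponentially weighted moving average of node load samples.

    `alpha` (assumed value) controls how fast new samples dominate."""
    return (1 - alpha) * avg + alpha * sample

def should_transfer(local_load, cluster_avg, factor=1.5):
    """Initiate a transfer when local load exceeds the threshold, here
    taken (as an assumption) to be a fixed multiple of the cluster
    average."""
    return local_load > factor * cluster_avg
```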
5.4 Selection Policy
The selection policy determines which process is transferred to the next node. It should not select a process that may run for only a few seconds, and transfers of processes with large footprints should be avoided as much as possible.
Processes use not only CPU resources but also other system resources such as files and IPC, which makes the choice of which process to transfer important. As far as possible, a process that uses few system resources should be selected. Of course, a process may start using files or other resources after it is moved; the implementation should then provide a way to access those resources over the network.
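One hypothetical way to encode these preferences is a simple score: favor long-running processes, penalize large footprints and open resources. The weights and function names below are illustrative assumptions, not part of the proposed system.

```python
# Hypothetical scoring of migration candidates: prefer processes likely
# to keep running, with a small memory footprint and few open resources
# (files, IPC handles). Weights are illustrative.

def migration_score(cpu_seconds, rss_mb, open_resources):
    """Higher score = better candidate for migration. CPU time already
    consumed is used as a common heuristic that the process will keep
    running (so it is worth the cost of moving)."""
    return cpu_seconds - 0.1 * rss_mb - 5.0 * open_resources

def select_process(procs):
    """procs: list of (pid, cpu_seconds, rss_mb, open_resources).
    Returns the pid of the best migration candidate."""
    pid, *_ = max(procs, key=lambda p: migration_score(*p[1:]))
    return pid
```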
To be moved, processes must be checkpointed. Fortunately, a checkpointing solution for Linux is already available from Eduardo Pinheiro of Rutgers University. Although its availability as a module remains to be investigated, the checkpointing library can be used for moving processes.
6 Summary
There are many approaches to the load balancing problem, and the proposed implementation is just one of them. Although it may not yield very quick balancing to start with, it will lead to a stable and fault-tolerant load balancing system.
References
Status of This Memo
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2006).
Abstract
This document describes a format for a Lightweight Directory Access Protocol (LDAP) Uniform Resource Locator (URL). An LDAP URL describes an LDAP search operation that is used to retrieve information from an LDAP directory, or, in the context of an LDAP referral or reference, an LDAP URL describes a service where an LDAP operation may be progressed.
Table of Contents
1. Introduction .................................................... 2
2. URL Definition .................................................. 2
2.1. Percent-Encoding ........................................... 4
3. Defaults for Fields of the LDAP URL ............................ 5
4. Examples ........................................................ 6
5. Security Considerations ........................................ 8
6. Normative References .......................................... 9
7. Informative References ....................................... 10
8. Acknowledgements ............................................. 10
Appendix A: Changes Since RFC 2255 .............................. 11
A.1. Technical Changes ....................................... 11
A.2. Editorial Changes ...................................... 11
1. Introduction
LDAP is the Lightweight Directory Access Protocol [RFC4510]. This document specifies the LDAP URL format for version 3 of LDAP and clarifies how LDAP URLs are resolved. This document also defines an extension mechanism for LDAP URLs. This mechanism may be used to provide access to new LDAP extensions.
Note that not all the parameters of the LDAP search operation described in [RFC4511] can be expressed using the format defined in this document. Note also that URLs may be used to represent reference knowledge, including that for non-search operations.
This document is an integral part of the LDAP technical specification [RFC4510], which obsoletes the previously defined LDAP technical specification, RFC 3377, in its entirety.
This document replaces RFC 2255. See Appendix A for a list of changes relative to RFC 2255.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119].
2. URL Definition
An LDAP URL begins with the protocol prefix "ldap" and is defined by the following grammar, following the ABNF notation defined in [RFC4234].
```
ldapurl     = scheme COLON SLASH SLASH [host [COLON port]]
              [SLASH dn [QUESTION [attributes]
              [QUESTION [scope] [QUESTION [filter]
              [QUESTION extensions]]]]]
                          ; <host> and <port> are defined
                          ;   in Sections 3.2.2 and 3.2.3
                          ;   of [RFC3986].
                          ; <filter> is from Section 3 of
                          ;   [RFC4515], subject to the
                          ;   provisions of the
                          ;   "Percent-Encoding" section
                          ;   below.

scheme      = "ldap"

dn          = distinguishedName ; From Section 3 of [RFC4514],
                          ; subject to the provisions of
                          ; the "Percent-Encoding"
                          ; section below.

attributes  = attrdesc *(COMMA attrdesc)
attrdesc    = selector *(COMMA selector)
selector    = attributeSelector ; From Section 4.5.1 of
                          ; [RFC4511], subject to the
                          ; provisions of the
                          ; "Percent-Encoding" section
                          ; below.

scope       = "base" / "one" / "sub"

extensions  = extension *(COMMA extension)
extension   = [EXCLAMATION] extype [EQUALS exvalue]
extype      = oid ; From section 1.4 of [RFC4512].
exvalue     = LDAPString ; From section 4.1.2 of
                          ; [RFC4511], subject to the
                          ; provisions of the
                          ; "Percent-Encoding" section
                          ; below.

EXCLAMATION = %x21 ; exclamation mark ("!")
SLASH       = %x2F ; forward slash ("/")
COLON       = %x3A ; colon (":")
QUESTION    = %x3F ; question mark ("?")
```
The "ldap" prefix indicates an entry or entries accessible from the
LDAP server running on the given hostname at the given portnumber.
Note that the <host> may contain literal IPv6 addresses as specified
in Section 3.2.2 of [RFC3986].
The <dn> is an LDAP Distinguished Name using the string format
described in [RFC4514]. It identifies the base object of the LDAP
search or the target of a non-search operation.
The <attributes> construct is used to indicate which attributes
should be returned from the entry or entries.
The <scope> construct is used to specify the scope of the search to
perform in the given LDAP server. The allowable scopes are "base"
for a base object search, "one" for a one-level search, or "sub" for
a subtree search.
The `<filter>` is used to specify the search filter to apply to entries within the specified scope during the search. It has the format specified in [RFC4515].
The `<extensions>` construct provides the LDAP URL with an extensibility mechanism, allowing the capabilities of the URL to be extended in the future. Extensions are a simple comma-separated list of type=value pairs, where the =value portion MAY be omitted for options not requiring it. Each type=value pair is a separate extension. These LDAP URL extensions are not necessarily related to any of the LDAP extension mechanisms. Extensions may be supported or unsupported by the client resolving the URL. An extension prefixed with a ‘!’ character (ASCII 0x21) is critical. An extension not prefixed with a ‘!’ character is non-critical.
If an LDAP URL extension is implemented (that is, if the implementation understands it and is able to use it), the implementation MUST make use of it. If an extension is not implemented and is marked critical, the implementation MUST NOT process the URL. If an extension is not implemented and is not marked critical, the implementation MUST ignore the extension.
The extension type (`<extype>`) MAY be specified using the numeric OID `<numericoid>` form (e.g., 1.2.3.4) or the descriptor `<descr>` form (e.g., myLDAPURLExtension). Use of the `<descr>` form SHOULD be restricted to registered object identifier descriptive names. See [RFC4520] for registration details and usage guidelines for descriptive names.
No LDAP URL extensions are defined in this document. Other documents or a future version of this document MAY define one or more extensions.
2.1. Percent-Encoding
A generated LDAP URL MUST consist only of the restricted set of characters included in one of the following three productions defined in [RFC3986]:
- `<reserved>`
- `<unreserved>`
- `<pct-encoded>`
Implementations SHOULD accept other valid UTF-8 strings [RFC3629] as input. An octet MUST be encoded using the percent-encoding mechanism described in section 2.1 of [RFC3986] in any of these situations:
- The octet is not in the reserved set defined in section 2.2 of [RFC3986] or in the unreserved set defined in section 2.3 of [RFC3986].
- It is the single Reserved character '?' and occurs inside a <dn>, <filter>, or other element of an LDAP URL.
- It is a comma character ',' that occurs inside an <exvalue>.
Note that before the percent-encoding mechanism is applied, the extensions component of the LDAP URL may contain one or more null (zero) bytes. No other component may.
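As a concrete illustration of the encoding rule above, Python's standard `urllib.parse.quote` performs the percent-encoding; which characters are left unencoded ('=' and ',' for a DN in this sketch) is a per-component choice made here for illustration, not mandated by the specification:

```python
from urllib.parse import quote

def encode_ldap_component(value: str, safe: str = "") -> str:
    # Percent-encode a string for use inside one LDAP URL component.
    # '?' separates URL fields, so it is always encoded; the characters
    # in `safe` (e.g. '=' and ',' in a DN) are left bare.
    return quote(value, safe=safe)

# The DN from the "o=Question?,c=US" example in Section 4:
print(encode_ldap_component("o=Question?,c=US", safe="=,"))
# -> o=Question%3F,c=US  (hex case is insignificant; Section 4 shows %3f)
```

Note that `quote` emits uppercase hex digits; both cases decode identically.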
3. Defaults for Fields of the LDAP URL
Some fields of the LDAP URL are optional, as described above. In the absence of any other specification, the following general defaults SHOULD be used when a field is absent. Note that other documents MAY specify different defaulting rules; for example, section 4.1.10 of [RFC4511] specifies a different rule for determining the correct DN to use when it is absent in an LDAP URL that is returned as a referral.
<host>
If no <host> is given, the client must have some a priori knowledge of an appropriate LDAP server to contact.
<port>
The default LDAP port is TCP port 389.
<dn>
If no <dn> is given, the default is the zero-length DN, "".
<attributes>
If the <attributes> part is omitted, all user attributes of the entry or entries should be requested (e.g., by setting the attributes field AttributeDescriptionList in the LDAP search request to a NULL list, or by using the special <alluserattrs> selector "*").
<scope>
If <scope> is omitted, a <scope> of "base" is assumed.
<filter>
If <filter> is omitted, a filter of "(objectClass=*)" is assumed.
<extensions>
If <extensions> is omitted, no extensions are assumed.
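The defaulting rules above can be sketched as a simple merge; the field names and the `apply_defaults` helper below are illustrative, not taken from any real LDAP library:

```python
# Section 3 defaults for absent LDAP URL fields (illustrative field names).
DEFAULTS = {
    "port": 389,                  # default LDAP TCP port
    "dn": "",                     # zero-length DN
    "attributes": ["*"],          # all user attributes
    "scope": "base",
    "filter": "(objectClass=*)",
    "extensions": [],
}

def apply_defaults(fields: dict) -> dict:
    # Start from the defaults and overlay whichever fields were present.
    merged = dict(DEFAULTS)
    merged.update({k: v for k, v in fields.items() if v})
    return merged

# A URL giving only a host gets every default:
resolved = apply_defaults({"host": "ldap.example.net"})
print(resolved["scope"], resolved["filter"])  # base (objectClass=*)
```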
4. Examples
The following are some example LDAP URLs that use the format defined above. The first example is an LDAP URL referring to the University of Michigan entry, available from an LDAP server of the client's choosing:
ldap:///o=University%20of%20Michigan,c=US
The next example is an LDAP URL referring to the University of Michigan entry in a particular ldap server:
ldap://ldap1.example.net/o=University%20of%20Michigan,c=US
Both of these URLs correspond to a base object search of the "o=University of Michigan,c=US" entry using a filter of "(objectclass=*)", requesting all attributes.
The next example is an LDAP URL referring to only the postalAddress attribute of the University of Michigan entry:
ldap://ldap1.example.net/o=University%20of%20Michigan,c=US?postalAddress
The corresponding LDAP search operation is the same as in the previous example, except that only the postalAddress attribute is requested.
The next example is an LDAP URL referring to the set of entries found by querying the given LDAP server on port 6666 and doing a subtree search of the University of Michigan for any entry with a common name of "Babs Jensen", retrieving all attributes:
ldap://ldap1.example.net:6666/o=University%20of%20Michigan,c=US??sub?(cn=Babs%20Jensen)
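A rough decomposition of this example with Python's standard library shows the '?'-separated fields; a real parser must additionally handle extensions and criticality, so this is only a sketch:

```python
from urllib.parse import urlsplit, unquote

# Split an example LDAP URL into host/port, DN, and '?'-separated fields.
url = ("ldap://ldap1.example.net:6666/"
       "o=University%20of%20Michigan,c=US??sub?(cn=Babs%20Jensen)")
parts = urlsplit(url)

dn = unquote(parts.path.lstrip("/"))
# Everything after the first '?' holds attributes, scope, filter, extensions:
fields = parts.query.split("?")

print(parts.hostname, parts.port)  # ldap1.example.net 6666
print(dn)                          # o=University of Michigan,c=US
print(fields[1])                   # sub  (fields[0] is the empty attributes list)
```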
The next example is an LDAP URL referring to all children of the c=GB entry:
LDAP://ldap1.example.com/c=GB?objectClass?ONE
The objectClass attribute is requested to be returned along with the entries, and the default filter of "(objectclass=*)" is used.
The next example is an LDAP URL to retrieve the mail attribute for the LDAP entry named "o=Question?,c=US", illustrating the use of the percent-encoding mechanism on the reserved character '?':
ldap://ldap2.example.com/o=Question%3f,c=US?mail
The next example (which is broken into two lines for readability) illustrates the interaction between the quoting mechanism of the LDAP string representation of filters and the URL-quoting mechanism.
ldap://ldap3.example.com/o=Babsco,c=US
???(four-octet=%5c00%5c00%5c00%5c04)
The filter in this example uses the LDAP escaping mechanism of \ to encode three zero or null bytes in the value. In LDAP, the filter would be written as (four-octet=\00\00\00\04). Because the \ character must be escaped in a URL, the \s are percent-encoded as %5c (or %5C) in the URL encoding.
The next example illustrates the interaction between the quoting mechanism of the LDAP string representation of DNs and the URL-quoting mechanism.
ldap://ldap.example.com/o=An%20Example%5C2C%20Inc.,c=US
The DN encoded in the above URL is:
o=An Example\2C Inc.,c=US
That is, the left-most RDN value is:
An Example, Inc.
The following three URLs are equivalent, assuming that the defaulting rules specified in Section 3 of this document are used:
ldap://ldap.example.net
ldap://ldap.example.net/
ldap://ldap.example.net/?
These three URLs point to the root DSE on the ldap.example.net server.
The final two examples show use of a hypothetical, experimental bind name extension (the value associated with the extension is an LDAP DN).
ldap:///??sub??e-bindname=cn=Manager%2cdc=example%2cdc=com
ldap:///??sub??!e-bindname=cn=Manager%2cdc=example%2cdc=com
The two URLs are the same, except that the second one marks the e-bindname extension as critical. Notice the use of the percent-encoding mechanism to encode the commas within the distinguished name value in the e-bindname extension.
5. Security Considerations
The general URL security considerations discussed in [RFC3986] are relevant for LDAP URLs.
The use of security mechanisms when processing LDAP URLs requires particular care, since clients may encounter many different servers via URLs, and since URLs are likely to be processed automatically, without user intervention. A client SHOULD have a user-configurable policy that controls which servers the client will establish LDAP sessions with and with which security mechanisms, and SHOULD NOT establish LDAP sessions that are inconsistent with this policy. If a client chooses to reuse an existing LDAP session when resolving one or more LDAP URLs, it MUST ensure that the session is compatible with the URL and that no security policies are violated.
Sending authentication information, no matter the mechanism, may violate a user’s privacy requirements. In the absence of specific policy permitting authentication information to be sent to a server, a client should use an anonymous LDAP session. (Note that clients conforming to previous LDAP URL specifications, where all LDAP sessions are anonymous and unprotected, are consistent with this specification; they simply have the default security policy.) Simply opening a transport connection to another server may violate some users’ privacy requirements, so clients should provide the user with a way to control URL processing.
Some authentication methods, in particular, reusable passwords sent to the server, may reveal easily-abused information to the remote server or to eavesdroppers in transit and should not be used in URL processing unless they are explicitly permitted by policy. Confirmation by the human user of the use of authentication information is appropriate in many circumstances. Use of strong authentication methods that do not reveal sensitive information is much preferred. If the URL represents a referral for an update operation, strong authentication methods SHOULD be used. Please refer to the Security Considerations section of [RFC4513] for more information.
The LDAP URL format allows the specification of an arbitrary LDAP search operation to be performed when evaluating the LDAP URL. Following an LDAP URL may cause unexpected results, for example, the retrieval of large amounts of data or the initiation of a long-lived
search. The security implications of resolving an LDAP URL are the same as those of resolving an LDAP search query.
6. Normative References
7. Informative References
8. Acknowledgements
The LDAP URL format was originally defined at the University of Michigan. This material is based upon work supported by the National Science Foundation under Grant No. NCR-9416667. The support of both the University of Michigan and the National Science Foundation is gratefully acknowledged.
This document obsoletes RFC 2255 by Tim Howes and Mark Smith. Changes included in this revised specification are based upon discussions among the authors, discussions within the LDAP (v3) Revision Working Group (ldapbis), and discussions within other IETF Working Groups. The contributions of individuals in these working groups are gratefully acknowledged. Several people in particular have made valuable comments on this document: RL "Bob" Morgan, Mark Wahl, Kurt Zeilenga, Jim Sermersheim, and Hallvard Furuseth deserve special thanks for their contributions.
Appendix A: Changes Since RFC 2255
A.1. Technical Changes
The following technical changes were made to the contents of the "URL Definition" section:
Revised all of the ABNF to use common productions from [RFC4512].
Replaced references to [RFC2396] with a reference to [RFC3986] (this allows literal IPv6 addresses to be used inside the <host> portion of the URL, and a note was added to remind the reader of this enhancement). Referencing [RFC3986] required changes to the ABNF and text so that productions that are no longer defined by [RFC3986] are not used. For example, <hostport> is not defined by [RFC3986] so it has been replaced with host [COLON port]. Note that [RFC3986] includes new definitions for the "Reserved" and "Unreserved" sets of characters, and the net result is that the following two additional characters should be percent-encoded when they appear anywhere in the data used to construct an LDAP URL: "[" and "]" (these two characters were first added to the Reserved set by RFC 2732).
Changed the definition of <attrdesc> to refer to <attributeSelector> from [RFC4511]. This allows the use of "*" in the <attrdesc> part of the URL. It is believed that existing implementations of RFC 2255 already support this.
Avoided use of <prose-val> (bracketed-string) productions in the <dn>, <host>, <attrdesc>, and <exvalue> rules.
Changed the ABNF for <ldapurl> to group the <dn> component with the preceding <SLASH>.
Changed the <extype> rule to be an <oid> from [RFC4512].
Changed the text about extension types so it references [RFC4520].

Reordered rules to more closely follow the order in which the elements appear in the URL.
"Bindname Extension": removed due to lack of known implementations.
A.2. Editorial Changes
Changed document title to include "LDAP:" prefix.
IESG Note: removed note about lack of satisfactory mandatory authentication mechanisms.
"Status of this Memo" section: updated boilerplate to match current I-D guidelines.
"Abstract" section: separated from introductory material.
"Table of Contents" and "Intellectual Property" sections: added.
"Introduction" section: new section; separated from the Abstract.
Changed the text indicate that RFC 2255 is replaced by this document (instead of RFC 1959). Added text to indicate that LDAP URLs are used for references and referrals. Fixed typo (replaced the nonsense phrase "to perform to retrieve" with "used to retrieve"). Added a note to let the reader know that not all of the parameters of the LDAP search operation described in [RFC4511] can be expressed using this format.
"URL Definition" section: removed second copy of <ldapurl> grammar and following two paragraphs (editorial error in RFC 2255). Fixed line break within "!" sequence. Reformatted the ABNF to improve readability by aligning comments and adding some blank lines. Replaced "residing in the LDAP server" with "accessible from the LDAP server" in the sentence immediately following the ABNF. Removed the sentence "Individual attrdesc names are as defined for AttributeDescription in [RFC4511]." because [RFC4511]’s <attributeSelector> is now used directly in the ABNF. Reworded last paragraph to clarify which characters must be percent-encoded. Added text to indicate that LDAP URLs are used for references and referrals. Added text that refers to the ABNF from RFC 4234. Clarified and strengthened the requirements with respect to processing of URLs that contain implemented and not implemented extensions (the approach now closely matches that specified in [RFC4511] for LDAP controls).
"Defaults for Fields of the LDAP URL" section: added; formed by moving text about defaults out of the "URL Definition" section. Replaced direct reference to the attribute name "*" with a reference to the special <alluserattrs> selector "*" defined in [RFC4511].
"URL Processing" section: removed.
"Examples" section: Modified examples to use example.com and example.net hostnames. Added missing ‘?’ to the LDAP URL example whose filter contains three null bytes. Removed space after one comma within a DN. Revised the bindname example to use e-bindname. Changed the name of an attribute used in one example from "int" to "four-octet" to avoid potential confusion. Added an example that demonstrates the interaction between DN escaping and URL percent-encoding. Added some examples to show URL equivalence with respect
to the <dn> portion of the URL. Used uppercase in some examples to remind the reader that some tokens are case-insensitive.
"Security Considerations" section: Added a note about connection reuse. Added a note about using strong authentication methods for updates. Added a reference to [RFC4513]. Added note that simply opening a connection may violate some users’ privacy requirements. Adopted the working group’s revised LDAP terminology specification by replacing the word "connection" with "LDAP session" or "LDAP connection" as appropriate.
"Acknowledgements" section: added statement that this document obsoletes RFC 2255. Added Kurt Zeilenga, Jim Sermersheim, and Hallvard Furuseth.
"Normative References" section: renamed from "References" per new RFC guidelines. Changed from [1] style to [RFC4511] style throughout the document. Added references to RFC 4234 and RFC 3629. Updated all RFC 1738 references to point to the appropriate sections within [RFC3986]. Updated the LDAP references to refer to LDAP Bis WG documents. Removed the reference to the LDAP Attribute Syntaxes document and added references to the [RFC4513], [RFC4520], and [RFC4510] documents.
"Informative References" section: added.
Header and "Authors’ Addresses" sections: added "editor" next to Mark Smith’s name. Updated affiliation and contact information.
Copyright: updated the year.
Throughout the document: surrounded the names of all ABNF productions with "<" and ">" where they are used in descriptive text.
Authors’ Addresses
Mark Smith, Editor
Pearl Crescent, LLC
447 Marlpool Dr.
Saline, MI 48176
USA
Phone: +1 734 944-2856
EMail: mcs@pearlcrescent.com
Tim Howes
Opsware, Inc.
599 N. Mathilda Ave.
Sunnyvale, CA 94085
USA
Phone: +1 408 744-7509
EMail: howes@opsware.com
Full Copyright Statement
Copyright (C) The Internet Society (2006).
This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is provided by the IETF Administrative Support Activity (IASA).
An in-depth report and analysis on the WordPress REST API
About the Authors
Tom Willmot is the co-founder and CEO of Human Made. He has over 10 years experience bringing open source technology to enterprise. He lives in the UK with his wife and daughter.
Joe Hoyle is the co-founder and CTO of Human Made, where he leads the development team and the overall technical direction of the company. He is a member of the WordPress REST API development team.
Contributors
Ryan McCue - WordPress REST API Team Co-Lead
Daniel Bachhuber - WordPress REST API Team
Siobhan McKeown - Writer & Editor
Michael Pick - Designer
Executive Summary
WordPress is an open-source content management system which is used to power millions of websites and blogs, and an increasing number of web applications.
It currently powers more than 25% of the top 10 million websites on the Internet. WordPress’ usability, extensibility, and mature development community make it a popular choice for web projects of all sizes.
WordPress, like many other CMSes, is monolithic. It provides everything you need to run a website and can be extended further with third-party plugins and themes. But on today's web, we are moving beyond the monolithic CMS. WordPress, along with others, is leading the charge to a future where the CMS acts as a central hub, consuming and aggregating content and data from other tools and services and, in turn, exposing its own content and data via APIs.
The WordPress REST API is a huge step towards this future. Exposing WordPress content and data as JSON via a standardised RESTful API unlocks your data and will enable an explosion in the number and complexity of integrations.
By embracing the WordPress REST API, you can more easily:
- separate your frontend delivery from the CMS
- power multiple frontends from the same content (think a website, app, Apple News, etc.)
- and use WordPress as part of complex multi-service workflows (like pushing content to a separate service for translation before pulling those translations back into WordPress).
The potential implications for your business are far-reaching, particularly for large custom builds and applications. WordPress provides content management and content capture, while making the data available to other frontend technologies. This permits engineering teams to work independently on discrete parts of a larger project, and allows for more stable third-party integrations.
With the REST API, WordPress stops being a web development tool used in isolation. It is one module that is available in a web developer’s toolkit; a building block to be used in many kinds of applications.
The Headless CMS
We look at the difference between a traditional and a headless CMS, how a headless CMS slots into web development, and discuss some of the advantages of using a headless CMS for your web project.
A headless CMS is used only for data capture, storage, and delivery, making it frontend agnostic. Its data can be displayed using any frontend technology, whether in a browser, mobile application, syndication, or elsewhere.
**A traditional CMS** deals with data collection, delivery, and display. WordPress, for example, has a backend where users can enter data. This data is stored in a MySQL database, retrieved from the database using PHP, and then displayed in the browser using the theme system.
**A headless CMS** decouples the theme system, allowing you to replace it with the frontend technologies of your choice. What remains is the data store and the web application for authors and editors, while the data itself is delivered to the frontend through an API.
Decoupling content management from frontend display
By decoupling content management from frontend display, a headless CMS allows developers to use any technology to display content. Developers are not locked into the templating engine provided by the CMS. The CMS might be written in PHP, but developers working in languages like JavaScript, Java, Ruby, and Swift can use an API to retrieve, store and display data. A frontend developer has complete control over the website or application’s markup and user experience, using client-side technologies to create smooth interactive experiences. It also means that if the frontend needs to be displayed in a new way (for example a redesign or to display content on a new device) the CMS can still store the data, removing the need for complex migrations.
Fast, interactive experiences
When you use a headless CMS there are two components: the CMS itself and the frontend display. The CMS focuses only on content management, without having to assemble formatted responses, while the client-side technology can quickly display that data in the browser. Using client-side technologies for display means that in-browser experiences are fast, acting in real-time, without having to wait for PHP queries to retrieve information from the database. There is a significant increase in performance when using JavaScript vs PHP: Node.js, for example, can handle many more requests than PHP due to its asynchronous event-driven nature. This can be especially useful when an application requires many connections open simultaneously.
One content management system, multiple frontends
With a traditional, monolithic CMS, data is simply displayed by the CMS itself. Data stored in a headless CMS is available for display in any context. You may want to use it for a website now, but later you may decide to use the same data for a desktop or touch screen application. The stored data is always available via the API.
Multi-service content pipelines
A headless CMS can be used to store all of the data for one site or application, or it can just be one element of a larger application that retrieves and aggregates data. This means that data can be integrated into existing workflows as just one layer. For example, it could be used just as a layer for translating content which is then pushed to another CMS.
What is a REST API?
We look at what the REST architectural style is, explore the elements that make an API RESTful, and consider some of the ways in which open APIs are changing the internet.
What is REST?
Representational State Transfer (REST) is a software architectural style for Application Programming Interfaces (APIs) that consists of guidelines and best practices for creating scalable web services. REST uses simple HTTP to make calls between machines.
This happens via a request/response mechanism between the server and the client. For example, a client, let’s say an Android application, makes a request for the most recent posts from the website. The server knows how to interpret this request, through REST, and satisfies the response by providing the most recent posts in a format understood by the client.
REST requests interact with the resources in your application (e.g. a Post or Page). These interactions are typically Reading, Creating, Updating, or Deleting. Combined with HTTP, REST requests are formed using four verbs:
- **POST**: Create a resource
- **GET**: Retrieve a resource
- **PUT**: Update a resource
- **DELETE**: Delete a resource
The data retrieved is supplied in a machine-readable format, often JSON in modern web applications.
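A minimal sketch of the four verbs using Python's standard library; the `example.com` host and the `/wp-json/wp/v2/posts` route are placeholders for whatever endpoint a given server exposes, and no request is actually sent here, only constructed:

```python
from urllib.request import Request

# The four REST verbs mapped onto a hypothetical posts endpoint.
base = "https://example.com/wp-json/wp/v2/posts"

create = Request(base, data=b'{"title": "Hello"}', method="POST")    # create
read   = Request(base + "/42", method="GET")                         # retrieve
update = Request(base + "/42", data=b'{"title": "Hi"}', method="PUT")  # update
delete = Request(base + "/42", method="DELETE")                      # delete

for r in (create, read, update, delete):
    print(r.get_method(), r.full_url)
```

Sending any of these with `urllib.request.urlopen(r)` would return the server's JSON response.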
What makes an API RESTful?
An API must have the following architectural features to be considered RESTful:
- **Client-server**: the client is separated from the server. This means that clients are not concerned with data storage and servers are not concerned with display. This ensures that data is portable and can be reused in multiple clients, and servers are simpler and more scalable.
- **Cacheable**: clients can, and should, cache responses to improve performance and to avoid contacting the server on every request.
- **Stateless**: the necessary state to handle the request is contained in the request itself, whether as part of the query parameters, URL, body, or headers.
- **Uniform interface**: information transferred via REST comes in a standardised form, creating a simplified, decoupled architecture.
- **Layered System**: the architecture is composed of hierarchical layers. Each component cannot “see” beyond its layer: a client cannot tell if it’s connected to the server or to an intermediary.
A separate, but closely-related concept is hypermedia. Hypermedia allows a client to more fully discover a REST API without needing to know anything about the structure of the API. It’s similar to hyperlinks on the human-readable web (which enable discovering new sites and content). The server provides the information the client needs to interact with it. This means that the client can interact with the server in complex ways without knowing anything beforehand about it.
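To make the hypermedia idea concrete, here is a sketch of a client following embedded links; the response shape mimics the `_links` convention the WordPress REST API uses, but the specific URLs and fields are invented for illustration:

```python
# Hypothetical hypermedia response: the server embeds the links the client
# needs, so related resources are discovered rather than hard-coded.
post = {
    "id": 42,
    "title": "Hello",
    "_links": {
        "self":   [{"href": "https://example.com/wp-json/wp/v2/posts/42"}],
        "author": [{"href": "https://example.com/wp-json/wp/v2/users/7"}],
    },
}

def follow(resource: dict, rel: str) -> str:
    # Take the first link of the requested relation type.
    return resource["_links"][rel][0]["href"]

print(follow(post, "author"))  # https://example.com/wp-json/wp/v2/users/7
```

The client needs to know only the relation name ("author"), not how the server structures its URLs.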
What is an open API?
Open APIs are publicly available APIs that give developers access to proprietary software information that they can make use of in their own software and applications. REST is the ideal architecture for creating an Open API for the web because, by using HTTP, it is built on the principles of the open web. To leverage an open REST API a developer just needs to make a HTTP request.
The impact of APIs cannot be overestimated; they are transforming the way businesses and services are run. For example:
- Around 25% of annual revenue of the fundraising platform JustGiving is API-driven
- In 2011, Twitter reported that they had more than one million applications registered, with a number of entire companies built off the API
- Data feeds from the Skyscanner API are used by startups like Hitlist, Go Euro, and Pintrips
- Hilton is making use of Uber’s API to allow guests to book rides from the Hilton Honors App
This aggregation of public data across different platforms enables the creation of feature-rich, powerful applications that do more than any individual product or service could do on its own.
Nomadbase is a service that facilitates remote working and the digital nomad way of life. Using public APIs, it aggregates data from social networking services to create a map of nomad locations across the world.
Our development team has a lot of experience with WordPress, and recognises its ability to provide a solid foundation to build an application on. Leveraging its users system for signups, the REST API for all external communication, and extremely common server requirements, we were able to get the prototype up and running in a matter of days, rather than weeks or months.
We used the REST API for not only the communication with the frontend application, but also inbound data coming from Facebook and Foursquare to collect user location data.
Joe Hoyle, Nomadbase
Why use WordPress and the REST API?
WordPress enabled the developers to get the first version of Nomadbase up and running quickly, providing both a stable central platform where data can be aggregated and an API for delivering the data to the frontend.
The build
Nomadbase uses APIs to gather geodata from Facebook, Swarm, Twitter, Instagram, and TripIt. This data is stored in custom tables in the MySQL database. Data is then sent over the WordPress REST API to a React frontend and displayed in the browser. Leaflet is used to create overlays on the map.
When a new user signs up for Nomadbase, data is requested from five different social networks. This leads to significant background processing. To speed this up, wp-cron is replaced by a system tool called Cavalcade, developed by Human Made for large-scale WordPress installations.
The next step for Nomadbase is a React Native iOS application that reuses code from the browser app. As all the data and user actions that exist for Nomadbase are processed by the REST API, no additional backend work needs to be done to build the new application.
What is the WordPress REST API?
We look at the specifics of the WordPress REST API, including its infrastructure and endpoints, its authentication methods and the concept of hypermedia. We also introduce you to the team behind it.
The WordPress REST API allows access to a website’s data, including users, posts, and taxonomies. In the past, developers needed to use WordPress’ built-in theme system and administration panel to display and edit content on a site.
The REST API decouples the WordPress backend from the frontend, allowing developers to use it as an application platform: WordPress is used for data entry and storage, and the frontend can be built in any programming language. The REST API transforms WordPress into a headless CMS.
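As an illustration of the decoupled flow, here is a minimal Python sketch of what a frontend client might do with the JSON it receives from the posts endpoint. The field names follow the shape of the core `wp/v2/posts` response; the post data itself is invented for illustration, and no network request is made.

```python
import json

# A trimmed example of the JSON a client might receive from
# GET /wp-json/wp/v2/posts. Field names follow the core posts endpoint;
# the post itself is invented for illustration.
sample_response = json.dumps([
    {
        "id": 42,
        "slug": "hello-world",
        "title": {"rendered": "Hello World"},
        "content": {"rendered": "<p>Welcome to the headless CMS.</p>"},
    }
])

def extract_posts(body):
    """Turn a raw JSON body into (id, slug, title) tuples for a frontend."""
    return [(p["id"], p["slug"], p["title"]["rendered"])
            for p in json.loads(body)]

print(extract_posts(sample_response))  # [(42, 'hello-world', 'Hello World')]
```

The frontend, whatever it is built in, only ever sees this JSON; WordPress itself never renders a page.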
**Infrastructure**
WordPress 4.4 contains the infrastructure for the WordPress REST API. This can be thought of as a construction kit for RESTful APIs in WordPress: it enables developers to build their own REST APIs, handling things like API discovery, request routing, arguments, JSON serialisation/deserialisation, and response codes. If you are building a website, application, theme or plugin, you can use the API by adding your own custom endpoints.
**Endpoints**
Endpoints are functions that are available through the API: they’re the places where developers can do something with the CMS, whether that’s creating, retrieving, updating or deleting (CRUD) data. Initially this covers the four core data types in WordPress (posts, comments, terms, and users), although coverage will grow in future versions of WordPress to support all data on the site.
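As a rough sketch of how CRUD operations map onto routes, the following hypothetical Python helper builds the HTTP method and path a client would use for the core data types. The route shapes assume the `wp/v2` namespace; `terms` is mapped to the `tags` route purely for illustration, and nothing here performs a real request.

```python
# Hypothetical helper mapping CRUD operations on the four core data types
# to an HTTP method and route. Route shapes assume the wp/v2 namespace.
BASE = "/wp-json/wp/v2"

ROUTES = {"posts": "posts", "comments": "comments",
          "terms": "tags", "users": "users"}

def route(resource, operation, item_id=None):
    """Return (method, path) for a CRUD operation on a core resource."""
    path = f"{BASE}/{ROUTES[resource]}"
    if item_id is not None:
        path += f"/{item_id}"
    method = {"create": "POST", "retrieve": "GET",
              "update": "POST", "delete": "DELETE"}[operation]
    return method, path

print(route("posts", "retrieve"))    # ('GET', '/wp-json/wp/v2/posts')
print(route("posts", "update", 42))  # ('POST', '/wp-json/wp/v2/posts/42')
```

Custom endpoints added through the infrastructure described above would extend this route table rather than replace it.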
Authentication
A major challenge around building a REST API is authentication: how does an API know that a user should be allowed to update content on a site, for example? Who should be allowed to retrieve data? Under what conditions? The WordPress REST API uses two forms of authentication:
- **Cookie** - this is the basic authentication method used in WordPress. When you log into your dashboard a cookie is set in your browser. This method is only viable when the current user is logged into WordPress and that user has the capability to perform the action requested.
- **OAuth** - this is the main authentication method used for external clients, i.e. any third-party site or application that wants to interact with the API. With OAuth, logged-in users can authorise clients to act on their behalf. Clients are issued with OAuth tokens so they can interact with the API. The REST API uses OAuth 1.0a rather than OAuth 2.0 so that it can be used by all WordPress websites: OAuth 2.0 requires HTTPS, which not every WordPress site uses.
In addition, there is a Basic Authentication method for external clients. However, this is only recommended for development environments as it involves passing your username and password on every request, and giving your credentials to clients.
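Basic Authentication boils down to attaching a static header, derived from your username and password, to every request, which is exactly why it is only recommended for development. A minimal Python sketch:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header Basic Auth sends on *every* request --
    the credentials travel with each call, hence the dev-only advice above."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("admin", "secret"))
# {'Authorization': 'Basic YWRtaW46c2VjcmV0'}
```

Note that the encoding is trivially reversible: anyone who sees the header has the password, so it should never cross the wire without HTTPS.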
Team
The WordPress REST API has contributions from 72 developers. However, the team has had four core members:
- **Ryan McCue** (Human Made)
Co-Lead of the REST API
- **Rachel Baker** (Wirecutter)
Co-Lead of the REST API
- **Joe Hoyle** (Human Made)
- **Daniel Bachhuber** (Hand Built)
Why use the WordPress REST API?
We explore some of the reasons that you may want to use a headless WordPress with the REST API in your own projects, drawing out specific technical and project management-related benefits.
The WordPress REST API allows you to use WordPress as a headless CMS. Developers can interact with WordPress independently of the theme system, using it only as a data storage and delivery platform. There are many benefits to doing this.
Create context-specific solutions
Around 25% of all websites use WordPress. These websites are PHP-based, with frontends built with the WordPress theme system. The API frees developers and allows them to use any technology that will solve a problem in their specific context. **WordPress no longer has to be concerned with the frontend:** it can just deliver data to any frontend technology. A developer can take data from a WordPress website and display it using the technology of their choice, whether that’s for a website, Android application, iOS application or whatever context the data is needed.
Reusable, portable content
The content entered into WordPress is no longer limited to being displayed on a WordPress website. A REST API-powered website has content which is infinitely portable. Your content authors only need to enter data in one place. Once it has been authored and published in WordPress, it’s now available via the API and can be delivered to websites, web applications, and mobile and desktop applications.
Separation of concerns
In traditional WordPress development, where the frontend and backend are tightly coupled, frontend developers need to be familiar with at least some aspects of WordPress. This makes it difficult to hire and work with purely frontend developers. In a decoupled environment this is no longer a problem. Different teams can work on different parts of a project while having access to the same data: a backend team can work on WordPress and the database, a team can build the frontend in JavaScript or another web technology, you can have an iOS team and an Android team. Your JavaScript developer no longer needs to learn PHP to work with WordPress, and your WordPress developer no longer needs to tinker with JavaScript. This widens the pool of development talent available to work on your website or application, and streamlines project management.
Familiar backend for authors and publishers
One of the reasons WordPress has been so successful is that it provides an easy-to-understand interface for non-technical users. With the REST API, you don’t have to decide between using your frontend technology of choice and giving your authors the admin interface they want. Authors and editors can work in the WordPress admin and the data is delivered to the frontend by the API. You have the advantage of providing an admin interface which many authors will already be familiar with, reducing the need for training and re-training, and letting authors quickly start adding content.
Integrate WordPress as one part of a content-authoring workflow
WordPress may only be suitable for one aspect of your website or application. The REST API allows you to use WordPress for just those elements that it is suitable for. The New York Times, for example, uses WordPress for its live coverage platform: data is received via WebSocket from a custom-built service, rendered in React, and served to the frontend via the WordPress REST API. In August 2015, the paper even added Slack to its publishing workflow. This makes WordPress just one module in a larger stack, making it more available to the wider web development community for smaller, specific tasks.
WordPress as a central repository
The web is increasingly API-driven, with websites and services aggregating data. The REST API makes it possible for WordPress to be the central place that brings all of this data together. This means that all of your services and data can be centralised while providing your authors with a straightforward interface that they are familiar with. This also provides a standard platform for further functionality with WordPress’ plugin system.
Develop with live data
When data is exposed through the REST API it can be used by developers in their development environment. Content can be added to the CMS and is available to developers whether they are working on the frontend, the admin, or any applications.
ustwo wanted a decoupled website with a WordPress backend and a frontend built with React.
“We chose WordPress as we wanted to have an established open source CMS so that we can be confident that we’ll never be left without support or ability to change.
To fulfil our design ambitions we decided to build our frontend as a single-page application, which was made possible with the emerging WP-API.
Daniel Demmel, ustwo
The build
The ustwo website is a single-page application: the frontend is built using React and WordPress manages the content. React was used because it allows for isomorphic rendering (pages can be rendered on the server or by the client). There is a Node.js server that enables server-side rendering.
On the WordPress side, a custom page builder plugin has authors enter content in a modular fashion, ensuring that the content is portable to different contexts.
The infrastructure for the REST API is used along with a bespoke API comprising custom endpoints that deliver content in JSON format to the frontend.
How the REST API will change WordPress development
We look at some of the ways that the REST API will change WordPress development and the impact that this will have both for WordPress-based websites and applications, and for the people creating them.
WordPress as part of a larger stack
WordPress has a familiar and easy to use user interface which authors want to use for managing and publishing content. With the REST API, you can provide this interface to your authors without compromising on the rest of your stack.
New approaches to project management
The separation of concerns that come with a REST API project mean approaching project management in a new way. Developers will independently focus on different aspects of the website or application, working with live data retrieved using the API.
The WordPress developer as a backend specialist
There will be an increase in the number of WordPress developers who are backend specialists, focusing on the admin screens and the database, while leaving the frontend layer to frontend developers.
Permeation of WordPress outside of PHP communities
As a single module in a larger stack, WordPress will be used increasingly outside of its traditional community. The REST API allows developers to create websites and applications in any language without having to roll their own CMS.
The emergence of funneled, role-based admins
The REST API allows developers to create funneled administration experiences that focus on a particular user doing particular actions. These focused admins will remove clutter and empower the user to do exactly what they need to do.
The enhancement of built-in WordPress functionality
The REST API makes it easier for developers to enhance functionality in the WordPress admin. Developers can create client-side features in the admin that are more advanced and more performant than can be achieved with PHP.
Explorations in non-GPL products
The absolute separation of concerns means that frontend products that retrieve data from the API will not need to be GPL. It’s unlikely, however, that we’ll see a vast increase in API-powered themes due to the challenges of rebuilding native WordPress functionality on the frontend.
Challenges presented by the REST API
We explore the challenges that will be brought about by the REST API and discuss some of the ways that these challenges will be addressed both in individual projects and by the wider WordPress community.
The introduction of the REST API marks a new era in WordPress development. Not all the ways that the REST API will change WordPress are clear, but some challenges are already emerging.
Loss of core functionality
A REST API driven website loses frontend features that are linked to the WordPress theme system, like menu management and post previews. Frontend developers need to take responsibility for re-implementing features that come for free with WordPress. If they are not rebuilt, users must do without them. When writing project specifications for an API-driven project, it will become necessary to be very specific about the features that the client needs and not just assume that because they are in WordPress they are available.
To solve this problem, we anticipate the emergence of REST API base themes that rebuild WordPress features on the frontend. These boilerplate themes will be written in different languages and will provide a starting point for frontend developers to build on.
Disempowers WordPress site builders
In addition to its ease of use, WordPress’ strength is that it is easy to set up a website. Through WordPress, many people gain experience of PHP, CSS, and HTML, gaining confidence to make changes to the frontend of their website. The REST API completely decouples the frontend from the backend, disempowering those users, and making the frontend only editable by developers.
For this reason, it is unlikely that we will see a major disruption to the WordPress theme market. Instead, the REST API will be of most significance to large custom builds and WordPress-based SaaS products.
The necessity for structured, portable data
A headless WordPress requires data that can be used across multiple contexts. This means creating and storing it in a way that is completely frontend agnostic. In the first instance, you may just be using data on a website, but you may want to make it available later to a native application. The focus here is on content management as opposed to web publishing. This data needs to be structured in a modular manner, separate to the CSS and HTML. For this reason, REST API-driven sites will not rely on the WYSIWYG capability in TinyMCE for page layouts, instead using content structured by modular page builders.
WordPress’ commitment to backwards (and forwards) compatibility ensures that data produced by the API will continue to be readable and usable well into the future. This means that you can safely store it knowing that it will continue to be available through a well-supported API. In addition, the WordPress REST API is open, ensuring that your data can be moved out of your site using standard tools.
Dealing with progressive enhancement
In an increasingly JavaScript-driven world, progressive enhancement is a challenge that has to be addressed. Some people have JavaScript disabled in their browser, either because they use assistance technologies, because of personal preference, or because the organisation they work for requires it to be turned off. If content from a REST API driven WordPress website is delivered to a JS-powered frontend, these people will simply see a blank page.
Developers need to address these issues to ensure that the web stays accessible. One method is to render frontend templates on the server using a technology like Node.js, and then enhance the website on the frontend using client-side JavaScript. This setup, however, requires an additional server, and developers with the experience to implement it.
npm wanted to use the REST API to deliver custom brochure pages and upsell boxes on their website.
Security was of paramount importance to us and as a result, we needed a headless WP instance hosted under the same constraints as in our production environment. This meant reliance on the WordPress API to deliver authored content via JSON to our build, giving us the ability to parse and display CMS-generated content without having to grant access to any non-whitelisted IP Addresses.
Nick Cawthon, npm
Why use WordPress and the REST API?
npm wanted to use WordPress as a central repository for their documentation and their product pages. Its straightforward interface means that content authors can easily add data, which is delivered to the client in JSON format.
The build
The npm website has a WordPress backend and admin. A bespoke API built of custom endpoints serves content in JSON format to a Node.js server. This renders the final HTML and sends it to the browser where Handlebars renders the templates. The API doesn’t just send the data: it sends rendered HTML along with scripts and stylesheets. This is cached by the Node.js server so that the website stays up even if WordPress is unavailable. It also means that the website stays fast without having to expend effort scaling the database.
Some customisations recreate the post preview feature of WordPress: parts of the CSS templates and Handlebars frontend are used to create a basic WordPress theme, which authors use to preview posts before they are published and pushed to the frontend.
Resources
The WordPress REST API
- [Official website and documentation](wp-api.org)
- [WordPress core discussion about the REST API](hmn.md/wp-api/core/)
Client Libraries
- [Node.js](hmn.md/wp-api/node/)
- [Backbone.js](hmn.md/wp-api/backbone/)
- [AngularJS](hmn.md/wp-api/angular/)
- [PHP Client](hmn.md/wp-api/php-client/)
- [C#](hmn.md/wp-api/c/)
Authentication
- [OAuth](hmn.md/wp-api/oauth/)
- [Basic Authentication](hmn.md/wp-api/basic-auth/)
Tools
- [WP-CLI client](hmn.md/wp-api/cli/)
- [API Console](hmn.md/wp-api/console/)
- [WP JSON API Connect](hmn.md/wp-api/connect/)
Other resources
- [Picard React theme](hmn.md/wp-api/picard/)
- [Feeling Restful theme](hmn.md/wp-api/feeling-restful/)
- [ustwo.com frontend](hmn.md/wp-api/ustwo/)
Join the REST API team along with speakers from WIRED, Automattic, The New York Times, and Bocoup, at A Day of REST, a one-day conference where you can learn how to use the REST API in your project: Conway Hall, London. 28th January 2016.
Work with us
Human Made is an enterprise-level WordPress development firm, based in the UK but with employees and clients worldwide.
Our clients include NewsUK, AirBnB, Skype, and Yell. We’re the people behind Nomadbase and Happytables. Our developers have led the development of the WordPress REST API and we’re already making use of it in our own products and in client projects.
Want to use the WordPress REST API in your project? Contact Human Made for:
- enterprise-level development
- bespoke training
- consultancy
- hosting
Email us hello@hmn.md or give us a call at +44 (0) 1629 628082
All content licensed under Creative Commons BY-SA 4.0
Trade-off evaluation in embedded system design via co-simulation
Claudio Passerone, Luciano Lavagno, Claudio Sansoe
Dipartimento di Elettronica, Politecnico di Torino, Torino, ITALY 10129
Tel: +39-11-5644111, Fax: +39-11-5644099
email: alcor@pol2k.polito.it, lavagno@polito.it, sansoe@polito.it

Massimiliano Chiodo
Alta Group of Cadence Design Systems, Inc., Sunnyvale, CA 94086
Tel: +1-408-5234632, Fax: +1-408-5234601
email: maxc@altagroup.com

Alberto Sangiovanni-Vincentelli
Department of EECS, University of California, Berkeley, CA 94720
Tel: +1-510-6424882, Fax: +1-510-6435052
email: alberto@ees.berkeley.edu
Abstract—Current design methodologies for embedded systems often force the designer to evaluate, early in the design process, architectural choices that will heavily impact the cost and performance of the final product. Examples of these choices are hardware/software partitioning, choice of the micro-controller, and choice of a run-time scheduling method. This paper describes how to help the designer in this task by providing a flexible co-simulation environment in which these alternatives can be interactively evaluated.
I. Introduction
One of the major problems facing an embedded system designer is the multitude of different design options, which often lead to dramatically different cost/performance results. In this paper we address the problem of trade-off evaluation via simulation, rather than via mathematical analysis. We believe that by providing efficient tools, supporting a variety of target implementation architectures and flexible mechanisms for evaluating design choices, we will help the designer in this difficult task more effectively than by providing relatively inflexible partitioning algorithms.
The problems that we must solve include:
• fast co-simulation of mixed hardware/software implementations, with accurate synchronization between the two components.
• evaluation of different processors and processor architectures, with different speed, memory size and I/O characteristics.
• co-simulation of heterogeneous systems, in which data processing is performed simultaneously with reactive processing, i.e., in which regular streams of data can be interrupted by urgent events. Even though our main focus is on reactive, control-dominated systems, we would like to allow the designer to freely mix the representation, validation and synthesis methods that best fit each particular sub-task.
Since we want to explore trade-offs, we have to use an implementation independent representation. Among the available ones, we use a network of CFSMs, which are Finite State Machines extended to handle integer arithmetic ([CGH94]). The CFSM model uses a locally synchronous, globally asynchronous paradigm, that
• determines exactly how a module behaves whenever it performs a transition, but
• poses little restriction on the speed of communication between modules or on the relative timing of module executions (the “scheduling policy”).
By using CFSMs we can leverage the work done in software synthesis so that if we wish to explore a particular hardware/software partition, the software part can be automatically generated optimally. The performance of the system and its correctness can be assessed by running a co-simulation. Because of the semantics of CFSMs and the architecture supported, there is only one resource (microprocessor) that can run code, but there is no limitation on the hardware blocks. Hence all concurrent hardware tasks can be simulated in concurrent mode, while software tasks have to be scheduled onto the processor.
Simulating this implementation requires knowing how long each of the software tasks takes, and what scheduling policy we are going to use. To solve the first problem we need an estimate of the running time of each task.
To solve the second one we need to know the scheduling policy, which can either be given by the user or be automatically generated as well. With this information we can use any Discrete Event simulator. It is our intention to leverage existing tools as much as possible, and the Ptolemy system ([BHLM96]) is appropriate for our problem except for the differentiation between software and hardware tasks, since in the Ptolemy DE simulation every task is considered "hardware". In this paper we show an algorithm that allows us to co-simulate CFSM tasks with a given microprocessor architecture and a given scheduling policy without changing any of the Ptolemy code, but by simply having all software tasks implement a distributed scheduler.
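The scheme above can be illustrated with a toy discrete-event loop (an illustrative model in Python, not the Ptolemy or POLIS implementation): hardware tasks fire as soon as their events are due, while software tasks share a single CPU and re-queue themselves when it is busy, which is the essence of the distributed scheduler. Task names and durations are invented.

```python
import heapq

# Toy discrete-event simulation: hardware tasks run concurrently, software
# tasks contend for one CPU. Names and durations are invented.
events = []                 # heap of (time, name, duration, is_sw)
cpu_free_at = 0
log = []

def schedule(t, name, dur, is_sw):
    heapq.heappush(events, (t, name, dur, is_sw))

schedule(0, "hw_filter", 5, False)
schedule(0, "sw_ctrl", 3, True)
schedule(1, "sw_ui", 2, True)

while events:
    t, name, dur, is_sw = heapq.heappop(events)
    if is_sw and t < cpu_free_at:
        # CPU busy: the software task defers itself until the CPU is free --
        # the "distributed scheduler" behaviour sketched in the text.
        schedule(cpu_free_at, name, dur, is_sw)
        continue
    if is_sw:
        cpu_free_at = t + dur
    log.append((t, name))

print(log)
```

In the run above, `sw_ui` is ready at time 1 but only starts at time 3, when `sw_ctrl` releases the processor, while `hw_filter` runs unconstrained.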
Past work in the area of performance prediction and trade-off evaluation has focused mostly on elaborate cost models to guide automated partitioning algorithms, or on co-simulation methods in which a rather detailed model of the processor may be required.
Specifically, co-simulation requires to satisfy two conflicting requirements:
1. accurate modeling of the interaction between software and hardware, which in the limit requires cycle-accurate execution of a stream of instructions on a hardware model (emulator) or on a software one (simulator).
2. fast execution of application code in a flexible analysis and debugging environment, which in the limit requires compiling and running the embedded software on a host workstation, somehow simulating its interaction with the hardware and with the environment.
A first class of co-simulation methods, proposed for example by Gupta et al. in [GJM92], relies on a single custom simulator for hardware and software. This simulator uses a single event queue, and a high-level, bus-cycle model of the target CPU.
A second class, described by Rowson in [Row94], loosely links a hardware simulator with a software process. Synchronization is achieved by using the standard interprocess communication mechanisms offered by the host Operating System. One of the problems with this approach is that the relative clocks of software and hardware simulation are not synchronized, thus requiring the use of handshaking protocols. This may impose an undue burden on the implementation, e.g. if hardware and software do not need such handshaking since the hardware part in reality runs much faster than in the simulation.
A third class, described in [HHM93], keeps track of time in software and hardware independently, using various mechanisms to synchronize them periodically. If the software is master, then it decides when to send a message, tagged with the current software clock cycle, to the hardware simulator. If the hardware time is already ahead, the simulator may need to back up, which is a capability that few hardware simulators currently have. If the hardware is master, then the hardware simulator calls communication procedures which in turn call user software code.
Our approach is different from those listed above, because:
1. it allows different components of the system to be scheduled independently, without requiring to write a custom environment,
2. software and hardware are simulated together in a unified environment, with the same debugging interface,
3. a bus-cycle model of the target processor is not required, yet a satisfactory level of accuracy is achieved.
The main emphasis of our work is on speed, both during simulation (a speed exceeding 1 million clock cycles per second can be achieved on a general-purpose workstation) and when the user changes some architectural parameters (changing the target processor or the hardware/software partition takes about 1 second). This is possible because software is compiled directly on the host workstation microprocessor, and the actual running time on the target microprocessor is determined via estimation, before starting the simulation.
The paper is organized as follows. Section II describes our co-simulation methodology in detail. Section III shows with an example how co-simulation can be used to interactively evaluate the performance of various partitions of a system under various operating conditions. Section IV concludes the paper and outlines opportunities for future research.
II. Co-Simulation methodology
A. The co-simulation environment
Our co-simulation and trade-off evaluation method uses an existing co-design environment for reactive embedded systems, described in [CGH+94], for synthesizing software and hardware, and for analyzing their performance. The POLIS system is centered around a single Finite State Machine-like representation, which is well suited to our target class of control-dominated systems. A Co-design Finite State Machine (CFSM), like a classical Finite State Machine, transforms a set of inputs into a set of outputs with only a finite amount of internal state. The difference between the two models is that the synchronous communication model of concurrent FSMs is replaced in the CFSM model by a finite, non-zero, a priori unbounded reaction time. Each element of a network of CFSMs describes a component of the system to be modeled. One of the purposes of co-simulation is exactly to attach timing information to this originally untimed specification, by means of partitioning and profiling.
One of the major strengths of POLIS is the ability to synthesize both hardware and software components starting from the common model of CFSMs.
- Hardware blocks are mapped into an abstract hardware description format, namely BLIF ([SSL+92]).
- Software blocks are mapped into a software structure that includes a procedure for each CFSM, together with a simple Real-time Operating System. A timing estimator quickly analyzes the program and reports code size and speed characteristics. The algorithm is similar to that used by [PS91], but requires no user input. The estimator is a key component of our co-simulation methodology, because it allows us to obtain accurate estimates of program execution times on any characterized target processor.
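The estimator's role during simulation can be pictured as a trivial accumulator of pre-computed per-statement costs. The sketch below is illustrative only (the cost figures are made up, not the paper's benchmark data):

```python
# Sketch of the cycle-annotation idea: each generated C statement carries
# a pre-computed cycle cost for the target processor, and simulation
# simply accumulates these costs instead of interpreting target code.
# COST values here are invented for illustration.
COST = {"assign": 2, "branch": 3, "call": 8}

def run_annotated(trace):
    """Sum the estimated clock cycles of an executed statement trace."""
    cycles = 0
    for stmt in trace:
        cycles += COST[stmt]
    return cycles

estimate = run_annotated(["assign", "branch", "assign", "call"])
```

Because the costs are attached at code-generation time, changing the target processor only means swapping in a different cost table, with no re-analysis of the specification.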
Ptolemy ([BHLM90]) is a complete design environment for simulation and synthesis of mixed hardware/software data-dominated embedded systems. Here we will concentrate on its simulation aspects. Ptolemy treats the system to be designed as a hierarchical collection of objects, described at different levels of abstraction and using different semantic models to communicate with each other:
- Each abstraction level, with its own semantic model, is called a “domain” (e.g., data flow, logic, ...).
- Atomic objects (called “stars”) are the primitives of the domain (e.g., data flow operators, logic gates, ...).
- “Galaxies” are collections of instances of stars or other galaxies. Instantiated galaxies can possibly belong to domains different than the instantiating domain.
Each domain includes a scheduler, which decides in which order stars are executed (both in simulation and in synthesis). In particular, we used the Discrete Event (DE) domain of Ptolemy to implement the event-driven communication mechanism among CFSMs. It has a notion of global time, and the scheduler maintains a global event queue where events are ordered based on their time stamps; at any given instant the event with the smallest time stamp is taken from the queue, and the simulation code of the stars which have that event as input is executed (the stars are “fired”). This domain is event-driven, rather than data-driven as most other domains in Ptolemy, and hence seems the most appropriate for our purposes.
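The global event-queue mechanism can be sketched minimally as follows (illustrative Python, not Ptolemy's actual implementation):

```python
import heapq

class DEScheduler:
    """Toy discrete-event scheduler: always fires the star whose pending
    event has the smallest time stamp, as in Ptolemy's DE domain."""
    def __init__(self):
        self._queue = []   # entries: (time_stamp, seq, star, event)
        self._seq = 0      # insertion counter breaks ties at equal times

    def post(self, time_stamp, star, event):
        heapq.heappush(self._queue, (time_stamp, self._seq, star, event))
        self._seq += 1

    def run(self):
        fired = []
        while self._queue:
            t, _, star, event = heapq.heappop(self._queue)
            fired.append((t, star, event))   # "fire" the star on this event
        return fired

sched = DEScheduler()
sched.post(2.0, "pwm", "tick")
sched.post(1.0, "filter", "sample")
order = sched.run()   # fires "filter" (t=1.0) before "pwm" (t=2.0)
```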
B. CFSM co-simulation in Ptolemy
The Ptolemy scheduler in the DE domain fires stars as if they were executed concurrently. Thus it does not directly provide a way to simulate CFSMs implemented in software and running on a limited amount of computational resources (in this paper we assume that a single CPU is available). Our goal was to modify the DE scheduler behavior without changing its code, to maintain compatibility with the original version. Thus we let the scheduler fire stars in its own preferred order, but every star may or may not actually execute the main part of its code, based on global information which characterizes the shared resources. All this is accomplished in a transparent way so that the Ptolemy scheduler sees a world of concurrent stars, while the software stars see the POLIS scheduling policy.
Having met this goal, it is now possible to simulate in Ptolemy a system designed using POLIS: this is accomplished by generating a proper description of every CFSM and loading it into Ptolemy together with the network of interconnections. It is then possible, through the nice graphical interface provided by Ptolemy, to evaluate trade-offs between hardware and software solutions, and to visualize the overall system behavior.
The detailed design flow for the co-simulation and trade-off analysis phases is as follows:
1. The control/data flow graph of every CFSM in the system specification is built, and the corresponding C code is generated (as described in [CGH95]). The C code also includes run time estimations for each C code statement, based on information derived from benchmark analysis of the target processor ([SSV96]).
2. The Ptolemy language source code for every CFSM is generated. This in turn includes the C code generated by the previous step.
3. All CFSMs are loaded into Ptolemy. Each of them is a single star, with input and output portholes corresponding to CFSM inputs and outputs.
4. The network of interconnections between stars (CFSMs) is created in the Ptolemy environment.
5. Each star is assigned (by interactively modifying one of its properties or one of the properties of a galaxy above it in the hierarchy):
- an implementation, either software or hardware, and
- a priority, used by the software scheduler.
Hardware stars run concurrently and terminate in a single clock cycle. Software stars are mutually exclusive, and use the run time estimation to determine how long it takes to emit outputs and complete firing of a single transition of the CFSM. Since the underlying model for both hardware and software is the same, changing the implementation can be done at run-time by just accumulating or not the timing information.
6. A single system-wide parameter (also modifiable interactively) describes which one of the pre-characterized processors must be used for cycle counting. This provides a mechanism to easily change the target processor for a given set of simulation stimuli, without the need to re-analyze or re-compile the specification. In the same way it is possible to specify the scheduling policy best suited for the given application. If the scheduler is priority-based, then it can use the priority level assigned to each star.
7. The simulation is started with appropriate stimuli generators and output monitors, to check the behavior of the system. Multiple simulations can be compared to evaluate timing constraint satisfaction, run time, processor occupation, and other interesting pieces of information.
For example, in Section III we will describe how the CPU utilization can be monitored, to detect and analyze potential overload conditions.
C. Scheduling policy implementation
The solution that we have chosen does not require modifying the DE scheduler. From now on, we will call our own scheduling policy, implemented by an automatically generated procedure on top of the Ptolemy DE scheduler, the software scheduler. Each simulation cycle (identified by an integer number, and directly corresponding to a simulated system clock cycle) is divided into three phases:
1. Request phase, in which all stars receive events, and request from the software scheduler access to the processor, re-scheduling themselves at the grant phase.
2. Grant phase, in which only one star is granted access to the processor and executes its user code, while all other enabled stars re-schedule themselves at the update phase. The identifier of the star which currently runs on the processor is kept in a shared variable called star.ack. This star also computes the time at which the processor will become available again, in a shared variable called next, by accumulating estimated clock cycles during the execution of user code and RT-OS calls.
3. Update phase, in which all enabled stars re-schedule themselves at the next time the processor is available.
The request phase is characterized by an integer simulation time, while the grant and update phases are characterized by fractional simulation times. All events are received and emitted at integer times.
Scheduling is thus performed in the following way. Whenever a software star receives an event:
- If the current time is greater than or equal to next, then the processor is available:
- If the current time identifies a request phase, then the star sends the request to the software scheduler, and re-schedules itself at the next grant phase.
- If the current time identifies a grant phase, then the star must check the variable star.ack to know if it has been chosen by the software scheduler. In this case, it executes the user-specified part of its own code, accumulating estimated clock cycles depending on the sequence of instructions that is executed, and sets the variable next according to the clock cycle estimate. Otherwise it must re-schedule itself again at the next update phase (to make sure that the selected star has had time to execute its code and update the variable next).
- If the current time identifies an update phase, then the star reads the variable next and re-schedules itself to try to get the processor at that time.
- If the processor is not available, but the priority of the star is greater than that of the currently executing star, and the chosen scheduling policy allows interrupts (i.e. it is pre-emptive), then an interrupt occurs. The variable next is incremented by the estimated execution time of the interrupting star, and the interrupted one is prevented from emitting output events until the end of the interrupt. Interrupts may be arbitrarily nested, and can only cause delays in the interrupted stars, without changing their behavior (input variables to stars are buffered in our software implementation scheme, to improve the predictability of the system behavior).
- If the processor is not available and the priority of the star is less than or equal to that of the currently executing star, or if the software scheduling policy is non-pre-emptive, then the star must re-schedule itself at time next. Hence, stars with the same priority level do not interrupt each other.
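The grant-phase decision above can be condensed into a small sketch (illustrative Python; the function and class names are ours, though the variable next mirrors the paper's shared variable):

```python
class Star:
    def __init__(self, name, priority, cycles):
        self.name, self.priority, self.cycles = name, priority, cycles

class CPU:
    def __init__(self):
        self.next = 0.0      # time at which the processor becomes free
        self.current = None  # star currently granted the processor

def try_run(star, now, cpu, preemptive=True):
    """Simplified grant-phase decision: run, interrupt, or re-schedule
    at time 'next'."""
    if now >= cpu.next:                      # processor available
        cpu.current = star
        cpu.next = now + star.cycles         # accumulate estimated cycles
        return "run"
    if preemptive and cpu.current and star.priority > cpu.current.priority:
        cpu.next += star.cycles              # interrupt stretches 'next'
        return "interrupt"
    return ("wait", cpu.next)                # same/lower priority must wait

cpu = CPU()
r1 = try_run(Star("pwm", 1, 50), 0.0, cpu)      # granted immediately
r2 = try_run(Star("engine", 2, 10), 5.0, cpu)   # higher priority: interrupt
r3 = try_run(Star("fuel", 1, 5), 7.0, cpu)      # equal priority: wait
```

Note how an interrupt only delays the interrupted star by pushing next forward, matching the statement that interrupts cause delays without changing behavior.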
The communication queues among stars are forced to hold at most one event, to match the CFSM communication model using one-place buffers. This means that events may be overwritten, if they are emitted twice without being detected. This is legal in the CFSM model of computation, but can optionally be logged to a file, because often losing events means violating timing constraints and is interesting for the designer.
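A one-place buffer with overwrite logging can be sketched as follows (illustrative Python, not POLIS code):

```python
class OnePlaceBuffer:
    """CFSM communication queue holding at most one event; a second
    emission before detection overwrites (and optionally logs) the first."""
    def __init__(self):
        self.slot = None
        self.log = []

    def emit(self, event):
        if self.slot is not None:
            # Legal in the CFSM model, but worth logging: a lost event
            # often signals a violated timing constraint.
            self.log.append(("overwritten", self.slot))
        self.slot = event

    def detect(self):
        event, self.slot = self.slot, None
        return event

buf = OnePlaceBuffer()
buf.emit("pulse#1")
buf.emit("pulse#2")          # overwrites pulse#1 before it was detected
got = buf.detect()
```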
The implementation (hardware or software) of each star can be dynamically and independently changed during a Ptolemy session, by just updating a parameter of the star or of a galaxy enclosing it in the hierarchy. Thus it is possible to experiment with different solutions in a very straightforward manner and in a short period of time. In fact, it is not necessary to rebuild the system simulation model; it is sufficient to execute a new simulation of the same model.
III. An application example
We consider an application from the automotive domain: a dashboard controller. It takes as inputs pulses coming from the sensors on the wheels and on the engine, and data about fuel level and water temperature. It processes the inputs and drives, using pulse-width modulated signals, a set of dials.
We evaluated different implementation choices, under various possible operating conditions. The timing constraints, which in this case are relatively soft, derive mainly from the need not to miss any incoming pulse. For the engine this means up to 5000 pulses per second, while for the wheel this means up to 16 pulses per second. Outputs must be produced at a frequency of at least 100 Hz and with a maximum jitter of 100 microseconds to drive the gauge coils. In this case, we first assumed to use a Motorola 68HC11, because the timing estimate available from synthesis (379 clock cycles at most for the most time critical task, the engine pulse recorder) showed that we could hope to satisfy the requirements with a 1 MHz processor.
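A back-of-envelope check (our arithmetic, not from the paper) shows how tight this budget is at the peak engine pulse rate, anticipating the missed deadlines reported below for the all-software partition:

```python
# Illustrative worst-case budget check for the 1 MHz 68HC11.
clock_hz = 1_000_000        # 1 MHz clock
pulses_per_s = 5_000        # maximum engine pulse rate
task_cycles = 379           # worst case for the engine pulse recorder

budget = clock_hz // pulses_per_s   # cycles between consecutive pulses
headroom = budget - task_cycles     # negative: software alone cannot keep up
```

At lower engine speeds the pulse rate drops and the software implementation has slack, which is why the constraint violations only appear under peak load.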
The system was modeled using 13 CFSMs, each specified in ESTEREL, for a total of 2500 lines of source code and 10 KBytes of compiled object code for the final implementation. Their interconnection was described graphically with the Ptolemy user interface, as illustrated in Fig. 1. One CFSM is devoted to input conversion from level to pulse, three CFSMs derive timing events from the base time reference, four CFSMs filter and normalize the data, one CFSM converts potentiometer readings to fuel level (taking into account the shape of the tank), and the remaining four CFSMs perform the PWM conversion. Scheduling was performed by hand, due to the simplicity of the system, resulting in an assignment of two priority levels to CFSMs. The highest level was assigned to the CFSMs that had to handle input events, and corresponds, in practical terms, to interrupt-driven I/O without interrupt nesting.
We first tried to implement everything in software, then moved part of the components to a hardware implementation in order to satisfy the timing constraints. The PWM converters were an obvious choice for hardware implementation, because their low jitter constraints made a software implementation on a 68HC11 infeasible. Fig. 2 shows the priority level of the star currently executing on the processor as a function of time$^1$. The processor utilization is still fairly high, especially considering that the simulated car speed was about 50 Km/h. Hence we also moved the timing event generators to hardware (a typical choice in embedded systems, in which timing functions are often performed by special-purpose timers that are part of standard micro-controllers). Fig. 3 shows the priority level in this second case. The user interface uses the standard constant specification mechanism provided by Ptolemy, and allows the designer to change all the architectural parameters without re-compilation. Currently supported parameters are:
- CPU type, clock speed, scheduler type for the whole system,
- implementation (hardware or software) and priority (used only for software stars) for each star or hierarchical star group (galaxy).
The performance of the simulator is very high, especially if there is no component which is active at every clock cycle, because in that case we can exploit the inactivity of the system. The dashboard example is ideal in this respect, because the highest frequency input events occur about once every 100 clock cycles (assuming a 1 MHz clock). The results for the dashboard simulation, using various types of architectural choices, are reported in Table I. In this case, we used estimated execution times for a Motorola 68HC11 micro-controller with 1 MHz clock speed, and a MIPS R3000, with 1 MHz and 10 MHz clock speed. The partitions shown in the column labeled “Part.” are respectively:
- SW: all modules are in software,
- HW/SW: the PWM drivers and timing generators are in hardware, and the rest is in software,
- HW: all modules are in hardware.
The column labeled “Graph.” shows when the graphical priority display (useful for debugging the software scheduler) is used. All CPU and User times are in seconds, and were obtained on a SPARCSTATION 10 with 16 Mbytes of RAM, simulating 20 million clock cycles (except for the 10 MHz MIPS, for which 200 million cycles were simulated). The user time required to restart a simulation when the partition or the target processor are changed is about 1-2 seconds.
The execution time for the all-software partition on the 68HC11 is very high because the processor does not meet the timing constraints, and hence the simulation is interrupted very often to log the information about missed deadlines on a file.
The simulation performance (without graphics output, which significantly slows down the simulation) is around 1 million clock cycles per second. This speed, achievable thanks to the extremely low overhead imposed by our cycle counting technique, is sufficient in many cases to run simulations almost at the same speed as the real target system (virtual prototyping).
$^1$ A value of 0 means that the processor is idle, 1 is the highest level, each vertical bar represents a context switch.
Fig. 1. The dashboard controller netlist
Fig. 2. Processor utilization analysis with few hardware components
Fig. 3. Processor utilization analysis with more hardware components
TABLE I
Simulation speed for various types of system partitions
<table>
<thead>
<tr>
<th>Target Proc.</th>
<th>MHz</th>
<th>Part.</th>
<th>Graph.</th>
<th>CPU Time (s)</th>
<th>User Time (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>HC11</td>
<td>1</td>
<td>SW</td>
<td>No</td>
<td>> 1000</td>
<td>> 1000</td>
</tr>
<tr>
<td>HC11</td>
<td>1</td>
<td>HW/SW</td>
<td>No</td>
<td>30</td>
<td>38</td>
</tr>
<tr>
<td>HC11</td>
<td>1</td>
<td>HW</td>
<td>No</td>
<td>25</td>
<td>27</td>
</tr>
<tr>
<td>MIPS</td>
<td>1</td>
<td>SW</td>
<td>No</td>
<td>161</td>
<td>162</td>
</tr>
<tr>
<td>MIPS</td>
<td>1</td>
<td>HW/SW</td>
<td>No</td>
<td>23</td>
<td>25</td>
</tr>
<tr>
<td>MIPS</td>
<td>1</td>
<td>HW</td>
<td>No</td>
<td>25</td>
<td>26</td>
</tr>
<tr>
<td>MIPS</td>
<td>10</td>
<td>HW/SW</td>
<td>No</td>
<td>23</td>
<td>23</td>
</tr>
<tr>
<td>HC11</td>
<td>1</td>
<td>HW/SW</td>
<td>Yes</td>
<td>202</td>
<td>205</td>
</tr>
<tr>
<td>MIPS</td>
<td>1</td>
<td>HW/SW</td>
<td>Yes</td>
<td>258</td>
<td>270</td>
</tr>
</tbody>
</table>
IV. Conclusions and future work
In this paper we have shown that fast co-simulation can be done at the early stages of a design, for partition evaluation and functional verification purposes. The methodology relies on the use of constrained software synthesis, that permits easy run time estimation for a target processor, and of a powerful co-simulation environment built in the Ptolemy system.
We noticed, by profiling the simulator code, that over 90% of the time (when the graphic output is not used) is spent executing the DE scheduler code. This means that a faster simulator could be obtained by re-writing the Ptolemy DE scheduler to take into account the required behavior of CFSM stars, eliminating the overhead introduced by the re-firing method. On the other hand, this option may not be desirable for reasons of compatibility, both with future versions of Ptolemy, and with other simulation domains within Ptolemy.
In the future, we would like to allow the designer to create more than one software partition, thus simulating multiprocessor environments, and to specify hand-estimated execution times for software modules that were not synthesized using POLIS (e.g., data-intensive modules designed using Ptolemy).
References
VASP: Virtualization assisted Security Monitor for Cross-Platform Protection
Min Zhu, Miao Yu, Mingyuan Xia, Bingyu Li, Peijie Yu, Shang Gao, Zhengwei Qi, Liang Liu, Ying Chen, Haibing Guan
School of Software and School of Electronic Information and Electrical Engineering,
Shanghai Key Laboratory of Scalable Computing and Systems,
Shanghai Jiao Tong University
IBM Research China
{ carit, superymk, kenmark, justasmallfish, yupiwang, chillygs, qizhwei, hbguan } @ sjtu.edu.cn
{ liuliang, yingch } @cn.ibm.com
ABSTRACT
Operating systems manage and control system resources, and their size and complexity make strong security protection essential. However, security applications often cannot provide adequate protection because they run in an untrusted execution environment, and existing security strategies do not offer universal cross-platform protection. This paper presents VASP, a hypervisor-based monitor that provides a trusted execution environment from which to observe various malicious behaviors in the operating system. This is achieved by taking advantage of x86 hardware virtualization and a self-transparency technique, providing unified security protection to operating systems such as Linux and Windows, which run without any modification. Our design establishes a security monitor that resides completely outside of the target OS environment with negligible overhead. According to our security analysis and performance experiments, this approach can effectively protect applications and the kernel at the cost of only 0.9% average overhead on Windows XP and 2.6% average overhead on Linux.
Categories and Subject Descriptors
D.4.6 [Operating Systems]: Security and Protection
General Terms
Design, Virtualization technology, Software Protection
Keywords
cross-platform, hypervisor, security, hardware virtualization
1. INTRODUCTION
Virtualization was first applied to the operating system (OS) as IBM System/370 Extended Architecture in 1970 \cite{9}.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
\textit{SAC’11} March 21-25, 2011, TaiChung, Taiwan.
Copyright 2011 ACM 978-1-4503-0113-8/11/03 ...$10.00.
VASP makes use of few hardware resources, only processor and memory, and its impact is slight relative to traditional full virtualization. Moreover, there is no need to modify the original OS kernel.
To the best of our knowledge, this paper is the first to address universal security protection across operating systems of different architectures based on hardware virtualization. As we will demonstrate, instead of relying on the operating system's own resources, the virtual machine monitor operates safely and effectively with the help of hardware virtualization.
In this paper, we make the following contributions.
- We establish a lightweight security monitor with minimal temporal and spatial overhead to protect the operating system, as our experimental results show, and the hypervisor is designed without any modification to the protected OS.
- We construct a universal cross-platform security architecture. Both Windows and Linux operating systems can be protected under our security platform.
- The architecture is easily extensible: our approach can be adapted to many different security requirements with few changes and corresponding extensions.
The rest of the paper is organized as follows. Section 2 presents related work in this research field. Section 3 illustrates the design and implementation of our security monitor. Section 4 evaluates performance on two different operating systems, Windows XP and Linux. Finally, we conclude the paper in Section 5.
2. RELATED WORK
Hardware virtualization has been applied to operating systems both commercially and in research for several years. Intel VT and AMD SVM make the hardware suitable for virtualization and have accelerated its adoption. VMware [12] [15] and KVM [7] both take advantage of hardware virtualization, allowing their virtual machines to run more quickly and smoothly on a single host. However, VMware still uses full virtualization to virtualize most hardware resources, such as networks, I/O devices, and hard disks, with the exception of CPU and memory. KVM is a Linux built-in hypervisor that has been part of the kernel tree since version 2.6.20. When the Linux kernel loads the KVM module, the whole kernel itself plays the role of hypervisor. Though KVM itself is small, the resulting hypervisor is large because it encompasses the entire kernel. Although this approach is effective for protecting system security, it only addresses the security of I/O access.
Some previous research, such as Proxos [13] and iKernel [14], provides a highly secure execution environment and enables secure code to run in separate VMs. Proxos allows applications to configure their trust in the OS by partitioning the system call interface into trusted and untrusted components [3], but needs code modification to both application and kernel. For the same purpose, iKernel isolates potentially malicious device drivers in separate VMs to make the operating system more secure and reliable. Focused on the design goal of hypervisor-based monitoring of malicious behaviors through a small set of sensitive instructions, our approach also offers high performance and little impact on the original OS kernel, thanks to its optimized design.
3. DESIGN AND IMPLEMENTATION
VASP implements hypervisor-based monitoring for popular operating systems, such as Windows XP and Linux, addressing the challenges of requiring no OS modification, providing a universal protection strategy, maintaining self-transparency, and supporting multi-core processors. Our implementation is tailored from the Blue Pill project, inheriting its lightweight, reliable, easily extensible, and configurable design. In this section, we first describe the architecture of the VASP hypervisor, then introduce the implementation details, and finally present two case studies.
3.1 VASP Architecture
Intercepting the malicious behavior with configurable function is the basic and key point to protect and monitor the whole system. VASP leverages this mechanism to intercept the sensitive behaviors, like CR registers change, debugging interrupts, special memory access, etc., which common attackers use to harm the operating system.
The VASP architecture is divided into three layers: the hardware platform layer, the VASP hypervisor layer, and the guest operating system layer.
**Hardware Platform Layer.** This layer is the platform on which the hypervisor is established. The platform needs to support hardware virtualization technology, such as Intel VT-x or AMD SVM, although these differ in hardware design. The differences, involving virtualization-related instructions, platform checks, VMCS configuration, etc., can be masked by platform-specific code that provides uniform interfaces to the hypervisor.
**VASP Hypervisor Layer.** This layer is the core of the whole platform. A set of interfaces is exported to future developers: a memory management interface, a trap register interface, and a debugging interface. The memory management interface implements VASP's own memory management. The trap register interface supports extensions to VASP and provides the configuration of intercepting behavior. The debugging interface is used for dynamic analysis when developing the hypervisor. On top of these three interfaces, we implement two basic services for the hypervisor to protect the guest: a default memory management service and an execution control service. The memory management service realizes the memory self-transparency strategy for the self-protection of the hypervisor. The execution control service contains all the trap event handlers, and handles each event according to the indicated #VMEXIT reason.
**Guest Operating System Layer.** This layer contains the protected operating system, Windows XP or Linux. With the hypervisor established, the original operating system executes in a guest virtual machine monitored by the hypervisor. VASP currently supports only one guest machine at a time.
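The execution control service's dispatch on #VMEXIT reasons can be modeled as a handler table. The following sketch is purely illustrative (the handler names and reason codes are ours, not VASP's actual code):

```python
# Toy model of an execution control service: a table mapping #VMEXIT
# reasons to trap handlers, consulted before resuming the guest.
def handle_cpuid(ctx):
    ctx["handled"].append("cpuid")       # e.g. virtualize CPUID results

def handle_cr_write(ctx):
    ctx["handled"].append("cr_write")    # e.g. vet CR register changes

EXIT_HANDLERS = {
    "CPUID": handle_cpuid,
    "CR_WRITE": handle_cr_write,
}

def on_vmexit(reason, ctx):
    """Dispatch on the #VMEXIT reason; a real monitor would then restore
    guest registers and resume the guest."""
    handler = EXIT_HANDLERS.get(reason)
    if handler is None:
        raise RuntimeError("unhandled #VMEXIT reason: " + reason)
    handler(ctx)

ctx = {"handled": []}
on_vmexit("CPUID", ctx)
on_vmexit("CR_WRITE", ctx)
```

The table makes the monitor easy to extend: registering a new trap handler is a single entry, matching the configurable interception the paper describes.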
3.2 Implementation
Next, we introduce the detailed implementation of VASP, including its control flow, memory self-transparency, and multi-core support.
3.2.1 VASP Control Flow
According to the VASP architecture described in the last subsection, the VASP hypervisor acts as a virtual machine monitor between the guest operating system and the hardware. Handling an intercepted sensitive system behavior involves the guest application or kernel, the hardware, and the VASP hypervisor. The trap occurs in step 1, and control is transferred to hardware with the generation of a #VMEXIT event, for example on a cpuid instruction. In step 2, the hardware stores the guest execution state in the virtual machine control structure (VMCS) and locates the trap handler through the VMCS, which is configured when the VASP hypervisor is initialized. In step 3, the VASP hypervisor saves the contents of all application or kernel registers at the trap point, then selects the appropriate trap handler routine for the intercepted behavior. After trap handling, the hypervisor passes control back to hardware by restoring the saved registers and executing the VMRESUME instruction. In the last step, the return instruction pointer (IP) and stack pointer (SP) registers are set by hardware according to the state stored in step 2, and hardware finally transfers control back to the intercepted point.
#### 3.2.2 Memory Self-Transparency
Our approach takes the form of a driver or module that possesses the highest privilege, which lets it execute the special virtualization instructions; accordingly, it relies on the kernel's memory management APIs to build the hypervisor's memory space. As a result, the hypervisor can be accessed by the kernel and is vulnerable without a self-protection strategy. To improve the security of VASP itself, a memory self-transparency strategy is essential to conceal the hypervisor. Figure 2 depicts this strategy.
After the hypervisor is built, it allocates another page of memory and clones the kernel page table into this space; the clone serves as the page table whenever execution is inside the hypervisor. Next, a spare region of memory is set aside as the hypervisor's pseudo memory, and the hypervisor's physical address in the original kernel page table is modified to point to the physical address of this pseudo memory. As a result, applications and the kernel in the guest operating system can reach only the pseudo memory when they use the hypervisor's virtual address, whereas the hypervisor, executing in root mode, accesses its real physical memory through the same virtual address.
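The remapping can be shown with a toy model in which a flat array stands in for each page table; the frame numbers and table size are made up for illustration. The guest's table ends up pointing the hypervisor's virtual page at the pseudo frame, while the cloned root-mode table keeps the real frame.

```c
/* Toy model of memory self-transparency: two page tables map the same
   virtual page differently.  NPAGES and frame numbers are illustrative. */
#define NPAGES 16

static unsigned guest_pt[NPAGES];  /* kernel page table seen by the guest */
static unsigned hv_pt[NPAGES];     /* cloned page table used in root mode */

void conceal_hypervisor(unsigned hv_vpage, unsigned real_frame, unsigned pseudo_frame)
{
    unsigned i;
    for (i = 0; i < NPAGES; i++)           /* clone the kernel page table    */
        hv_pt[i] = guest_pt[i];
    hv_pt[hv_vpage]    = real_frame;       /* hypervisor sees its real code  */
    guest_pt[hv_vpage] = pseudo_frame;     /* guest sees the pseudo memory   */
}

/* resolve a virtual page through a given table */
unsigned translate(const unsigned *pt, unsigned vpage) { return pt[vpage]; }
```

The same virtual page number thus resolves to two different frames depending on which table (guest or root-mode) is active, which is exactly the concealment property the text describes.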
#### 3.2.3 Multi-core Implementation
Multi-core processors have become dominant in the market, so enabling the hypervisor to monitor a multi-core system is both significant and necessary. We use the affinity APIs, such as KeSetSystemAffinityThread(), to establish the hypervisor on each processor core, and we unify the memory space of the per-core hypervisors so that they coordinate with each other when monitoring behaviors. Multi-core virtualization has not yet been implemented for Linux, but it is an engineering task similar to the Windows implementation; as a result, we use only one processor core in the performance tests.
### 3.3 Case Study
To verify the efficiency and usability of the VASP security platform, we present two case studies of monitoring a system with the help of the VASP hypervisor: one monitors I/O resources, and the other demonstrates anti-debugging protection.
#### 3.3.1 I/O Monitoring
VASP also supports monitoring the guest machine's access to physical resources such as I/O-related instructions. Take password input protection as an example: malicious applications often modify or steal passwords by hooking important I/O-related kernel APIs while the user is typing. Our protection goal is to intercept keyboard I/O while a password is being entered, store the keystrokes in a protected memory space, and hand pseudo data to the I/O buffer instead. Figure 3 illustrates the design and usage model of VASP for I/O monitoring. The memory self-transparency strategy is again used to conceal the code and data segments of the hypervisor. By reconfiguring the virtual machine control structure, the hypervisor intercepts each keystroke before it is delivered to the guest OS. While handling the trap, it checks whether the data is password input for the protected application and, if so, copies it to the safe memory space monitored by the hypervisor. VASP then fills the original buffer with pseudo data that deceives the malicious software. Finally, the hypervisor transfers control back to the guest with a #VMENTRY event. When the protected application later requests the password, the hypervisor intercepts again and supplies the real password to the application. The details of this case are not the key focus of this paper.
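The password path above can be mocked up in a few lines: each intercepted keystroke goes into a buffer only the hypervisor can reach, and a pseudo byte is written to the guest-visible buffer. The buffer sizes and the '*' fill byte are illustrative assumptions, not details from the paper.

```c
#include <string.h>

#define BUF 64

static char protected_buf[BUF]; static size_t protected_len;  /* hypervisor-only */
static char guest_buf[BUF];     static size_t guest_len;      /* guest-visible   */

/* called on every intercepted keyboard trap while a password is typed */
void on_keystroke(char real_byte)
{
    if (protected_len < BUF - 1) protected_buf[protected_len++] = real_byte;
    if (guest_len < BUF - 1)     guest_buf[guest_len++] = '*'; /* pseudo data */
}

/* second interception: the protected application asks for the password,
   and the hypervisor hands back the real data */
const char *release_password(void)
{
    protected_buf[protected_len] = '\0';
    return protected_buf;
}
```

Any key logger reading the guest buffer sees only the pseudo bytes, while the protected application still receives the real password on the second interception.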
#### 3.3.2 Anti-debugging
The second case study is anti-debugging protection, which defends against malicious behaviors that rely on debugging techniques. Debugging normally facilitates dynamic analysis of a running application during software development; however, like most tools, it can also be adopted by attackers. Our protection goal is to intercept debugging-related instructions such as INT1 and INT3, verify whether the behavior is malicious, and, if so, make the debugging attempt invalid.
Figure 4 shows the design and implementation of VASP for anti-debugging. Besides the essential memory self-transparency module, we add an anti-debugging module. The hypervisor first intercepts sensitive behaviors, including INT3 and INT1 exceptions, CPUID instructions, and CR3 switches, which are characteristic of debugging. It then traces the debugging routines in the system and verifies whether debugging is being used by an unauthorized application. If so, the hypervisor stops the debugging behavior and returns execution to the normal flow. The detailed implementation was published in our previous work, which demonstrates this anti-debugging protection on the Windows platform only.
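The verification step can be sketched as a decision over intercepted debug traps: if the tracer is not on a list of authorized debuggers, the exception is swallowed and the guest resumes normally. The authorized-pid list is an invented stand-in for whatever policy VASP actually applies.

```c
/* Toy anti-debugging decision on an intercepted INT1/INT3 trap.
   Process ids and the authorization list are illustrative. */
enum trap    { TRAP_INT1, TRAP_INT3 };
enum verdict { RESUME_NORMALLY, DELIVER_TO_DEBUGGER };

static const int authorized[] = { 42 };   /* pid of a legitimate debugger */

enum verdict on_debug_trap(enum trap t, int tracer_pid)
{
    unsigned i;
    (void)t;
    for (i = 0; i < sizeof authorized / sizeof authorized[0]; i++)
        if (authorized[i] == tracer_pid)
            return DELIVER_TO_DEBUGGER;
    return RESUME_NORMALLY;  /* swallow the exception: debugging attempt invalid */
}
```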
## 4. EVALUATION
In this section we present a thorough performance evaluation and analysis of VASP. We begin by benchmarking VASP with macro benchmarks that represent real-life workloads, and then measure the overhead of virtualization with micro benchmarks that expose fine-grained effects. Most of our experiments are executed on both Windows XP and Linux with similar hardware environments.
### 4.1 Experiment Setup
We use both microbenchmarks and application benchmarks on desktop hardware. Our setup consists of two configurations: a dual-core Intel Core2 Duo E6320 with 2GB of DDR2 memory for the Windows XP platform, and an Intel Core2 Duo E8400 (using a single core) with 1GB of DDR2 memory for the Fedora 12 (Linux 2.6.31) platform. Although the hardware differs between the two operating systems, our approach shows the same trend of performance overhead. The difference in memory size stems from the SPEC requirement that each core needs at least 1GB.
### 4.2 Application Benchmark
We performed a set of experiments to evaluate the overhead each operating system incurs when running on the VASP platform. The SPEC CPU suite contains a series of long-running, computationally intensive applications intended to measure the performance of a system's processor, memory system, and compiler quality, but it performs little I/O.
Figure 5 presents the results of benchmarking VASP while monitoring Windows XP, using the SPEC CINT2006 suite. The black bar shows native performance before loading the VASP hypervisor, which serves as the baseline; the gray bar shows the real-life workload result after loading VASP, relative to native. Most results demonstrate that the overhead of VASP protection is negligible, about 0.9% on average, because VASP intervenes in only a few operating system behaviors, namely the configured sensitive instructions and events, and because each trap is handled very quickly in the hypervisor. Only the bzip2 test shows lower overhead than the native system; we currently cannot explain this phenomenon.
Figure 6 likewise shows the performance results when VASP monitors the Fedora Linux system with the SPEC CINT2006 suite; the black and gray bars have the same meaning as in Figure 5. The results show that running the hypervisor introduces only a little overhead, about 2.6% on average, even on Linux. The higher overhead compared with Windows XP is due to the different designs of the two systems, such as process scheduling and memory management. The total test time, however, is lower than on Windows because the processor is more powerful.
### 4.3 Microbenchmark
To measure the areas of overhead in Windows and Linux under VASP protection more precisely, we performed a number of experiments targeting particular subsystems. We count the CPU cycles consumed by the CPUID instruction to measure interception overhead on Windows XP. For Linux, we use McVoy's lmbench program [8], version 3.0, which addresses many testing issues. We present tables for both native Linux (Native) and Linux with VASP loaded (VASP).
Table 1 shows the results of the microbenchmarks on Windows XP. Executing an intercepted instruction on the guest machine is slow: after loading VASP, nearly 11 times more cycles are needed to handle the interception and its associated events. The reason is that trapping into the hypervisor adds CPU cycles for accessing the VMCS region and for invoking the proper callback function.
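A figure like the 11x cycle count is typically obtained by timing the intercepted instruction with a cycle counter and keeping the minimum over many runs to filter out noise. The sketch below takes the counter and the timed operation as callbacks so it stays portable and deterministic; on real x86 hardware the counter callback would wrap RDTSC around the intercepted CPUID. The fake counter and its 250-cycle cost are test fixtures, not measured values.

```c
/* Minimum-of-N cycle measurement harness; the counter is injected so
   the sketch can be tested without real hardware timers. */
typedef unsigned long long (*cycle_fn)(void);

unsigned long long min_cycles(cycle_fn counter, void (*op)(void), int runs)
{
    unsigned long long best = (unsigned long long)-1;
    int i;
    for (i = 0; i < runs; i++) {
        unsigned long long t0 = counter();
        op();                          /* the instruction being measured */
        unsigned long long t1 = counter();
        if (t1 - t0 < best)
            best = t1 - t0;            /* keep the least noisy sample */
    }
    return best;
}

/* deterministic stand-ins used for testing the harness */
static unsigned long long fake_clock;
static unsigned long long fake_counter(void) { return fake_clock; }
static void fake_cpuid(void) { fake_clock += 250; } /* pretend cost: 250 cycles */
```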
In the process microbenchmarks (Table 2), VASP exhibits slower fork, exec, and sh performance than native Linux, while the other operations are very close. This is expected, since these operations require a large number of page table updates, which trigger traps on CR3 switches. Table 3 shows context switch times between different numbers of processes with different working set sizes; VASP incurs almost twice the cost of native Linux in each test, again because every context switch is accompanied by a CR3 switch that the hypervisor intercepts. The mmmap l...
## 5. FUTURE WORK AND CONCLUSION
We have presented the architecture and design of VASP, a hardware-virtualization-based hypervisor that runs transparently under the guest machine with the highest privilege and supports cross-platform protection of the guest OS without any modification to the existing OS.
### 5.1 Future Work
We believe that VASP is useful and efficient for monitoring and protecting a target system. After the initial release we plan a number of extensions and improvements, including extending the platform to heterogeneous multi-core systems with hardware virtualization support.
### 5.2 Conclusion
VASP provides a platform based on hardware virtualization technology for cross-platform security protection of commodity operating systems; it is lightweight, transparent, and extensible. VASP protection does not affect the file system, and the additional latency caused by hypervisor interception is very small relative to the original cost.
Table 3: lmbench: Context switching times in µs.
<table>
<thead>
<tr>
<th>Config</th>
<th>2p/0K</th>
<th>2p/16K</th>
<th>2p/64K</th>
<th>8p/16K</th>
<th>8p/64K</th>
<th>16p/16K</th>
<th>16p/64K</th>
</tr>
</thead>
<tbody>
<tr>
<td>Native</td>
<td>0.72</td>
<td>1.10</td>
<td>0.85</td>
<td>1.53</td>
<td>1.06</td>
<td>1.52</td>
<td>1.12</td>
</tr>
<tr>
<td>VASP</td>
<td>1.91</td>
<td>2.36</td>
<td>2.17</td>
<td>2.98</td>
<td>2.53</td>
<td>2.95</td>
<td>2.46</td>
</tr>
</tbody>
</table>
Table 4: lmbench: File & VM system latencies in µs.
<table>
<thead>
<tr>
<th>Config</th>
<th>0K File</th>
<th>10K File</th>
<th>Mmap</th>
<th>Prot</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Native</td>
<td>11.4</td>
<td>18.3</td>
<td>12.7</td>
<td>298.0</td>
<td>0.354</td>
</tr>
<tr>
<td>VASP</td>
<td>11.5</td>
<td>17.83</td>
<td>18.4</td>
<td>303.0</td>
<td>0.357</td>
</tr>
</tbody>
</table>
## 6. ACKNOWLEDGEMENT
This work is supported by National Natural Science Foundation of China (Grant No.60773093, 60873209, 60970107), the Key Program for Basic Research of Shanghai (Grant No.09JC1407900, 09510701600, 10511500100), and IBM SUR Funding and IBM Research-China JP Funding.
## 7. REFERENCES
Towards a Self-Healing approach to sustain Web Services Reliability.
Mohamed Hedi Karray, Chirine Ghedira, Zakaria Maamar
HAL Id: hal-00602863
https://hal.archives-ouvertes.fr/hal-00602863
Submitted on 23 Jun 2011
Towards a Self-Healing Approach to Sustain Web Services Reliability
Mohamed-Hedi Karray
Femto-st Institute, Besançon, France
hedi.karray@femto-st.fr
Chirine Ghedira
Lyon 1 University, Lyon, France
chirine.ghedira@liris.cnrs.fr
Zakaria Maamar
Zayed University, Dubai, U.A.E
zakaria.maamar@zu.ac.ae
Abstract—Web service technology expands the role of the Web from a simple data carrier to a service provider. To sustain this role, issues such as reliability that continue to hinder the widespread use of Web services need to be addressed. Autonomic computing offers solutions to the specific issue of reliability: these solutions let Web services self-heal in response to errors that are detected and then fixed. Self-healing is the capacity of a system to restore itself to a normal state without human intervention. In this paper, we design and implement a self-healing approach to achieve Web service reliability. The approach has two steps: (1) model a Web service using two behaviors, known as operational and control; and (2) monitor the execution of a Web service using a control interface that sits between these two behaviors. This control interface is implemented in compliance with the principles of aspect-oriented programming and case-based reasoning.
Keywords—Web service; reliability; self-healing; case-based reasoning; AOP.
I. INTRODUCTION
We all agree that the Web is dynamic by nature; new services are offered, some services cease to exist without prior notice, new business opportunities arise, etc. This nature puts a lot of pressure on those who are in charge of developing business applications that should be loosely coupled and spread over enterprises’ organizational boundaries. Making Web services the technology of choice upon which these applications could be built would require looking into some issues, with emphasis on reliability in this paper, that still hinder the acceptance of Web services. Reliability is the ability to perform independently of the current execution circumstances, which permits to guarantee business continuity. To address Web services reliability, several works are reported in the literature [1]. Recently, self-healing seems leading these solutions [2,3]. In information technology, “self-healing describes any device or system that has the ability to perceive that it is not operating correctly and, without human intervention, make the necessary adjustments to restore itself to normal operation”1.
In [4], we started examining the reliability of Web services through the use of two behaviors, which we denoted control and operational. Both behaviors specify the functioning of a Web service and comply with the separation-of-concerns principle, which simplifies the development and maintenance of Web services. On the one hand, the operational behavior illustrates the business logic that underpins the functioning of a Web service, i.e., how the functionality of a Web service is achieved. On the other hand, the control behavior guides, in a controlled way, the progress of executing the operational behavior (i.e., business logic) by stating the actions to take and the constraints to put on this progress. In this paper, we capitalize on both behaviors to let Web services self-heal and hence achieve the reliability of the business applications they implement. Injecting self-healing mechanisms into Web services should mainly help in discovering, diagnosing, and reacting to disruptions that affect Web services operation [5]. We discuss how our self-healing Web services are developed and how they oversee the progress of both behaviors towards completion.
Enhancing a system with self-healing capacities can rely on internal or external mechanisms [6]. The former trap errors when they happen (including exceptions; we consider an exception an exceptional error), as modern programming languages (e.g., Java exceptions, assertion checking) and run-time libraries (e.g., timeouts for RPC) do. The latter monitor a system using "outsider" components (e.g., monitoring, recovery) that determine when the system's behavior is acceptable and whether self-healing should be initiated.
Given the black-box nature of a Web service, its implementation details are known only to those who took part in its development; therefore, these persons would be in charge of developing the self-healing functionalities as well [7]. Since the external components' features (monitoring, recovery, etc.) are more attractive and effective [6,8], we adopt a hybrid approach that combines the benefits of both internal and external mechanisms: first, it separates the external self-healing features (monitoring, recovery, etc.) from the execution of the Web service (which is not the case with, for instance, Java exceptions); second, it encapsulates these features into modules that run internally and in parallel with the execution of the Web service's operations.
In this paper, we propose a self-healing approach built upon a set of dedicated modules that support the reliability of Web services. These modules are part of a "control interface" that ensures the monitoring of a Web service's behavior, the catching of errors, and the recovery from these errors. The design of this control interface relies on Aspect-Oriented Programming (AOP) and Case-Based Reasoning (CBR) in order to benefit from dynamic weaving and from previously adapted solutions to similar recovery cases.
1 (www.bitpipe.com/tlist/Autonomic-Computing.html)
Section 2 presents some related work on Web services self-healing. Section 3 suggests a motivating example to highlight the run-time errors problem that hinder Web services execution. Section 4 presents our approach to set up the control interface. Section 5 discusses our implementation. A brief discussion about the approach is presented in section 6. Finally, Section 7 concludes the paper and sets guidelines for future work.
II. RELATED WORK
To make Web services the technology of choice for developing critical applications, it is important to enhance them with mechanisms that guarantee continuity of operations despite failure. Self-healing is among these mechanisms and falls under the research theme of reliability as reported in [12]. We identify two categories of work on the topic: works based on models, and works based on intelligence and technology.
In the first category, Dabrowski et al. use architectural models to characterize how elements such as architecture, topology, consistency-maintenance mechanism, and failure-recovery strategy contribute to self-healing during communication failures [14]. For this specific failure, using notification as the consistency-maintenance mechanism, the authors divide self-healing properties into recovery techniques and topology. In [3], Ben Halima et al. propose a self-healing framework capable of managing distributed interactive applications based on Web services. The framework focuses on QoS monitoring and uses models for QoS analysis; it monitors the communication level by intercepting exchanged SOAP messages and extending them with QoS parameter values. Ghosh et al. classify self-healing systems based on similarities and relationships between the approaches, mechanisms, architectures, and technologies applied in these systems; they indicate that such systems use models, whether external or internal, to monitor behaviors so that the systems can adapt themselves to the run-time environment [13].
In the second category, Gurguis et al. present an approach to achieve autonomic computing with Web services, dividing them into functional Web services that provide computing functionalities over the Internet and autonomic Web services that encapsulate attributes such as self-configuration, self-healing, and self-optimization [2]. Montani and Anglano present a CBR approach for providing large-scale, distributed software systems with self-healing capabilities [18]; the approach does not use structured knowledge such as models of the system behavior, which eases its applicability to large-scale, complex software systems. Friese et al. present the design and implementation of a Robust Execution Layer that acts as a transparent, configurable add-on to any BPEL4WS execution engine to support self-healing business processes; continuity of process execution is achieved by replacing services upon communication failures [16]. Baresi and Guinea identify and classify the major faults in service-oriented systems and sketch solutions that enable recovery strategies using pre- and post-conditions on the required and provided operations, based on the service and process descriptions in BPEL [15]. Our approach operates in a different context: given the black-box nature of Web services, knowing the service behaviors (i.e., control and operational) is essential.
The EU SHADOWS project can be considered hybrid work spanning both categories. It concentrates on self-healing of complex systems using a model-based approach [19]. The project introduces pioneering technologies, such as automatic concurrent debugging and data-race detection, to enable systematic self-healing of classes of failures, together with an approach for integrating several self-healing technologies into a common framework. A game-based model-checking technique is used for verification and adaptation: the system acts as a player in a hostile world, and if anything goes wrong it adapts its behavior to accomplish its task in a different way; the system does not try to recover from the encountered problem. In our approach, when the service catches an error it tries to recover from it so as not to change its behavior.
III. MOTIVATING SCENARIO
We choose WeatherWS, whose functionality is to return a 5-day weather-forecast report for a given city. In [4], we argue why this Web service is not trivial and can be used to illustrate different types of errors.
We assume that WeatherWS requires two inputs: a city name and a date. At run-time, knowing that WeatherWS searches for the city's name in a dedicated database, one of the following cases happens: (1) the access to the database fails; (2) the requested city does not exist in the database; (3) the requested city exists in the database, and WeatherWS submits a report to the user.
The execution of WeatherWS can be reflected using different states. We refer to these states as business states and use them to form the operational behavior of a Web service; "city located", "weather collected", and "report delivered" are examples in the operational behavior of WeatherWS. The service passes from one state to another subject to, first, completing the operations included in each state and, second, satisfying the transitions that connect states. To follow the execution progress of a Web service, we use the control behavior along with a specific set of states drawn from the field of transactional Web services.
To achieve self-healing Web services, we look into the interactions occurring between these two behaviors. Different types of failure could lead to self-healing such as bugs in the business logic and resource removal.
IV. OUR PROPOSED SELF-HEALING APPROACH
Our proposed self-healing approach for Web services takes place over two steps: modeling the behaviors of a Web service, and setting up the self-healing mechanisms in terms of operation safety and run-time error recovery. These mechanisms build upon the closed loop of Garlan et al. from the control system paradigm [6], which consists of monitoring, interpretation, resolution, and adaptation.
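One cycle of that closed loop can be sketched as functions feeding into each other: monitoring yields an observation, interpretation turns it into a diagnosis, resolution picks a repair plan, and adaptation would execute it. The observation, diagnosis, and action codes below are illustrative, not taken from the paper.

```c
/* Toy Garlan-style closed loop: monitor -> interpret -> resolve -> adapt. */
enum obs       { OBS_OK, OBS_FAIL_MSG };
enum diagnosis { DIAG_NONE, DIAG_DB_ERROR };
enum action    { ACT_NONE, ACT_RETRY_WITH_MIRROR };

static enum diagnosis interpret(enum obs o)
{
    return o == OBS_FAIL_MSG ? DIAG_DB_ERROR : DIAG_NONE;
}

static enum action resolve(enum diagnosis d)
{
    return d == DIAG_DB_ERROR ? ACT_RETRY_WITH_MIRROR : ACT_NONE;
}

/* monitoring hands its observation in; the chosen repair comes out,
   and the adaptation step would then execute that plan */
enum action closed_loop_cycle(enum obs o)
{
    return resolve(interpret(o));
}
```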
A. Behavior modeling
A behavior illustrates the actions a Web service takes in response to event occurrences and condition satisfaction. In [4], the operational behavior shows the business logic that underlies the functioning of a Web service, and the control behavior guides the execution progress of this business logic (i.e., operational behavior). As briefly reported in Section 2, the control behavior uses a set of states from the literature on transactional Web services [9]: "activated", "not-activated", "done", "aborted", "suspended", and "compensated". State chart diagrams are used to represent both behaviors.
In addition to the control and operational behaviors, Maamar et al. developed mechanisms that support the interactions between them. These mechanisms are used to convey details from one behavior to another and vice versa. For example, a message from the control to the operational behaviors carries a temporal event that permits to trigger the execution of a Web service.
The use of state chart diagrams to model the control and operational behaviors shows what should happen at run-time but does not allow following the execution progress at the operation level. Questions such as which operation was recently executed, what dependency exists between operations, and which operation failed cannot be answered with state chart diagrams alone. Any self-healing exercise requires clear access to the operations that were executed and those that encountered problems [17]. To overcome this limitation, we use activity charts to model the operations of a Web service.
After modeling both behaviors, the next step consists of mapping some states in the control behavior of a Web service onto other appropriate states in the operational behavior as discussed in [4].
The control and operational behaviors of a Web service are based on a finite set of sequences. In [4], these sequences are called paths and defined as follows: a path \( p_{i \rightarrow j} \) in a Web service behavior \( B \) is a finite sequence of states and transitions starting at state \( s_i \) and finishing at state \( s_j \), noted \( p_{i \rightarrow j} = \{ s_i \xrightarrow{t(l_{i+1})} s_{i+1} \xrightarrow{t(l_{i+2})} \cdots \xrightarrow{t(l_j)} s_j \} \), such that \( \forall k \in \{i, \ldots, j-1\}, (s_k, l_{k+1}) \in T \).
On the other hand, we define an execution scenario in a Web service as the association of a control state and a path in the operational behavior, along with an execution priority defined by the developer of the Web service. This priority identifies the recommended paths to execute in order to satisfy a user's needs; scenarios with the smallest value are considered the most adequate to meet a user's expectations.
A function \( \text{Next} \) was also defined in [4]; it specifies which control state the Web service must move to after reaching a final state along an operational path. We redefine this function to specify the control state to take following the execution of a scenario.
Scenarios and Next function are used to oversee the progress of the execution of a Web service. Our self-healing approach relies on the interactions that exist between behaviors and is built upon a control interface that drives these interactions.
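The path definition above translates directly into a small validity check: a sequence of states is a path only if every consecutive pair is covered by the behavior's transition relation \( T \). The state ids and transition set below are illustrative, not drawn from WeatherWS.

```c
/* Toy check that a sequence of states forms a valid path p_{i->j}:
   every consecutive pair must appear in the transition relation T. */
struct transition { int from, to; };

int is_path(const int *states, int n, const struct transition *T, int nt)
{
    int k, t, found;
    for (k = 0; k + 1 < n; k++) {
        found = 0;
        for (t = 0; t < nt; t++)
            if (T[t].from == states[k] && T[t].to == states[k + 1])
                found = 1;          /* this hop is covered by T */
        if (!found)
            return 0;               /* broken sequence: not a path */
    }
    return 1;
}
```

A scenario check would wrap this with the control state and the developer-assigned priority, picking the valid path with the smallest priority value.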
B. Control Interface
Like any other program, Web services may be subject to events that could affect their normal execution progress. Our self-healing approach is based on a control interface (see Fig.1) that contains the following modules: monitoring, interpretation, resolution, and adaptation. These modules support synchronization, verification, detection, and recovery.
In the control interface, the Mapping Module (MAM) is a repository of XML schemas and XML data resulting from the mapping between the control and operational behaviors. The MAM also contains additional elements such as matching paths, execution scenarios, and the results of the \( \text{Next} \) function. In addition, the MAM provides data regarding the expected behaviors during the execution of the other modules, in order to (i) instantiate the conversations between the two behaviors through the Conversation Management Module (CMM) or (ii) drive recovery through the Error Recovery Module (ERM) in case of error.
In the control interface, the CMM instantiates, manages, and checks the conversational messages that are exchanged between the two behaviors. The CMM collects the scenario execution priority based on the current state in the control behavior of the Web service and initiates conversations with each state in the operational path that is included in this scenario. In this work we adapt the conversational messages reported by Maamar et al. in [4], such as \( \text{Sync} \), \( \text{Success} \), \( \text{Fail} \), and \( \text{Ack} \).
Through the management and monitoring of the different conversational messages that are exchanged, the CMM catches errors that interrupt the normal progress of the execution of a Web service. These errors are usually detected through the \( \text{Fail} \) message.
The CMM keeps track of different elements that help a Web service self-heal. These elements include (1) a component that instantiates conversations, (2) a list including all messages types, (3) details of messages related to the conversation in progress such as message Id, message origin, message destination, etc., and (4) log of the previous conversations. It should be noticed that there is a watchdog that monitors the messages of each conversation and raises alerts for the benefit of the CMM when it catches a failure message.
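The message-driven detection described above can be sketched in plain C (all identifiers here are illustrative, not taken from the prototype, which is written in C#):

```c
#include <assert.h>

/* Conversational message types adapted from [4]; names are hypothetical. */
typedef enum { MSG_SYNC, MSG_SUCCESS, MSG_FAIL, MSG_ACK } MsgType;

typedef struct {
    MsgType type;
    int conversation_id;   /* which conversation this message belongs to */
} Message;

/* Watchdog: scans a conversation log and returns 1 if an alert must be
   raised for the benefit of the CMM (a Fail message was caught). */
int watchdog_check(const Message *log, int n) {
    for (int i = 0; i < n; i++)
        if (log[i].type == MSG_FAIL)
            return 1;
    return 0;
}
```

In this sketch the watchdog is a simple scan over the log; the actual module monitors each conversation as messages arrive.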
In the control interface, the Transition Management Module (TMM) comes into play before claiming the successful execution of a scenario. This claim depends on the operations to execute per state as well as the transitions that connect states. The TMM stores the intra-behavior transitions (i.e., transitions from one state to another in the same behavior) and all the information about the business operations (i.e., constraints, functioning description, implementation and execution). We define this information using pre- and post-conditions and constraints.
Aspects base. AOP is a paradigm that captures and modularizes crosscutting concerns of a software system into modules called aspects. Aspects can be integrated into a system using dynamic weaving [11]. An aspect contains code fragments (advices) and location descriptions (pointcuts) that identify where the code fragments should be plugged in. Our use of AOP is motivated by the dynamic weaving of aspects: an aspect can be enabled and disabled at run time. In our self-healing approach, we define a base that contains the different types of aspects that could characterize the errors arising during a Web service execution. These aspects are triggered by an aspect weaver located in the TMM when an error arises. We identify an aspect with a triple (name of the module including the aspect, set of advices, set of pointcuts).
Patterns base. This base contains execution patterns for business operations. Each business operation is associated with a set of execution patterns. A pattern is used to decompose an operation into segments according to a certain semantics and implementation constraints. We define three types of patterns: two at design time (normal patterns and error patterns) and one at run time.

Cases base. This base contains cases of errors along with their solutions. Whenever the ERM receives an alert, it creates a new solution by assembling an error pattern with an aspect and records this case in the base if it does not already exist. A case is characterized by the triple \(<Sy, Con, R>\), where \(Sy\) is the set of problem symptoms together with the error pattern, \(Con\) is the case context (i.e., which aspect in which business operation), and \(R\) is the treatment carried out, i.e., the resolution made.
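A minimal sketch of a case entry and its lookup, under the assumption that symptoms are matched by exact comparison (the paper does not detail the matching mechanism, and these identifiers are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A case <Sy, Con, R>: symptoms, context (aspect/operation), resolution. */
typedef struct {
    const char *symptoms;    /* Sy  */
    const char *context;     /* Con */
    const char *resolution;  /* R   */
} Case;

/* Return the recorded resolution for the given symptoms, or NULL if the
   case base holds no matching entry. */
const char *find_resolution(const Case *base, int n, const char *sy) {
    for (int i = 0; i < n; i++)
        if (strcmp(base[i].symptoms, sy) == 0)
            return base[i].resolution;
    return NULL;
}
```

A NULL result corresponds to the "case not yet treated" branch, where the ERM assembles a new solution from an error pattern and an aspect.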
When the ERM receives an alert, it retrieves the execution scenario from the MAM as well as the details of the fail message related to the control state; this indicates an execution problem that the CMM reports. First, the CMM synchronizes itself with the MAM and the TMM to retrieve information about the current scenario and the operations in the control state affected by this error. The ERM then consults its patterns base to compare the received log pattern with the normal and error patterns, so that the aspect related to this error can be identified. If the pattern is already in the base, the ERM consults its cases base to check whether a similar error has already been treated and solved. If so, the ERM sends the solution to the CMM and the TMM for deployment.
Otherwise, the ERM selects the aspect associated with the matching error pattern in the aspects base and sends the solution to the TMM, which applies it. If the log pattern does not appear in the error patterns list, the ERM adds this new pattern to the patterns base and alerts the Web service developer, asking for the new pattern to be assigned to one or more aspects. After the solution is applied, the ERM updates its cases base.
V. EXPERIMENTS
The feasibility of our self-healing approach was tested by implementing the control interface. We used C# on the .NET 2005 beta platform to program the different modules, and XML to define behaviors, conversation messages, execution scenarios, patterns, and, last but not least, the data manipulated at run time.
Our experiments started by injecting errors to check the mechanisms for error detection and aspect activation. An example of error was an empty object (NULL) returned by the execution of the “weather collection” operation and then submitted to another operation, namely “report delivery”. Our prototype catches, collects, and locates the error, identifies the appropriate error pattern, and finally determines the aspect associated with this error. All of this happens without propagating the error to the client. We are now working on the injection of the advices related to aspects, to test the reaction of the Web service behavior after this modification.
The prototype architecture, as Fig. 2 shows, is decomposed into two layers: a “system layer” and a “resource layer”. The running system consists of two components: “service execution” and “control interface”. Internal exchanges for instantiation, monitoring, and recovery (i.e., aspect injection) take place between these two components. The control interface is connected to the different XML databases (scenarios, aspects, patterns, cases) managed by its modules.
VI. DISCUSSIONS
When it comes to self-healing, several requirements are suggested in the literature, such as adaptability, dynamicity, and autonomy [20]. We took these requirements into account while working on our approach to self-heal Web services; for example, the Web services adapt their course of actions in reaction to the errors that are detected. We also took into account other guidelines such as failure detection, fault diagnosis, fault healing, and validation [21]. The purpose of these guidelines is to support the completeness, soundness, and robustness of any self-healing approach; if one of them is not satisfied, the usability of the approach can be questioned.

The externalization of self-healing mechanisms is generic in that it deals with multiple Web services at once. This approach, coupled with the black-box nature of Web services, increases the complexity of making them self-heal independently of any human assistance [18]. In contrast, the internal approach is specified within the Web service itself and relies on the knowledge of a Web service’s behaviors. Although most of the aforementioned works adopt an external approach, we adopted a hybrid approach that takes advantage of both: first, the closed loop that the external approach offers [6]; second, the visibility and controllability that the internal approach offers [21]. Visibility is the ability to observe states, outputs, and resource usage during a Web service execution. Controllability is the ability to modify inputs and states while the service is running, to study different behaviors and what-if situations. Given the black-box nature of a Web service, these two functionalities cannot be ensured by an internal approach alone. To this purpose, our approach is based on modeling Web service behaviors.
It implements a model-based approach, where models of the desired Web service behavior govern the self-healing process throughout the service design and implementation phase and the service deployment phase, as in the EU SHADOWS project [19]. We based our approach on two types of models: control/operational and fault. The control/operational model specifies the nominal behavior that must be satisfied by the Web service and is provided by the Web service developers. This model ensures the behavior synchronization in order to facilitate the monitoring, the control of states, and fault localization. The fault model specifies the types of faults that can be identified and repaired by our control interface. We specifically address fault types related to aspects, which were designed and implemented in a separate module of this control interface.
In terms of CBR, our approach differs from the work of Montani and Anglano [18], who infer new cases from previous resolutions using similarity mechanisms. In contrast, our approach verifies whether a case has already been resolved before any similarity reasoning is carried out. In the future, we aim to enhance this part to handle cases that require more than one repair action.
Regarding the effectiveness of our approach, this paper addresses only its feasibility and implementation; its evaluation, including the calculation of metrics, will be addressed in future work.
VII. CONCLUSION
Self-healing is one of the important elements that could enhance the reliability of Web services [21]. In this paper, we examined self-healing Web services by first, describing their control and operational behaviors and second, implementing a control interface that oversees the performance of these Web services and takes corrective actions when necessary. This interface was implemented in compliance with the principles of aspect-oriented programming and case-based reasoning.
Our future work revolves around different aspects. Firstly, we will continue enhancing the prototype to complete additional tests on WeatherWS, and more examples of real Web services will be developed to identify failures that are not detected as expected. Secondly, we would like to make the control interface “learn” new patterns and to study failure possibilities using proactive and predictive methods, so that corrective actions can be taken before a failure occurs. Moreover, we plan to apply our proposed self-healing approach at the composition level, by looking into the combination of the respective self-healing mechanisms of component Web services.
REFERENCES
High-Level Synthesis Using Application-Specific Arithmetic: A Case Study
Yohann Uguen, Florent De Dinechin, Steven Derrien
To cite this version:
Yohann Uguen, Florent De Dinechin, Steven Derrien. High-Level Synthesis Using Application-Specific Arithmetic: A Case Study. 2017. <hal-01502644>
HAL Id: hal-01502644
https://hal.archives-ouvertes.fr/hal-01502644
Submitted on 5 Apr 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Yohann Uguen
Univ Lyon, INSA Lyon, Inria, CITI
F-69621 Villeurbanne, France
Yohann.Uguen@insa-lyon.fr
Florent de Dinechin
Univ Lyon, INSA Lyon, Inria, CITI
F-69621 Villeurbanne, France
Florent.de-Dinechin@insa-lyon.fr
Steven Derrien
University Rennes 1, IRISA
Rennes, France
Steven.Derrien@univ-rennes1.fr
Abstract—On the one hand, a strength of FPGAs is their ability to perform non-standard computations not supported by classical microprocessors. Many libraries of highly customizable application-specific IPs have been developed to exploit this strength.
On the other hand, HLS tools, which allow an FPGA to be programmed using a dialect of the C language, are gaining traction. However, the ease of use of the C language becomes a hindrance when one wants to express non-standard computations. Indeed, the C language was designed for programming microprocessors and carries with it many restrictions of the microprocessor paradigm. This is especially true when computing with floating point, whose data types and evaluation semantics are defined by the IEEE-754 and C11 standards. If the high-level specification is a computation on the reals, HLS imposes a very restricted implementation space.
This work attempts to bridge FPGA application-specific efficiency and HLS ease of use. This is illustrated on the ubiquitous floating-point summation-reduction pattern. A source-to-source compiler rewrites, inside critical loop nests of the input C code, selected floating-point additions into sequences of simpler operators using non-standard arithmetic formats.
Evaluation of this method demonstrates that the benefits of application-specific operators (better performance and better accuracy) can be brought to HLS workflows while keeping their ease of use.
I. INTRODUCTION
Many case studies have demonstrated the potential of Field-Programmable Gate Arrays (FPGAs) as accelerators for a wide range of applications, from scientific or financial computing to signal processing and cryptography. FPGAs offer massive parallelism and programmability at the bit level. This enables programmers to exploit a range of techniques that avoid many bottlenecks of classical von Neumann computing: dataflow operation without the need of instruction decoding; massive register and memory bandwidth, without contention on a register file and single memory bus; operators and storage elements tailored to the application in nature, number and size.
However, to unleash this potential, development costs for FPGAs are orders of magnitude higher than classical programming. High performance and high design costs are the two faces of the same coin.
Hardware design flow and High-level synthesis: To address this, languages such as C or Java are increasingly being considered as hardware description languages. This has many advantages. The language itself is more widely known than any HDL. The sequential execution model makes designing and debugging much easier. One can even use software execution on a processor for simulation. All this drastically reduces development time.
The process of compiling a software program into hardware is called High-Level Synthesis (HLS), with tools such as Vivado HLS \footnote{Vivado HLS, Xilinx, 2011, http://www.xilinx.com/products/design-tools/vivado/} or Catapult C among others \footnote{Catapult C Synthesis, Mentor Graphics, 2011, http://calypto.com/en/products/catapult/overview/}. These tools are in charge of turning a C description into a circuit. This task requires to extract parallelism from sequential programs constructs (e.g. loops) and expose this parallelism in the target design. Today’s HLS tools are reasonably efficient at this task, and can automatically synthesize highly efficient pipelined dataflow architectures.
They however miss one important feature: they are not able to tailor operators to the application in size, let alone in nature. This comes from the C language itself: its high-level datatypes and operators are limited to a small number, more or less matching the hardware operators present in mainstream processors. Indeed, such a high-level language has been designed to be compiled and run on hardware, not to describe hardware. Therefore, HLS tools perform better for general-purpose computing, whereas FPGAs perform better for application-specific computing.
The broader objective of this work is to show that HLS tools can produce application-specific hardware and therefore, unleash FPGAs potential again.
Arithmetic in HLS: To better exploit the freedom offered by hardware and FPGAs, HLS vendors have enriched the C language with integer and fixed-point types of arbitrary size.\footnote{Arbitrary-size floating-point should follow some day; it is well supported by mature libraries and tools.} However, the operations on these types remain limited to the basic arithmetic and logic ones. Exotic or complex operators (for instance for finite-field or floating-point arithmetic) may be encapsulated in a C function that is called to instantiate the operator.
The case study in this work is a program transformation that applies to floating-point additions on a loop’s critical path. It decomposes them into elementary steps, resizes the corresponding sub-components to guarantee some user-specified accuracy, and merges and reorders these components to improve performance. The result of this complex sequence of optimizations could not be obtained from an operator generator, since it involves global loop information.
For this purpose, we envision a compilation flow involving one or several source-to-source transformations, as illustrated by Figure 1. Before detailing it, we must digress a little on the subtleties of the management of floating-point arithmetic by compilers.
_HLS faithful to the floats:_ Most recent compilers, including the HLS ones [10], attempt to follow established standards, in particular C11 and, for floating-point arithmetic, IEEE-754. This brings the huge advantage of almost bit-exact reproducibility: the hardware will compute exactly the same results as the software. However, it also greatly reduces the compiler’s freedom to optimize. For instance, as floating-point addition is not associative, C11 mandates that code written \(a+b+c+d\) be executed as \(((a+b)+c)+d\), although \((a+b)+(c+d)\) would have a shorter latency. This also prevents the parallelization of loops implementing reductions. A reduction is an associative computation which reduces a set of input values into a reduction location. Listing 1 provides the simplest example of a reduction, where \(acc\) is the reduction location.
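Listing 1 itself is not reproduced in this excerpt; a minimal version consistent with the description (a single reduction location `acc` carried across iterations) would be:

```c
#include <assert.h>

/* Sketch of Listing 1: naive floating-point accumulation. Each iteration
   depends on the previous value of acc, serializing the adder. */
float sum_naive(const float *in, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc = acc + in[i];
    return acc;
}
```

The loop-carried dependency on `acc` is exactly what leaves the pipelined adder active only one cycle out of 7 in the first column of Table I.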
The first column of Table I shows how Vivado HLS synthesizes Listing 1 on Kintex7. The floating-point addition takes 7 cycles, and the adder is only active one cycle out of 7 due to the loop-carried dependency. Listing 2 shows a different version of Listing 1 that we coded so that Vivado HLS can express more parallelism. Vivado HLS will not transform Listing 1 into Listing 2 because they are not semantically equivalent (the floating-point additions are reordered as if addition were associative). However, as Listing 2 expresses more parallelism, Vivado HLS is able to exploit it (second column of Table I). The main adder is now active at each cycle on a different sub-sum. Note that a parallel execution with the sequential semantics is also possible, but very expensive [13].
Note that Listing 2 is only here as an example and might need more logic if \(N\) was not a multiple of 10.
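Listing 2 is likewise not reproduced here; a version consistent with the description, using ten independent partial sums (and assuming, as noted, that N is a multiple of 10), could look like:

```c
#include <assert.h>

/* Sketch of Listing 2: ten independent accumulators break the loop-carried
   dependency, so the pipelined adder can work on a different sub-sum each
   cycle. Assumes n is a multiple of 10. */
float sum_partial(const float *in, int n) {
    float s[10] = {0.0f};
    for (int i = 0; i < n; i += 10)
        for (int j = 0; j < 10; j++)
            s[j] += in[i + j];
    float acc = 0.0f;
    for (int j = 0; j < 10; j++)
        acc += s[j];          /* final combine of the partial sums */
    return acc;
}
```

Because the additions are reordered, this version is not bit-exactly equivalent to the naive loop, which is precisely why the compiler cannot derive it automatically under C11 semantics.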
Towards HLS faithful to the reals: Another point of view, chosen in this work, is to assume that the floating-point C program is intended to describe a computation on real numbers when the user specifies it. In other words, the floats are interpreted as real numbers in the initial C code, thus recovering the freedom of associativity (among others). Indeed, most programmers will perform the kind of non-bit-exact optimizations illustrated by Listing 2 (sometimes assisted by source-to-source compilers or “unsafe” compiler optimizations). In a hardware context, we may also assume they wish they could tailor the precision (hence the cost) to the accuracy requirements of the application, a classical concern in HLS [9], [2]. In this case, a pragma should specify the accuracy of the computation with respect to the exact result. A high-level compiler is then in charge of determining the best way to ensure the prescribed accuracy.
The proposed approach uses number formats that are larger or smaller than the standard ones. These, and the corresponding operators, are presented in Section II. The contribution of this paper, namely compiler transformations to generate C descriptions of these operators in an HLS workflow, is presented in Section III. Section IV evaluates our approach on the FPMark benchmark suite.
**II. THE ARITHMETIC SIDE: AN APPLICATION-SPECIFIC ACCUMULATOR IN VIVADO HLS**
The accumulator that we used for this paper is based on a more general idea developed by Kulisch. He advocated a very large floating-point accumulator [14] whose 4288 bits would
cover the entire range of double precision floating-point. Such an accumulator would remove rounding errors from all the possible floating-point additions and sums of products, with the added bonus that addition would become associative.
So far, Kulisch’s full accumulator has proven too costly to appear in mainstream processors. However, in the context of application acceleration with FPGAs, it can be tailored to the accuracy requirements of applications. Its cost then becomes comparable to classical floating point operators, although it vastly improves accuracy [6]. This operator can be found in the FloPoCo [5] generator and in Altera DSP Builder Advanced. Its core idea, illustrated on Figure 2, is to use a large fixed-point register into which the mantissas of incoming floating-point summands are shifted (top) then accumulated (middle). A third component (bottom) converts the content of the accumulator back to the floating-point format. The sub-blocks visible on this figure (shifter, adder, and leading zero counter) are essentially the building blocks of a classical floating-point adder.
The accumulator described here is the one offered in FloPoCo [6]. Although it is not the contribution of this paper, we included two improvements to it:
- In FloPoCo, Float-to-Fix and Accumulator form a single component, which restricts its application to simple accumulations similar to Listing 1. The two components of Figure 2 enable a generalization to arbitrary summations within a loop, as Section III will show.
- Our implementation supports subnormal numbers (special floating-point numbers with leading zeros in their significand that fill the underflow gap around 0).
Note that we could have implemented any other non-standard operator performing a reduction such as [16], [12].
A. The parameters of a large accumulator
The main feature of this approach is that the internal fixed-point representation is configurable in order to control accuracy. It has two parameters:
- MSB_A is the weight of the most significant bit of the accumulator. For example, if MSB_A = 20, the accumulator can accommodate values up to a magnitude of 2^{20} \approx 10^6.
- LSB_A is the weight of the least significant bit of the accumulator. For example, if LSB_A = -50, the accumulator can hold data accurate to 2^{-50} \approx 10^{-15}.
Such a fixed-point format is illustrated in Figure 3.
The accumulator width \( w_a \) is then computed as MSB_A - LSB_A + 1, i.e., 71 bits in the previous example. This represents a wide range and high accuracy, and yet additions on this format have a one-cycle latency at practical frequencies on recent FPGAs. If this is not enough, the frequency can be improved thanks to partial carry save [6], but this was not needed in the present work. For comparison, at the same frequency, a floating-point adder has a latency of 7 to 10 cycles, depending on the target.
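As a quick check of the width formula:

```c
#include <assert.h>

/* Accumulator width in bits from the two fixed-point format parameters. */
int acc_width(int msb_a, int lsb_a) { return msb_a - lsb_a + 1; }
```

With MSB_A = 20 and LSB_A = -50 as in the running example, `acc_width` returns 71; with the validation parameters of Section II-C (MSB_A = 17, LSB_A = -50) it returns 68, matching the `ap_int<68>` declaration of Listing 3.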
In the rest of this paper, the latency of a circuit refers to the number of cycles needed for the entire application to complete.

<table>
<thead>
<tr><th>Listing 1 (float)</th><th>Listing 2 (float)</th><th>Listing 1 (double)</th><th>Listing 2 (double)</th><th>Listing 3 (71 bits)</th><th>FloPoCo VHDL (71 bits)</th></tr>
</thead>
<tbody>
<tr><td>LUTs</td><td>266</td><td>907</td><td>801</td><td>2193</td><td>736</td></tr>
<tr><td>DSPs</td><td>2</td><td>4</td><td>3</td><td>6</td><td>0</td></tr>
<tr><td>Latency</td><td>700K</td><td>142K</td><td>700K</td><td>142K</td><td>100K</td></tr>
<tr><td>Accuracy</td><td>17 bits</td><td>17 bits</td><td>24 bits</td><td>24 bits</td><td>24 bits</td></tr>
</tbody>
</table>

Table I: Different ways of implementing a simple accumulation.
B. Implementation within a HLS tool
This accumulator has been implemented in C, using arbitrary-precision fixed point types (ap_int). The leading zero count, bit range selections and other operations are implemented using Vivado HLS built-in functions. For modularity purposes, the FloatToFix and FixToFloat are wrapped into C functions (28 LoC for the FloatToFix, 22 LoC for FixToFloat) whose calls are inlined to enable HLS optimizations.
Because the internal accumulation is performed on a fixed-point integer representation, the combinational delay between two accumulations is lower than that of a direct floating-point addition. We expect HLS tools to benefit from this delay reduction by applying more aggressive loop pipelining (with a shorter pipeline Initiation Interval), resulting in a design with a shorter overall latency.
C. Validation
To evaluate and refine this implementation, we used Listing 3, which we compared to Listings 1 and 2. In the latter, the loop was unrolled by a factor of 7, the latency of a floating-point adder on our target FPGA (Kintex-7).
For test data, we use as in Muller et al. [17] the input values $c[i] = \cos(i)$, where $i$ is the input array’s index. Therefore the accumulation computes $\sum_i c[i]$.
The parameters chosen for the accumulator are:
- MSBA = 17. Indeed, as we are adding $\cos(i)$ 100K times, an upper bound is 100K, which can be encoded in 17 bits.
- MAXMSBx = 1 as the maximum input value is 1.
- LSBA = -50: the accumulator itself will be accurate to the 50th fractional bit. Note that a float input will see its mantissa rounded by FloatToFix only if its magnitude is smaller than $2^{-25}$, which is very rare. In other words, this accumulator is much more accurate than the data that is thrown at it.
The results are reported in Table I for simple and double precision. The Accuracy line of the table reports the number of correct bits of each implementation, after the result has been rounded to a float. All the data in this table was obtained by synthesis using Vivado HLS followed by place and route using Vivado v2015.4, build 1412921. The table also reports synthesis results for the corresponding FloPoCo-generated VHDL, which does not include the array management.
Vivado HLS uses DSPs to implement the shifts in its floating-point adders. Even if the shifts were implemented in LUTs, the first column would remain well below 500 LUTs: it has the best resource usage. However, the latency of one iteration is 7 cycles, hence 100K iterations take 700K cycles. When unrolling the loop, Vivado HLS uses almost 4 times more LUTs for floats, and 3 times more for doubles. The unrolled versions improve latency over the naive versions. Nevertheless, our approach achieves even better latencies for a reasonable LUT usage. We also achieve maximum accuracy for the float format, which caps at 24 bits (the internal representations of the double, unrolled double and our approach are more accurate than 24 bits, but are then cast to the 24 bits of the float format). Finally, our results are very close to the FloPoCo ones, in terms of LUT usage, DSPs and latency.
Listing 3: Sum of floats using the large fixed-point accumulator
```c
#define N 100000
float acc = 0;
ap_int<68> long_accumulator = 0;
for(int i = 0; i < N; i++) {
long_accumulator += FloatToFix(in[i]);
}
acc = FixToFloat(long_accumulator);
```
Using this implementation method, we also created an exact floating-point multiplier with the final rounding removed, as in [6]. This function is called ExactProduct. Due to lack of space we do not present it in detail. As the output of this multiplier is not standard, we also created an adapted Float-to-Fix block called ExactProductFloatToFix. These functions represent 44 lines of code for ExactProduct and 21 lines of code for ExactProductFloatToFix.
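The ExactProduct block itself is not shown; the arithmetic fact it relies on can be illustrated in plain C: two 24-bit significands produce at most a 48-bit product, so the unrounded product of two floats always fits in a double (53-bit significand).

```c
#include <assert.h>

/* Illustration only (not the paper's ExactProduct code): promoting to
   double before multiplying yields the exact, unrounded product of two
   floats, since 24 x 24 bits -> at most 48 bits < 53. */
double exact_product(float a, float b) {
    return (double)a * (double)b;
}
```

For instance, with a = b = 1 + 2^-23, the exact product 1 + 2^-22 + 2^-46 is representable in double but not in float, so the float multiplication `a*b` rounds while `exact_product` does not.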
III. THE COMPILER SIDE: GeCoS SOURCE-TO-SOURCE TRANSFORMATIONS
We have shown in the previous section that Vivado HLS can be used to synthesize very efficient specialized floating-point operators which rival in quality those generated by FloPoCo. Our goal is now to study how such an optimization can benefit from automation. More precisely, we aim at being able to automatically optimize Listing 1 into Listing 3, and to generalize this transformation to many more situations.
For convenience, we chose to develop our optimization as a source-to-source transformation implemented within the open-source GeCoS compiler framework [8], and aim at making our tool publicly available. Source-to-source compilers are very convenient in an HLS context since they can be used as optimization front-ends on top of closed-source commercial tools.
This work focuses on two computational patterns, namely the accumulation and the sum of products. Both are specific instances of the reduction pattern, which can be optimized by many compilers or parallel run-time environments. Reduction patterns are exposed to the compiler/runtime either through user directives (e.g., the reduction clause of OpenMP), or automatically inferred using static analysis techniques [19], [7].
As the problem of detecting reductions is not the main focus of this work, our tool uses a straightforward solution combining a user directive with (simple) program analysis. More specifically, the user must identify a target accumulation variable through a pragma, and provide additional information such as the dynamic range of the accumulated data along with the target accuracy (in the future, we expect to automate our flow so that the two latter parameters can be omitted).
We found this approach easier, more general and less invasive than those attempting to convert a whole floating-point program into a fixed-point implementation [20].
A. Proposed compiler directive
In imperative languages such as C, reductions are implemented using for or while loop constructs. Our compiler directive must therefore appear right before such a construct. Listing 4 illustrates its usage on the code of Listing 1. The pragma must contain the following information:
- The keyword FPacc, which triggers the transformations.
- The name of the variable in which the accumulation is performed, preceded with the keyword VAR. In the example, the accumulation variable is acc.
- The maximum value that can be reached by the accumulator through the use of the MaxAcc keyword. This value is used to determine the weight MSB_A.
- The desired accuracy of the accumulator using the epsilon keyword. This value is used to determine the weight LSB_A.
- Optional: The maximum value among all inputs of the accumulator in the MaxInput field. This value is used to determine the weight MaxMSB_X. If this information is not provided, then MaxMSB_X is set to MSB_A.
Listing 4: Illustration of the use of a pragma for the naive accumulation
```c
#define N 100000
float accumulation(float in[N]){
float acc = 0;
#pragma FPacc VAR=acc MaxAcc=100000.0 epsilon=1E-15 MaxInput=1.0
for(int i=0; i<N; i++){
acc+=in[i];
}
return acc;
}
```
In the case when no size parameters are given, a full Kulisch accumulator is produced. Also note that the user can safely overestimate the maximum value of the accumulator without major impact on area. For instance, overestimating MaxAcc by a factor of 10 only adds 3 bits to the accumulator width.
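As a rough illustration of how the pragma parameters translate into bit positions, MSB_A and LSB_A can be derived as below. The paper does not spell out its exact rounding conventions, so these formulas are an assumption:

```c
/* Smallest p such that 2^p >= max_acc: position of the most significant bit. */
int msb_from_max(double max_acc) {
    int p = 0;
    double w = 1.0;
    while (w < max_acc) { w *= 2.0; p++; }
    return p;
}

/* Most negative p such that 2^p <= epsilon: position of the least significant bit. */
int lsb_from_eps(double epsilon) {
    int p = 0;
    double w = 1.0;
    while (w > epsilon) { w /= 2.0; p--; }
    return p;
}

/* Total accumulator width, with one extra bit for the sign. */
int acc_width(double max_acc, double epsilon) {
    return msb_from_max(max_acc) - lsb_from_eps(epsilon) + 1;
}
```

For the Fourier benchmark parameters of Section IV (MaxAcc=6000.0, epsilon=1e-7), this yields an MSB of 13 and an LSB of -24, i.e. 14 integer bits (including the sign) and 24 fractional bits, consistent with the figures the paper reports for that benchmark.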
B. Proposed code transformation
The proposed transformation operates on the compiler's intermediate representation (IR), and relies on the ability to identify loop constructs and to expose def/use relations between instructions of the same basic block in the form of an operation dataflow graph (DFG).
To illustrate our transformation, consider the sample program shown in Listing 5. This program performs a reduction into the variable sum, involving both sums and sums of products. The operation dataflow graph associated with the basic block forming the loop body of this program is depicted in Figure 4a. In this figure, dotted arrows represent loop-carried dependencies between operations belonging to distinct loop iterations. Such loop-carried dependencies have a very negative impact on the kernel latency as they prevent loop pipelining. For example, when using a pipelined floating-point adder with a seven-cycle latency, the HLS tool will schedule a new iteration of the loop at best every seven cycles.
As illustrated in Figure 5a, our proposed optimization hoists the floating-point normalization step out of the loop, and performs the accumulation using fixed point arithmetic. Since integer add operations can generally be implemented with a 1-cycle delay, the HLS tool may now be able to initiate a new iteration every cycle, improving the overall latency by a factor of 7.
Listing 5: Simple reduction with multiple accumulation statements
```c
#define N 100000
float computeSum(float in1[N], float in2[N]){
float sum = 0;
#pragma FPacc VAR=sum MaxAcc=300000.0 epsilon=1e-15 MaxInput=3.0
for (int i=1; i<N-1; i++){
sum+=in1[i]*in2[i-1];
sum+=in1[i];
}
return sum;
}
```
The code transformation first identifies all relevant basic blocks (i.e., those associated with the pragma directive). It then performs a backward traversal of the dataflow graph, starting from a Float Add node that writes to the accumulation variable identified by the #pragma.
During this traversal, the following actions are performed depending on the visited nodes:
- A node with the summation variable is ignored.
- A Float Add node is transformed to an accurate fixed-point adder. The analysis is then recursively launched on that node.
- A Float Mul node is replaced with a call to the ExactProduct function followed by a call to ExactProductFloatToFix.
- Any other node has a call to FloatToFix inserted.
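The four rules above can be modeled as a small recursive traversal. This is a toy model only: the real pass operates on the GeCoS IR, and the node structure and action strings below are purely illustrative:

```c
#include <string.h>

typedef enum { ACC_VAR, FADD, FMUL, LEAF } Kind;
typedef struct Node { Kind kind; struct Node *l, *r; } Node;

static char actions[256];                /* trace of rewrite actions */

/* Backward traversal applying the four rewrite rules. */
static void rewrite(const Node *n) {
    switch (n->kind) {
    case ACC_VAR:                        /* rule 1: accumulation variable ignored */
        break;
    case FADD:                           /* rule 2: becomes a fixed-point add, recurse */
        strcat(actions, "[FixAdd]");
        rewrite(n->l);
        rewrite(n->r);
        break;
    case FMUL:                           /* rule 3: ExactProduct + conversion to fix */
        strcat(actions, "[ExactProduct->Fix]");
        break;
    default:                             /* rule 4: wrap any other producer */
        strcat(actions, "[FloatToFix]");
        break;
    }
}
```

For sum += in1[i]*in2[i-1], a Float Add whose operands are the accumulator and a Float Mul, the recorded trace is [FixAdd][ExactProduct->Fix].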
This algorithm rewrites the DFG from Figure 4a into the new DFG shown on Figure 5a. In addition, a new basic block containing a call to FixToFloat is inserted immediately after the transformed loop, in order to expose the floating point representation of the results to the remainder of the program.
From there, it is then possible to regenerate the corresponding C code. As an illustration of the whole process, the synthesized codes from before and after the transformations result in the architectures from Figure 4b and Figure 5b respectively.
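For illustration, the regenerated code for Listing 5 might look like the following sketch. Here int64_t with 32 fractional bits stands in for the ap_int accumulator, and the exact product is emulated by a double multiplication (which is error-free for two floats); the actual generated code calls the ExactProduct helpers of Section II instead:

```c
#include <stdint.h>

#define N 100000
#define FRAC_BITS 32

/* Convert a value (an exact double product, or a float input) into the
 * fixed-point accumulator format. Truncating conversion; the paper's
 * helpers operate on the IEEE-754 bit fields instead. */
static int64_t to_fix(double x) {
    return (int64_t)(x * (double)(1ULL << FRAC_BITS));
}

float computeSum(float in1[N], float in2[N]) {
    int64_t sum_fix = 0;                         /* fixed-point accumulator */
    for (int i = 1; i < N - 1; i++) {
        /* exact product: a float*float product fits in a double without rounding */
        sum_fix += to_fix((double)in1[i] * (double)in2[i - 1]);
        sum_fix += to_fix((double)in1[i]);
    }
    /* single rounding at the very end */
    return (float)((double)sum_fix / (double)(1ULL << FRAC_BITS));
}
```

Since the loop body now contains only integer additions, the loop-carried dependency no longer prevents an HLS tool from pipelining the loop with an initiation interval of one cycle.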
C. Evaluation of the toy example of Listing 5
The proposed transformations work on non-trivial examples such as Listing 5. Table II shows how resource consumption depends on epsilon, all the other parameters being those given in the pragma of Listing 5. All these versions were synthesized for 100 MHz: some circuits need more work than others to reach higher frequencies, so comparing them at higher frequencies would not be fair.
Our transformed code makes Vivado HLS use more LUTs but fewer DSPs than the classical IEEE-754 implementation. This is due to the smaller shifter and multiplier of our implementation. In all cases, on this example, the latency of the transformed code is reduced by a factor of 20. For the naive version, Vivado HLS implements the Float Mul and the first Float Add in a single block with a latency of 10 cycles, and accelerates the two following Float Adds from 14 cycles to 10 cycles using a very short pipeline that still follows the IEEE-754 standard. Comparatively, the transformed code relies on low-latency fixed-point operators that can be fully pipelined.
IV. EVALUATION ON FPMARK BENCHMARKS
In order to evaluate the relevance of the proposed transformations on real-life programs, we used the EEMBC FPMark benchmark suite [1].
This suite consists of 10 programs. A first result is that half of these programs contain visible accumulations:
- Enhanced Livermore Loops (1/16 kernels contains one accumulation)
- LU Decomposition (multiple accumulations)
- Neural Net (multiple accumulations)
- Fourier Coefficients (one accumulation)
- Black Scholes (one accumulation)
The following focuses on these, and ignores the other half (Fast Fourier Transform, Horner’s method, Linpack, ArcTan, Ray Tracer).
Most benchmarks come in single-precision and double-precision versions, and we focus here on the single-precision ones.
### A. Benchmarks and accuracy: methodology
Each benchmark comes with a golden reference against which the computed results are compared. As the proposed transformations are controlled by the accuracy, it may happen that the transformed benchmark is less accurate than the original. In this case, it will not pass the benchmark verification test, and rightly so.
A problem is that the transformed code will also fail the test if it is more accurate than the original. Indeed, the golden reference is the result of a certain combination of rounding errors using the standard FP formats, which we do not attempt to replicate.
To work around this problem, each benchmark was first transformed into a high-precision version where the accumulation variable is a 10,000-bit floating-point number using the MPFR library. We used the result of this highly accurate version as a “platinum” reference, against which we could measure the accuracy of the benchmark’s golden reference. This allowed us to choose our epsilon parameter such that the transformed code would be at least as accurate as the golden reference. This way, the epsilon of the following results is obtained through profiling. The accuracy of the obtained results is computed as the number of correct bits of the result.
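A common way to count correct bits is from the relative error against the platinum reference. The paper does not spell out its exact formula, so the following is one plausible definition:

```c
/* Number of correct bits of x with respect to a nonzero reference ref,
 * roughly -log2(|x - ref| / |ref|), capped at double precision. */
int correct_bits(double x, double ref) {
    double err = (x > ref) ? (x - ref) : (ref - x);
    double mag = (ref < 0.0) ? -ref : ref;
    if (err == 0.0) return 53;           /* exact match: full double precision */
    double rel = err / mag;
    int bits = 0;
    while (rel < 1.0 && bits < 53) {     /* count halvings below the reference */
        rel *= 2.0;
        bits++;
    }
    return bits;
}
```

For example, a result whose relative error is 2^-10 is credited with 10 correct bits under this definition.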
We first present the benchmarks that are improved by our approach, before discussing why we cannot demonstrate the same for the others.
### B. Benchmarks improved by the proposed transformation
**Enhanced Livermore Loops:** This program contains 16 kernels of loops that compute numerical equations. Among these kernels, there is one that performs a sum-of-products (banded linear equations). This kernel computes 20000 sums-of-products. The values accumulated are pre-computed. This is a perfect candidate for the proposed transformations.
For this benchmark, the optimal accumulation parameters were found as:
MaxAcc=50000.0 epsilon=1e-5
MaxInput=22000.0
Synthesis results of both codes (before and after transformation) are given in Table III. As in the previous toy examples, latency and accuracy are vastly improved for comparable area.
**LU Decomposition and Neural Net:** Both the LU decomposition and the neural net programs contain multiple nested small accumulations. In the LU decomposition program, an inner loop accumulates between 8 and 45 values. Such accumulations are performed more than 7M times. In the neural net program, inner loops accumulate between 8 and 35 values, and such accumulations are performed more than 5K times.
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Type</th>
<th>LUTs</th>
<th>DSPs</th>
<th>Latency</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Livermore</td>
<td>Original</td>
<td>384</td>
<td>5</td>
<td>80K</td>
<td>11 bits</td>
</tr>
<tr>
<td></td>
<td>Transformed</td>
<td>576</td>
<td>2</td>
<td>20K</td>
<td>13 bits</td>
</tr>
<tr>
<td>LU-8</td>
<td>Original</td>
<td>809</td>
<td>5</td>
<td>82</td>
<td>8-23 bits</td>
</tr>
<tr>
<td></td>
<td>Transformed</td>
<td>1007</td>
<td>2</td>
<td>17</td>
<td>23 bits</td>
</tr>
<tr>
<td>LU-45</td>
<td>Original</td>
<td>819</td>
<td>5</td>
<td>452</td>
<td>8-23 bits</td>
</tr>
<tr>
<td></td>
<td>Transformed</td>
<td>1034</td>
<td>2</td>
<td>54</td>
<td>23 bits</td>
</tr>
<tr>
<td>Scholes</td>
<td>Original</td>
<td>15640</td>
<td>175</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Transformed</td>
<td>15923</td>
<td></td>
<td>N/A</td>
<td>19 bits</td>
</tr>
<tr>
<td>Fourier</td>
<td>Original</td>
<td>24596</td>
<td>64</td>
<td>N/A</td>
<td>6 bits</td>
</tr>
<tr>
<td></td>
<td>Transformed</td>
<td>24681</td>
<td>59</td>
<td>N/A</td>
<td>11 bits</td>
</tr>
</tbody>
</table>
Table III: Synthesis results of benchmarks before and after transformations
Both of these programs accumulate values from registers or memory that are already computed. It makes these programs good candidates for the proposed transformations.
Vivado HLS is unable to predict a latency for these designs due to their non-constant loop counts; therefore we do not present complete results for these two benchmarks. Still, in order to show that the approach works on these examples, the LU inner loops were transformed and synthesized. Table III shows the results obtained for the smallest (8 terms) and the largest (45 terms) sums-of-products in lines LU-8 and LU-45 respectively. The latency is vastly improved even for the smallest one. The accuracy of the original code varies from 8 to 23 bits between different instances of the loops. To allow a fair comparison, we generated a conservative design that achieves 23-bit accuracy on all loops, at the cost of a sub-optimal amount of resources.
### C. Benchmarks that exposed the limitations of HLS tools
**Black Scholes:** This program contains an accumulation that sums 200 terms. The result of this computation is divided by a constant (that could be optimized by using transformations based on [3]). This process is performed 5000 times.
Here the optimal accumulator parameters are the following:
MaxAcc=245000.0 epsilon=1e-4
MaxInput=278.0
This gives us an accumulator that uses 19 bits for the integer part and 10 bits for the fractional part. The synthesis results are provided in Table III.
For comparable area, accuracy is vastly improved, but latency could not be obtained from Vivado HLS. Indeed, the Black Scholes algorithm uses the mathematical power function. Such a function is not provided by Vivado HLS, so we coded it using a loop with a non-constant trip count. As the latency of this operator depends on the input data, the latency of the whole circuit cannot be determined statically.
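A data-dependent power loop of the kind described above might look like the following sketch; the benchmark's actual implementation is not shown in the paper, and an integer exponent is assumed here for simplicity:

```c
/* Integer-exponent power by repeated multiplication: the trip count
 * depends on the runtime value of e, so an HLS tool cannot bound the
 * loop latency statically. */
float pow_loop(float base, int e) {
    float r = 1.0f;
    for (int i = 0; i < e; i++) {
        r *= base;                       /* one multiply per loop iteration */
    }
    return r;
}
```

Because the trip count is only known at run time, the tool must report the latency of any circuit containing this loop as unknown.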
**Fourier Coefficients:** The Fourier coefficients program, which computes the coefficients of a Fourier series, contains an accumulation performed in single precision. This program comes in three configurations: small, medium and big. Each of them computes the same algorithm but with a different number of iterations. The “big” version is supposed to compute the most accurate answer. We obtain similar results for the three versions of this program; as a consequence, we only present the "big" version here. In this version, there are multiple instances of 2K-term accumulations. The accumulator is reset at every call.
The parameters determined for this benchmark were the following:
MaxAcc=6000.0 epsilon=1e-7 MaxInput=10.0
This results in an accumulator using 14 bits for the integer part and 24 bits for the fractional part. The synthesis results obtained for the original and transformed codes are given in Table III.
Here, area is again comparable and accuracy is improved by 5 bits (which represents one order of magnitude), but latency could not be obtained: Vivado HLS faces the same problem as in Black Scholes and cannot compute the latency of the power function.
Note that our operators have a shorter latency by nature. Therefore, even if Vivado HLS is not able to report latency results, the circuits do have a shorter latency. The pow operator could be implemented as in [4] in the near future, which would allow us to obtain the latency of the last two benchmarks.
V. CONCLUSION
The main result of this work is that HLS tools have the potential to generate efficient designs that handle floating-point computations in a completely non-standard way. The use of application-specific intermediate formats can provide both performance and accuracy at a competitive cost. For this, we have to sacrifice strict compliance with the IEEE-754 and C11 standards; it is replaced by strict compliance with a high-level accuracy specification.
Classically, designers have to face a trade-off between performance and cost. This approach adds computation accuracy to this trade-off. Some designers may not like this. To convince them, consider that established performance benchmarks compute results which are accurate only to a few bits. If only a few bits are important, do we really need to instantiate 32-bit or 64-bit floating-point operators to compute them? Isn’t this accuracy information worth investigating and exploiting?
This work also provides a practical tool that improves a given C program. The input to the tool is application-specific information representing high-level domain knowledge such as the range and desired accuracy of a variable. The resulting code is compatible with Vivado HLS.
The proposed transformation already works very well on all the FPMark benchmarks that contain a reduction, improving both latency and accuracy by an order of magnitude for comparable area.
In the longer term, we believe there is much more to come. The arithmetic optimizations that a classical compiler can perform are severely limited by the fixed hardware of classical processors. When compiling high-level software to hardware, there is much more freedom, hence many more opportunities to build application-specific arithmetic operators.
Future work will explore this new realm, starting with operator specialization; operator fusion as in [21] but at a coarser grain, allowing more aggressive fusion; compile-time generation of application-specific cores; and error analysis as in [15], benefiting from compiler static analysis and, more generally, building upon compiler progress in program analysis.
The Effects of Tinkerability on Novice Programming Skill Acquisition
Tian Luo
Old Dominion University, tluo@odu.edu
Abstract: This paper reports on an exploratory study which used a graphical programming environment, Scratch, to facilitate the comprehension of a scripting programming language, ActionScript. Online survey questionnaires were distributed to 34 students enrolled in a graduate-level programming course, with a 70% response rate. Findings indicated that Scratch contributed to the understanding of basic programming concepts such as event handling, sequential execution, and conditional statements, but it was less helpful in assisting students’ understanding of more abstract concepts such as variables. This study also suggests that students’ learning-style preferences and proficiency with programming make a difference in their perception of using Scratch. Educators are recommended to provide students with explicit guidance on how different programming environments manifest similar programming ideas and concepts.
Introduction
Why is it Difficult to Learn Programming?
Programming has often been perceived as one of the most difficult skills to master among all modern subjects (Lahtinen, Mutka, & Järvinen, 2005). Research studies indicate that its difficulty resides not only in understanding abstract concepts but also in designing or generating a functional program to solve tasks based on this understanding (Brusilovsky, Kouchnirenko, Miller, & Tomek, 1994; Robins, Rountree, & Rountree, 2003).
Researchers have investigated programming education and suggested that different strategies should be available for senior, junior, and novice programmers (Robins et al., 2003) due to the different characteristics of expert and novice programmers. Expert programmers tend to proficiently associate their knowledge schemas with problem-solving strategies such as debugging (Linn & Dalbey, 1989), while novice programmers lack the strengths that experts have and often fail to apply their acquired knowledge to designing a program. They often learn programming line by line without being able to comprehend the overall program structure, and constantly find it difficult to combine and arrange the code into functioning programs (Winslow, 1996).
Why Do We Learn Programming?
Although programming languages are among the most challenging subjects to master, learning programming has gained increasing attention to date. In the 21st century, when most young people are considered digital natives (Prensky, 2001), programming is deemed a critical skill to acquire (Rushkoff, 2010). As Resnick, Kafai, & Maeda (2003) stated, being able to program greatly expands the range of what users can create and how they can express themselves with the computer.
Furthermore, programming promotes computational thinking, which involves analyzing and organizing data and other problem-solving strategies that can be applied to non-programming domains (Wing, 2006). Programming involves the creation of “external representations of problem-solving processes” (Resnick et al., 2009, p.3), and it also provides learners with opportunities to ponder their own thinking, or even to “think about thinking itself” (diSessa, 2000). In many cases, the motivation behind learning programming and understanding programming concepts does not end with the mastery of programming ability itself; instead, it fosters a creative, systematic way of thinking, which is greatly needed in this day and age (Resnick et al., 2009).
Introduction to Scratch
What is Scratch?
Scratch is a visual programming environment which was created by the Lifelong Kindergarten Group from the MIT Media Laboratory in partnership with Yasmin Kafai’s group at UCLA (Maloney, Peppler, Kafai, Resnick, & Rusk, 2008). It is a programming language designed specifically for novice programmers, which supports programming activities that interest young programmers. Scratch enables them to create interactive, media-rich projects such as animated stories, online news shows, games, book reports, music videos, science projects, simulations, tutorials, music projects and interactive presentations. Scratch also encourages them to share their projects with one another on the online community.
A Scratch project has multiple components, which essentially encompasses a stage (also called a background), and a series of movable objects called sprites. Programmers can import built-in sounds and images to a sprite and enable the mobility of a sprite by using variables and scripts. Meanwhile, they can also create their own images and record their own sounds through the paint tool and sound recorder.
Figure 1: Screenshot of Scratch Interface
As demonstrated in Figure 1, the Scratch interface is separated into four sections. On the right-hand side is the stage. Programmers can maximize the stage to full-screen mode to showcase a completed project by clicking the button on the bar below the stage. The section below the stage displays thumbnails of all sprites in the Scratch project. One can select a sprite by simply clicking on its corresponding thumbnail. The middle section displays all the scripts executed in the Scratch project. The left-most section is the palette, which is divided into eight color-coded categories of command blocks that can be dragged into the scripting section. These four sections and the top drop-down menu constitute the overall interface of Scratch.
What can Scratch do for Programming?
Novice programmers can program in Scratch by dragging command blocks from a palette into the scripting pane and then assemble them to create various stacks of blocks (Maloney, 2008). Various blocks can be added on top of a stack of blocks to trigger that stack to be responsive to certain run-time events, such as mouse clicking or keyboard events.
In addition, Scratch provides a number of features within its interface. First, Scratch employs a single-window user interface to make navigation uncomplicated. Scratch is always live, so there is no separate compilation step or edit/run mode distinction. People can remain on-task in testing, debugging, and improving their projects without understanding the mechanisms of compilation. Second, the interface allows users to experiment "with commands and code snippets the way one might tinker with mechanical or electronic components", which its developers call tinkerability (Maloney et al., 2008, p.17). Tinkerability enhances hands-on learning, embraces a bottom-up approach to syntax writing, and facilitates user comprehension of the functionality of blocks. Execution is made visible so that users receive immediate feedback on command sequence and flow of control. Tinkerability is a key feature in Scratch as it reinforces the iterative, incremental programming design process that is critical for novice programmers.
Another feature is that no error messages are shown to users so they are free of handling tedious errors. In addition, a variable can be placed on the stage, which monitors activity in order to assist users in understanding the effects of each command. Therefore, novice programmers only need to know a small set of commands to build whole projects (Maloney, 2010).
**What Concepts does Scratch Teach?**
The programming concepts are taught through manipulating the command blocks in the palette and putting them into the Scripts section. As the command blocks are categorized into eight groups (motion, looks, sound, pen, control, sensing, operators, and variables), the corresponding programming concepts are taught by changing the motion, look, sound, or other functionality of a sprite. For example, under the Motion category, programmers set Cartesian coordinates to change a sprite's position and mobility. Similarly, programmers can also manipulate numbers in the Looks, Sound, and Pen categories and thus alter image transformations, musical notes, and so on. Currently, Scratch accommodates arithmetic, comparison and simple Boolean operations; later it will be able to support higher-level scientific functions such as sine (Maloney et al., 2008). Through using these commands to manipulate numbers and seeing the corresponding change in a sprite, young programmers will have a better understanding of numbers as parameters in a programming language.
Scratch also teaches multiple control structures under the Control category. For example, forever and repeat can be used to understand looping (repeating a series of instructions) in a programming language. The command blocks when key pressed and when sprite clicked demonstrate the event-handling concept: programmers see how the sprite responds to events triggered by the user or by another part of the program. Synchronization is also enabled in Scratch by allowing sets of commands to execute collaboratively: a broadcast can wait for all triggered scripts to complete before continuing.
More advanced programming concepts are introduced in the Operators and Variables categories. and, or, and not are examples of Boolean logic which reside in the Operators category. Programmers can also create their own variables using commands under the Variables category. Two kinds of variables, sprite variables and global variables, are supported by Scratch (Maloney et al., 2008). It is intuitive that sprite variables are visible only to the scripts of the specific sprite, while global variables are visible to all objects in Scratch. Global variables are often harnessed to pass data between objects.
**Significance of the Research**
As the difficulties in learning a programming language were discussed previously, Scratch was recently developed as a new programming-language learning environment that encourages novice learners to better understand programming concepts. Although previous research suggested that it makes learning programming more accessible, meaningful and tinkerable (Maloney et al., 2008; Maloney et al., 2010), there are few exploratory research studies investigating Scratch’s impact on learning other programming languages. Furthermore, although the developers claimed Scratch is programming for all (Maloney et al., 2010), there is only anecdotal evidence to support the statement. It is also unclear whether Scratch’s paradigm is useful for a wide variety of users with various experiences, ages and other backgrounds.
Research Questions
This study investigates the following questions:
1. Does Scratch, a graphical programming environment, help users learn to program?
2. What are the programming concepts that students learn most from Scratch?
3. Do learners use the online learning community specifically designed for potential users?
4. Does the use of Scratch ease the transition to a more advanced scripting language?
5. Do novice programmers have similar experiences as intermediately skilled programmers?
Method
The design methodology used in this research study was a questionnaire survey. The first part examined learners' engagement in Scratch and its potential relationship with their familiarity with programming languages. The second part was meant to pinpoint the specific concepts that students learn most from Scratch. The third part dealt with the overall perception of Scratch and the Scratch community. The last part was a demographic questionnaire. Two open-ended questions asked learners to describe their experience. At the end of the quarter, email surveys were distributed to the 34 students in the course; the response rate was 70%.
The participants in this research were students from an introductory programming class for educators. The primary programming language taught in this course was Actionscript 3.0, a command-line (text-based) language based on the ECMAScript specification. The participants were all graduate-level education majors. Scratch was introduced in the first week of the class as an initial exposure to a programming environment. Since Scratch is relatively self-explanatory and intuitive, students were asked to finish a project using Scratch after two weeks. Students were given no guidance or assistance from the instructor beyond the materials in the online Scratch community. For the project, students were asked to use all eight Scratch categories, which correspond to eight types of basic programming concepts, and to write a description of the project. Students were also encouraged to explore the environment further after completing their first project.
Results and Analysis
Learner’s Profile
The first three questions inquired about students' familiarity with programming languages. Among those students who had programmed before, three had more than ten years of programming experience and four had more than five years. According to Winslow (1996), it takes roughly ten years to turn a novice into an expert programmer; some of the students can therefore be classified as expert, or at least senior, programmers.
Understanding of Programming Concepts
As seen from Table 1, Event handling (i.e. when key pressed, when sprite clicked), Conditional statements (if, if-then), and Sequence (thinking of the order of steps; sequential programming) are the top three concepts in which learners reported improving most from playing with Scratch. These were followed by Object-oriented programming (treating a sprite as an object) and Keyboard input (ask and wait for users to type in the text box). In contrast, Variables and User interface design are the two concepts which learners reported valuing least in Scratch.
In the open-ended questions section, some of the respondents also mentioned the concepts that they learned most from Scratch, which are in accordance with the survey results. One of them stated, “It helped me with conditional statements the most. The colorful blocks in Scratch were helpful because I consider myself to be a visual learner. Scratch gives me a better picture than ActionScript.” Another student commented that it is a preferred
method to teach programming. On the other hand, for intermediate or senior users it was less useful: as one stated, “As I learned programming before, it was just another language to learn those things.”
**Perception of using Scratch**
As indicated in Table 2, Scratch was generally beneficial for students in this class: 71% agreed that Scratch was helpful to their programming language learning, and more than half remarked on their preference for Scratch. However, respondents were less positive when associating Scratch with Actionscript: fewer agreed with statement #5. Overall, a majority of learners in this class believed that Scratch was helpful to their programming language learning, but students were hesitant to make affirmative comments regarding its positive impact on learning Actionscript and their prospective usage.
**The Social Aspect**
The Scratch programming language is tightly associated with the development of the Scratch website (Monroy-Hernández, 2009). The Scratch interface has built-in functions that allow users to share a Scratch project by posting it online, which can be perceived as a way of gaining encouragement from the social community. However, in this study, 18 out of 24 participants agreed with statement #3. On the other hand, most of the students did receive assistance from the online community: 15 out of 24 agreed to some extent with statement #1. The online community therefore still serves as a place where learners can find resources and assistance to complete their Scratch projects.
**Table 1. Programming concepts in Scratch which are helpful for learners’ understanding**
<table>
<thead>
<tr>
<th>#</th>
<th>Statements</th>
<th>SD (strongly disagree)</th>
<th>D (disagree)</th>
<th>SlD (slightly disagree)</th>
<th>SlA (slightly agree)</th>
<th>A (agree)</th>
<th>SA (strongly agree)</th>
<th>Mean</th>
<th>SD (std. dev.)</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>Event handling (i.e. when key pressed, when sprite clicked)</td>
<td>0</td>
<td>2</td>
<td>3</td>
<td>3</td>
<td>10</td>
<td>6</td>
<td>4.63</td>
<td>1.24</td>
</tr>
<tr>
<td>1</td>
<td>Conditional statements (if, if…, then…)</td>
<td>1</td>
<td>1</td>
<td>2</td>
<td>4</td>
<td>12</td>
<td>4</td>
<td>4.54</td>
<td>1.25</td>
</tr>
<tr>
<td>2</td>
<td>Sequence (think of the order of steps, sequential programming)</td>
<td>0</td>
<td>4</td>
<td>3</td>
<td>4</td>
<td>8</td>
<td>5</td>
<td>4.29</td>
<td>1.40</td>
</tr>
<tr>
<td>12</td>
<td>Object-oriented programming (consider sprite as an object)</td>
<td>0</td>
<td>3</td>
<td>2</td>
<td>9</td>
<td>7</td>
<td>3</td>
<td>4.21</td>
<td>1.18</td>
</tr>
<tr>
<td>7</td>
<td>Keyboard input (ask and wait for users to type in the text-box)</td>
<td>1</td>
<td>2</td>
<td>4</td>
<td>4</td>
<td>10</td>
<td>3</td>
<td>4.21</td>
<td>1.35</td>
</tr>
<tr>
<td>4</td>
<td>Iteration (looping, forever and repeat)</td>
<td>1</td>
<td>3</td>
<td>3</td>
<td>6</td>
<td>8</td>
<td>3</td>
<td>4.08</td>
<td>1.38</td>
</tr>
<tr>
<td>8</td>
<td>Random numbers (i.e. pick random integers within a range)</td>
<td>2</td>
<td>1</td>
<td>5</td>
<td>4</td>
<td>9</td>
<td>3</td>
<td>4.08</td>
<td>1.44</td>
</tr>
<tr>
<td>9</td>
<td>Boolean logic (i.e. and, or, not)</td>
<td>1</td>
<td>4</td>
<td>3</td>
<td>3</td>
<td>10</td>
<td>3</td>
<td>4.08</td>
<td>1.47</td>
</tr>
<tr>
<td>11</td>
<td>Coordination and synchronization (i.e. broadcast, when i receive; wait ...seconds; until)</td>
<td>1</td>
<td>3</td>
<td>2</td>
<td>9</td>
<td>8</td>
<td>1</td>
<td>3.96</td>
<td>1.23</td>
</tr>
<tr>
<td>5</td>
<td>Variables</td>
<td>2</td>
<td>3</td>
<td>5</td>
<td>3</td>
<td>8</td>
<td>3</td>
<td>3.88</td>
<td>1.54</td>
</tr>
<tr>
<td>10</td>
<td>User interface design</td>
<td>0</td>
<td>3</td>
<td>8</td>
<td>7</td>
<td>5</td>
<td>1</td>
<td>3.71</td>
<td>1.08</td>
</tr>
<tr>
<td>3</td>
<td>List (arrays)</td>
<td>2</td>
<td>8</td>
<td>2</td>
<td>4</td>
<td>6</td>
<td>2</td>
<td>3.42</td>
<td>1.59</td>
</tr>
</tbody>
</table>
**Table 2. Students’ Perception of using Scratch in this course**
**Challenges**
Although using Scratch had an overall positive impact, seven of the respondents reported that they did not believe Scratch was conducive to their understanding. Many of these respondents failed to find any association between ActionScript and Scratch, for a variety of reasons. Two of them mentioned that the lack of syntax in Scratch made it difficult to relate to the ActionScript environment. Another found it difficult to see any relevance between the two: “I had a hard time grasping what the sprite or the program was attempting to do. I had some idea of what ActionScript was supposed to do, but not a clue what Scratch was trying to accomplish.” The senior learners clearly did not benefit more than the novice learners; indeed, one senior programmer commented bluntly that Scratch was useless and time-consuming for him.
Learning style is another notable factor contributing to a positive perception of Scratch. One respondent remarked, “I understood programming concepts by the way that Scratch presents them visually.” A non-visual learner, by contrast, may find it difficult to learn from Scratch, as another respondent noted.
Lastly, some learners found that a lack of instructions could also hinder learning from Scratch; the environment may not be as intuitive as the instructor assumed. One student said that, without clear instruction or guidance, Scratch made it easy to muddle functions, methods, and variables together.
**Discussion and Recommendation**
The findings in this study show an overall positive perception of using Scratch for learning programming. Some fundamental concepts are easily understood using Scratch, while complex concepts are relatively less understandable. These findings parallel those of Maloney et al. (2008). That said, the findings also suggest some challenges with using Scratch for learning programming. First, as many novice learners find it difficult to relate Scratch to command-line programming, instructors should provide explicit guidance relating the conceptual structures of the two programming environments, especially for students at the beginner level. It would also be helpful to provide materials that walk the learner through some of the basic programming concepts.
Second, there is a great need to differentiate instruction among learners at different levels. As indicated in many previous studies, Scratch is most appropriate for novices, particularly children, who have just started to learn programming (Maloney et al., 2008; Resnick et al., 2009). Therefore, when a class includes several senior programming learners, integrating Scratch may be less justifiable. Alternatively, tasks at different levels could be provided based on each learner's proficiency with programming languages.
Lastly, more encouragement for sharing one’s Scratch project on the online community could enhance the use of the online resource. As Scratch is social in nature (Resnick et al., 2009), students would be able to gain more support if they are more connected to the online community, especially considering the availability of the built-in share function. Furthermore, by being active in the online sphere, learners will be able to not only receive assistance from people online, but also potentially continue to support and collaborate with each other afterwards.
**Conclusions**
As a graphical environment, Scratch allows programmers to snap together a set of command blocks in different sequences and combinations, structuring them much as one would structure statements in a formal programming language. This so-called *tinkerability* not only makes programming more playful and engaging, but also helps programmers experiment with new programming concepts incrementally and iteratively.
By and large, the Scratch learning experience did ease the transition to a more advanced scripting language, as suggested in this exploratory study. However, challenges remain: it is clear that novice programmers benefited more from the Scratch experience than intermediately skilled programmers. Hence, instructors are recommended to separate instruction between these two groups, providing explicit and more specific guidance to novice programmers while encouraging intermediate-level programmers to complete advanced-level tasks in Scratch, so that the full potential of Scratch can be realized.
**References**
15-213
“The course that gives CMU its Zip!”
Web services
Nov 28, 2000
Topics
• HTTP
• Serving static content
• Serving dynamic content
Web history
1945:
• Vannevar Bush publishes “As We May Think”.
– Describes the idea of a distributed hypertext system.
– a “memex” that mimics the “web of trails” in our minds.
1989:
• Tim Berners-Lee (CERN) writes internal proposal to develop a distributed hypertext system.
– connects “a web of notes with links”.
– intended to help CERN physicists in large projects share and manage information
1990:
• Tim BL writes a graphical browser for NeXT machines.
Web history (cont)
1992
• NCSA server released
• 26 WWW servers worldwide
1993
• Marc Andreessen releases first version of NCSA Mosaic browser
• Mosaic version released for (Windows, Mac, Unix).
• Web (port 80) traffic at 1% of NSFNET backbone traffic.
• Over 200 WWW servers worldwide.
1994
• Andreessen and colleagues leave NCSA to form "Mosaic Communications Corp" (now Netscape).
Internet Domain Survey
(www.isc.org)
Internet hosts
Mosaic and Netscape
Web servers
Clients and servers communicate using the HyperText Transfer Protocol (HTTP)
- Client and server establish a TCP connection
- Client requests content
- Server responds with requested content
- Client and server close the connection (usually)
Current version is HTTP/1.1
- RFC 2616, June, 1999.
Web server statistics
source: Netcraft Web Survey
www.netcraft.com/survey
Static and dynamic content
The content returned in HTTP responses can be either static or dynamic.
Static content:
- content stored in files and retrieved in response to an HTTP request
- HTML files
- images
- audio clips
Dynamic content:
- content produced on-the-fly in response to an HTTP request
- Example: content produced by a CGI process executed by the server on behalf of the client.
URIs and URLs
Network resources are identified by Uniform Resource Identifiers (URIs)
The most familiar is the absolute URI known as the HTTP URL:
- http-url = "http:" "//" host [ ":" port ] [ abs_path ]
- port defaults to "80"
- abs_path defaults to "/"
- abs_path ending in / defaults to …/index.html
Examples (all equivalent):
- http://www.cs.cmu.edu:80/index.html
- http://www.cs.cmu.edu/index.html
- http://www.cs.cmu.edu
HTTP/1.1 messages
An HTTP message is either a Request or a Response:
HTTP-message = Request | Response
Requests and responses have the same basic form:
generic-message = start-line
*message-header
CRLF
[message body]
start-line = Request-line | Status line
message-header = field-name ":" [field value] CRLF
message-body = <e.g., HTML file>
HTTP/1.1 requests
Request = Method SP Request-URI SP HTTP-VERSION CRLF
*(general-header | request-header | entity header)
CRLF
[ message-body ]
Method: tells the server what operation to perform, e.g.,
- GET: serve static or dynamic content
- POST: serve dynamic content
- OPTIONS: retrieve server and access capabilities
Request-URI: identifies the resource to manipulate
- data file (HTML), executable file (CGI)
headers: parameterize the method
- Accept-Language: en-us
- User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)
message-body: text characters
HTTP/1.1 responses
Response = HTTP-Version SP Status-Code SP Reason-Phrase CRLF
*(general-header | response-header | entity header) CRLF
[ message-body ]
Status code: 3-digit number
Reason-Phrase: explanation of status code
headers: parameterize the response
• Date: Thu, 22 Jul 1999 23:42:18 GMT
• Server: Apache/1.2.5 BSDI3.0-PHP/FI-2.0
• Content-Type: text/html
message-body:
• file
How servers interpret Request-URIs
GET / HTTP/1.1
• resolves to home/html/index.html
• action: retrieves index.html
GET /index.html HTTP/1.1
• resolves to home/html/index.html
• action: retrieves index.html
GET /foo.html HTTP/1.1
• resolves to home/html/foo.html
• action: retrieves foo.html
GET /cgi-bin/test.pl HTTP/1.1
• resolves to home/cgi-bin/test.pl
• action: runs test.pl
GET http://euro.ecom.cmu.edu/index.html HTTP/1.1
• resolves to home/html/index.html
• action: retrieves index.html
Example HTTP/1.1 conversation
kittyhawk> telnet euro.ecom.cmu.edu 80
Connected to euro.ecom.cmu.edu.
Escape character is '^]'.
Request
sent by client
GET /test.html HTTP/1.1              ;request line
Host: euro.ecom.cmu.edu              ;request hdr
CRLF
Response
sent by server
HTTP/1.1 200 OK                              ;status line
Date: Thu, 22 Jul 1999 03:37:04 GMT          ;response hdr
Server: Apache/1.3.3 Ben-SSL/1.28 (Unix)
Last-Modified: Thu, 22 Jul 1999 03:33:21 GMT
ETag: "48bb2-4f-37969101"
Accept-Ranges: bytes
Content-Length: 79
Content-Type: text/html
CRLF
<html>                                       ;beginning of 79 byte message body (content)
<head><title>Test page</title></head>
<body><h1>Test page</h1>
</html>
OPTIONS method
Retrieves information about the server in general or resources on that server, without actually retrieving the resource.
Request URIs:
• if request URI = “*”, then the request is about the server in general
– Is the server up?
– Is it HTTP/1.1 compliant?
– What brand of server?
– What OS is it running?
• if request URI != “*”, then the request applies to the options that are available when accessing that resource:
– what methods can the client use to access the resource?
OPTIONS (euro.ecom)
Host is a required header in HTTP/1.1 but not in HTTP/1.0
kittyhawk> telnet euro.ecom.cmu.edu 80
Trying 128.2.218.2...
Connected to euro.ecom.cmu.edu.
Escape character is '^]'.
OPTIONS * HTTP/1.1
Host: euro.ecom.cmu.edu
CRLF
HTTP/1.1 200 OK
Date: Thu, 22 Jul 1999 06:12:11 GMT
Server: Apache/1.3.3 Ben-SSL/1.28 (Unix)
Content-Length: 0
Allow: GET, HEAD, OPTIONS, TRACE
kittyhawk> telnet amazon.com 80
Trying 208.216.182.15...
Connected to amazon.com.
Escape character is '^]'.
OPTIONS / HTTP/1.0
CRLF
HTTP/1.0 405 Because I felt like it.
Server: Netscape-Commerce/1.12
Date: Thursday, 22-Jul-99 04:17:32 GMT
Allow: GET, POST
Content-type: text/plain
GET method
Retrieves the information identified by the request URI.
- static content (HTML file)
- dynamic content produced by CGI program
- passes arguments to CGI program in URI
Can also act as a conditional retrieve when certain request headers are present:
- If-Modified-Since
- If-Unmodified-Since
- If-Match
- If-None-Match
- If-Range
Conditional GETs useful for caching
GET (euro.ecom.cmu.edu)
kittyhawk> telnet euro.ecom.cmu.edu 80
Connected to euro.ecom.cmu.edu.
Escape character is '^]'.
GET /test.html HTTP/1.1
Host: euro.ecom.cmu.edu
CRLF
HTTP/1.1 200 OK
Date: Thu, 22 Jul 1999 03:37:04 GMT
Server: Apache/1.3.3 Ben-SSL/1.28 (Unix)
Last-Modified: Thu, 22 Jul 1999 03:33:21 GMT
ETag: "48bb2-4f-37969101"
Accept-Ranges: bytes
Content-Length: 79
Content-Type: text/html
CRLF
<html>
<head><title>Test page</title></head>
<body><h1>Test page</h1>
</body>
</html>
GET request to euro.ecom (Internet Explorer browser)
GET /test.html HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)
Host: euro.ecom.cmu.edu
Connection: Keep-Alive
CRLF
GET response from euro.ecom
HTTP/1.1 200 OK
Date: Thu, 22 Jul 1999 04:02:15 GMT
Server: Apache/1.3.3 Ben-SSL/1.28 (Unix)
Last-Modified: Thu, 22 Jul 1999 03:33:21 GMT
ETag: "48bb2-4f-37969101"
Accept-Ranges: bytes
Content-Length: 79
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html
CRLF
<html>
<head><title>Test page</title></head>
<body>
<h1>Test page</h1>
</html>
GET request to euro.ecom
(Netscape browser)
GET /test.html HTTP/1.0
Connection: Keep-Alive
User-Agent: Mozilla/4.06 [en] (Win98; I)
Host: euro.ecom.cmu.edu
Accept: image/gif, image/x-xbitmap, image/jpg, image/jpeg, image/pjpeg,
image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
CRLF
GET response from euro.ecom
HTTP/1.1 200 OK
Date: Thu, 22 Jul 1999 06:34:42 GMT
Server: Apache/1.3.3 Ben-SSL/1.28 (Unix)
Last-Modified: Thu, 22 Jul 1999 03:33:21 GMT
ETag: "48bb2-4f-37969101"
Accept-Ranges: bytes
Content-Length: 79
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html
CRLF
<html>
<head><title>Test page</title></head>
<body>
<h1>Test page</h1>
</html>
HEAD method
Returns the same response headers as a GET request would have...
but doesn’t actually carry out the request and returns no content
• some servers don’t implement this properly
• e.g., espn.com
Useful for applications that
• check for valid and broken links in Web pages.
• check Web pages for modifications.
HEAD (etrade.com)
kittyhawk> telnet etrade.com 80
Trying 198.93.32.75...
Connected to etrade.com.
Escape character is '^]'.
HEAD / HTTP/1.1
Host: etrade.com
CRLF
HTTP/1.0 200 OK
Server: Netscape-Enterprise/2.01-p100
Date: Fri, 23 Jul 1999 03:18:57 GMT
RequestStartUsec: 780328
RequestStartSec: 932699937
Accept-ranges: bytes
Last-modified: Tue, 20 Jul 1999 00:59:26 GMT
Content-length: 15370
Content-type: text/html
kittyhawk> telnet espn.com 80
Trying 204.202.136.31...
Connected to espn.com.
Escape character is '^]'.
HEAD / HTTP/1.1
Host: espn.com
HTTP/1.1 301 Document Moved
Server: Microsoft-IIS/4.0
Date: Fri, 23 Jul 1999 03:22:32 GMT
Location: http://espn.go.com/
Content-Type: text/html
<html>
Is now part of the http://espn.go.com service<br>
</html>
POST method
Another technique for producing dynamic content.
Executes program identified in request URI (the CGI program).
Passes arguments to CGI program in the message body
• unlike GET, which passes the arguments in the URI itself.
Responds with output of the CGI program.
Advantage over GET method:
• unlimited argument size
Disadvantages:
• more cumbersome
• can’t serve static content
POST request
POST /cgi-bin/post.pl HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg,
image/pjpeg, application/vnd.ms-excel, application/msword,
application/vnd.ms-powerpoint, */*
Accept-Language: en-us
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)
Host: kittyhawk.cmcl.cs.cmu.edu:8000
Content-Length: 25
CRLF
first=dave&last=ohallaron
POST response
HTTP/1.1 200 OK
Date: Fri, 23 Jul 1999 05:42:30 GMT
Server: Apache/1.3.4 (Unix)
Transfer-Encoding: chunked
Content-Type: text/html
CRLF
<p>first=dave&last=ohallaron
(In the response above, the status line and headers are generated by the server; the message body is generated by the CGI script post.pl.)
TRACE, PUT, and DELETE methods
TRACE
• Returns contents of request header in response message body.
• HTTP’s version of an echo server.
• Useful for debugging.
PUT:
• add a URI to the server’s file system
DELETE
• delete a URI from the server’s file system
Serving dynamic content
Client sends request to server.
If request URI contains the string “/cgi-bin”, then the server assumes that the request is for dynamic content.
GET /cgi-bin/env.pl HTTP/1.1
Serving dynamic content
The server creates a child process and runs the program identified by the URI in that process.
Serving dynamic content
The child runs and generates the dynamic content.
The server captures the content of the child and forwards it without modification to the client.
Serving dynamic content
The child terminates.
Server waits for the next client request.
Issues in serving dynamic content
How does the client pass program arguments to the server?
How does the server pass these arguments to the child?
How does the server pass other info relevant to the request to the child?
How does the server capture the content produced by the child?
These issues are addressed by the Common Gateway Interface (CGI) specification.
CGI
Because the children are written according to the CGI spec, they are often called CGI programs.
Because many CGI programs are written in Perl, they are often called CGI scripts.
However, CGI really defines a simple standard for transferring information between the client (browser), the server, and the child process.
add.com:
THE Internet addition portal!
Ever need to add two numbers together and you just can’t find your calculator?
Try Dr. Dave’s addition service at add.com: THE Internet addition portal!
• Takes as input the two numbers you want to add together.
• Returns their sum in a tasteful personalized message.
After the IPO we’ll expand to multiplication!
The add.com experience
Welcome to add.com: THE Internet addition portal.
The answer is: 1 + 5 = 6
Thanks for visiting!
Serving dynamic content with GET
Question: How does the client pass arguments to the server?
Answer: The arguments are appended to the URI
Can be encoded directly in a URL typed to a browser or a URL in an HTML link
- http://add.com/cgi-bin/adder?1&2
- adder is the CGI program on the server that will do the addition.
- argument list starts with “?”
- arguments separated by “&”
- spaces represented by “+” or “%20”
Can also be generated by an HTML form
<form method=get action="http://add.com/cgi-bin/postadder">
Serving dynamic content with GET
URL:
- http://add.com/cgi-bin/adder?1&2
Result displayed on browser:
Welcome to add.com: THE Internet addition portal.
The answer is: 1 + 2 = 3
Thanks for visiting! Tell your friends.
Serving dynamic content with GET
**Question**: How does the server pass these arguments to the child?
**Answer**: In environment variable `QUERY_STRING`
- a single string containing everything after the “?”
- for add.com: `QUERY_STRING = “1&2”`
```c
/* child code that accesses the argument list
 * (requires <stdlib.h> and <string.h>) */
char *buf, *p;
char arg1[128], arg2[128];
int n1, n2;

if ((buf = getenv("QUERY_STRING")) == NULL)
    exit(1);

/* extract arg1 and arg2 from buf (e.g. "1&2") and convert */
if ((p = strchr(buf, '&')) == NULL)
    exit(1);
*p = '\0';
strcpy(arg1, buf);
strcpy(arg2, p + 1);
n1 = atoi(arg1);
n2 = atoi(arg2);
```
Serving dynamic content with GET
**Question:** How does the server pass other info relevant to the request to the child?
**Answer:** in a collection of environment variables defined by the CGI spec.
Some CGI environment variables
General
- SERVER_SOFTWARE
- SERVER_NAME
- GATEWAY_INTERFACE (CGI version)
Request-specific
- SERVER_PORT
- REQUEST_METHOD (GET, POST, etc)
- QUERY_STRING (contains GET args)
- REMOTE_HOST (domain name of client)
- REMOTE_ADDR (IP address of client)
- CONTENT_TYPE (for POST, type of data in message body, e.g., text/html)
- CONTENT_LENGTH (length in bytes)
Some CGI environment variables
In addition, the value of each header of type type received from the client is placed in environment variable HTTP_type
• Examples:
– HTTP_ACCEPT
– HTTP_HOST
– HTTP_USER_AGENT (any “-” is changed to “_”)
Serving dynamic content with GET
**Question:** How does the server capture the content produced by the child?
**Answer:** The child writes its headers and content to stdout.
- Server maps socket descriptor to stdout (more on this later).
- Notice that only the child knows the type and size of the content. Thus the child (not the server) must generate the corresponding headers.
```c
/* child generates the result string */
sprintf(content, "Welcome to add.com: THE Internet addition portal\n <p>The answer is: %d + %d = %d\n <p>Thanks for visiting!\n", n1, n2, n1+n2);
/* child generates the headers and dynamic content */
printf("Content-length: %d\n", strlen(content));
printf("Content-type: text/html\n");
printf("\r\n");
printf("%s", content);
```
Serving dynamic content with GET
bass> tiny 8000
GET /cgi-bin/adder?1&2 HTTP/1.1
Host: bass.cmcl.cs.cmu.edu:8000
<CRLF>
kittyhawk> telnet bass 8000
Trying 128.2.222.85...
Connected to BASS.CMCL.CS.CMU.EDU.
Escape character is '^]'.
GET /cgi-bin/adder?1&2 HTTP/1.1
Host: bass.cmcl.cs.cmu.edu:8000
<CRLF>
HTTP/1.1 200 OK
Server: Tiny Web Server
Content-length: 102
Content-type: text/html
<CRLF>
Welcome to add.com: THE Internet addition portal.
<p>The answer is: 1 + 2 = 3
<p>Thanks for visiting!
Connection closed by foreign host.
kittyhawk>
(Slide labels: the server window above shows the HTTP request as received by the server; the telnet session shows the request sent by the client, the response headers generated by the server, and the content generated by the CGI program.)
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
b60d81b1989b1ecf32e841f16972761f7c88f7a8
|
Parallel Programming
Project Notes
Oregon State
University
Mike Bailey
mjb@cs.oregonstate.edu
Why Are These Notes Here?
These notes are here to:
1. Help you setup and run your projects
2. Help you get the data you collect in the right format for submission
3. Help you get a **better grade** by doing all of this correctly!
Project Notes, I
- Feel free to run your projects on whatever systems you have access to.
- If you don’t have access to your own systems, then you can use what we have at OSU. On-campus users will have access to Windows and Linux systems here. Ecampus users will have remote access to our Linux systems, such as flip.
- Most of the projects will require timing to determine performance. Use the OpenMP timing functions. They give decent answers, and this will make the timing consistent across projects and across people. The OpenMP call:
```c
double prec = omp_get_wtick( );
```
tells you the precision of the clock in seconds. I get 10^{-9} seconds on the systems I’ve been using. (I really doubt that this is true.) The OpenMP call:
```c
double time0 = omp_get_wtime( );
```
samples the clock right now. It gives you wall-clock time in seconds. In parallel computing, memory latency and thread-idle time are part of the equation, so wall clock time is what you want.
How We Will Be Doing Timing
In this class, we don’t want to just implement – we want to characterize performance. What speed-ups do we see, and why do we see them? How do we generalize that to other types of problems? What insights does this give us?
So, as part of your project assignments, you will be doing a lot of timing to determine program speed-ups.
```c
#include <omp.h>

double time0 = omp_get_wtime( );        // seconds
. . .
double time1 = omp_get_wtime( );        // seconds
fprintf( stderr, "Elapsed time = %10.2lf microseconds\n", 1000000. * ( time1 - time0 ) );
```
%10.2lf is a good way to print doubles ("long float")
How Reliable is the Timing?
This way of timing measures **wall-clock time**, which is really what we want to know in a parallel environment, not CPU time.
However, this puts you at the mercy of the other users on the system. If you are on one of our public systems (e.g., flip), I advise you to check the system load to see how much off your wall-clock time measurement will be due to the competition from other users. Use the Linux `uptime` command:
```
flip01 34% uptime
11:13:37 up 96 days, 11:52, 23 users, load average: 3.56, 3.08, 2.82
```
These three numbers represent total CPU load averages for the last 1, 5, and 15 minutes respectively. If the CPU load average is greater than the number of CPUs, then each CPU is over-burdened.
Clearly you want these numbers, especially the 1-minute one, to be as small as possible when you run your test. If they are "big", you might want to ssh to other systems (flip01, flip02, flip03, ...) to see if you can find a better place to run, or try again later.
---
How Reliable is the Timing? A Useful Trick!
I like to check the consistency of the timing by computing both peak speed and average speed and seeing how close they are:
```c
double maxmflops = 0.;
double summflops = 0.;
for( int t = 0; t < NUMTRIES; t++ )
{
        double time0 = omp_get_wtime( );
#pragma omp parallel for
        for( int i = 0; i < ARRAYSIZE; i++ )
        {
                C[i] = A[i] * B[i];
        }
        double time1 = omp_get_wtime( );
        double mflops = (double)ARRAYSIZE / ( time1 - time0 ) / 1000000.;
        summflops += mflops;
        if( mflops > maxmflops )
                maxmflops = mflops;
}
printf( "   Peak Performance = %8.2lf MFLOPS\n", maxmflops );
printf( "Average Performance = %8.2lf MFLOPS\n", summflops/(double)NUMTRIES );
```
This is a reliable result:
- Peak Performance = 1183.31 MFLOPS
- Average Performance = 1141.41 MFLOPS
This is an unreliable result:
- Peak Performance = 627.39 MFLOPS
- Average Performance = 294.86 MFLOPS
You should record the peak performance value. That is as close to the best answer as you will get. But compare it with the average performance value; that tells you how reliable the peak value is.
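That peak-vs-average check can be wrapped in a tiny helper. This sketch is mine, not part of the assignment; the names `Perf` and `Summarize` are illustrative. It takes the MFLOPS measured on each trial and reports both numbers:

```cpp
// Sketch: given the MFLOPS measured on each trial, report the peak and
// the average so you can judge how reliable the peak is.
#include <vector>

struct Perf
{
        double peak;
        double average;
};

Perf Summarize( const std::vector<double> & mflops )
{
        double maxm = 0.;
        double summ = 0.;
        for( double m : mflops )
        {
                summ += m;
                if( m > maxm )
                        maxm = m;
        }
        return { maxm, summ / (double)mflops.size( ) };
}
```

A peak close to the average means the machine was quiet when you ran; a big gap means you were competing with other users and should re-run.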
Project Notes, II
If you are on Linux and have access to the Intel compiler, icpc, don’t use it unless we tell you to! (icpc is so good that it often does optimizations that undermine the very things you are testing.)
Use g++. The compilation sequences are:
On Linux, the typical compile sequence for files that use OpenMP is:
```bash
g++ -o proj proj.cpp -O3 -lm -fopenmp
icpc -o proj proj.cpp -O3 -lm -openmp -align -qopt-report=3 -qopt-report-phase=vec
```
Note that OpenMP should always be included because we are using OpenMP calls for timing.
Note that the second character in the 3-character sequence "-lm" is an ell, i.e., a lower-case L. This is how you link in the Math Library.
Project Notes, III
• Most of these projects will require you to submit graphs. You can prepare the graphs any way you want, except for drawing them by hand. (The Excel Scatter-with-Smooth-Lines-and-Markers works well.) So that we can easily look at each other’s graphs, please follow the convention that up is faster. That is, do not plot seconds on the Y axis because then “up” would mean “slower”. Instead, plot something like Speedup or MFLOPS or frames-per-second.
• I expect the graphs to show off your scientific literacy -- that is, I expect axes with numbers, labels, and units. If there are multiple curves on the same set of axes, I expect to be able to easily tell which curve goes with which quantity. After all, there is a reason this major is called Computer Science. Not doing this makes your project unacceptable for grading.
You lose points if you don’t do it this way.
In Excel, I have had the most success with creating tables that look like this:
<table>
<thead>
<tr>
<th>Threads</th>
<th>1</th>
<th>10</th>
<th>100</th>
<th>1000</th>
<th>10000</th>
<th>100000</th>
<th>500000</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1.44</td>
<td>3.99</td>
<td>8.07</td>
<td>9.13</td>
<td>22.4</td>
<td>25.13</td>
</tr>
<tr>
<td>2</td>
<td>0.23</td>
<td>4.62</td>
<td>19.20</td>
<td>17.91</td>
<td>34.34</td>
<td>49.83</td>
</tr>
<tr>
<td>4</td>
<td>0.34</td>
<td>0.259</td>
<td>16.7</td>
<td>38.66</td>
<td>82.39</td>
<td>91.09</td>
</tr>
<tr>
<td>8</td>
<td>0.26</td>
<td>3.35</td>
<td>15.21</td>
<td>48.49</td>
<td>137.59</td>
<td>166.17</td>
</tr>
</tbody>
</table>
where the 1,2,4,8 rows are holding the number of threads constant, and the 1, 10, 100, 1000, etc. columns are holding the dataset size constant. The cells are holding performance numbers, with higher numbers representing faster performance.
Sweep over the entire table, select Copy, and then insert it into one of the scatterplot options.
Transposing the Graph
To transpose the sense of the graph (which you also need to do), right-click on the border of the graph and then click on "Select Data".
Then click on "Switch Row/Column".
How Did this Monte Carlo Simulation Turn Out?
(Graphs not reproduced here: "Monte Carlo Performance", the same data plotted several ways. Each graph gives you a different insight into what the data is telling you. The data is actually a 3D surface plot; the 2D graphs are sets of 2D slices through that 3D surface.)
Making Graphs
When we plot, we will all put **execution performance on the Y axis** (as opposed to putting elapsed time on the Y axis). Thus, as far as performance goes, up will mean "good". So, for example:
As you can tell, these performance measurements will be far more intelligible when examined as a graph than as raw numbers. *Thus, you are expected to have access to a good automated graphing package.* If you don’t have one, or can’t get access to one – go get one!
Hand-drawn graphs, whether analog or digital, will not be accepted for your assignments.
*You will also need a word processor, with a way to import your tables and graphs, and with a way to turn that document into PDF.*
In our project assignments, you will run benchmarks, that is, you will try your application using several different combinations of parameters. Setting these combinations by hand inside your program one-by-one is a time-consuming pain.
Your time is more valuable than that. Try doing it from a script.
In most C and C++ compilers, there is some mechanism to set a `#define` from outside the program. Many of them use the `–D` construct on the command line:
```sh
#!/bin/csh
#number of threads:
foreach t ( 1 2 4 6 8 )
echo NUMT = $t
g++ -DNUMT=$t prog.cpp -o prog -lm -fopenmp
end
```
Then, in the C or C++ program, all you have to do is use NUMT. For example:
```c
omp_set_num_threads( NUMT );
```
This lets you automatically run your program 5 times with 1, 2, 4, 6, and 8 threads.
You can also test multiple parameters from the same script by nesting the loops. This one is done using C Shell (`csh`):
```sh
#!/bin/csh
# number of threads:
foreach t ( 1 2 4 6 8 )
echo NUMT = $t
# number of subdivisions:
foreach s ( 2 4 8 16 32 64 128 256 512 1024 2048 3072 4096 )
echo NUMS = $s
g++ -DNUMS=$s -DNUMT=$t prog.cpp -o prog -lm -fopenmp
./prog
end
end
```
This lets you automatically run your program 5 times with 1, 2, 4, 6, and 8 threads for each of the subdivision levels.
Or, in *bash* (Bourne-again Shell) ...
```bash
#!/bin/bash
# number of threads:
for t in 1 2 4 6 8
do
echo NUMT = $t
# number of subdivisions:
for s in 2 4 8 16 32 64 128 256 512 1024 2048 3072 4096
do
echo NUMS = $s
g++ -DNUMS=$s -DNUMT=$t prog.cpp -o prog -lm -fopenmp
./prog
done
done
```
Or, in *Python* ...
```python
import os
for t in [ 1, 2, 4, 6, 8 ]:
    print "NUMT = %d" % t
    for s in [ 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096 ]:
        print "NUMS = %d" % s
        cmd = "g++ -DNUMS=%d -DNUMT=%d prog.cpp -o prog -lm -fopenmp" % ( s, t )
        os.system( cmd )
        cmd = "./prog"
        os.system( cmd )
```
Setting up Your Benchmarks to run from Scripts:
#2 -- the Command Line Arguments Approach
Instead of this:
```c
#include <stdio.h>
#include <math.h>
#ifndef NUMT
#define NUMT 8
#endif
#ifndef NUMS
#define NUMS 32
#endif
```
Do this:
```c
#include <stdio.h>
#include <math.h>
int NUMT = 8;
int NUMS = 32;
```
Then, in the C or C++ program, all you have to do is use NUMT to set the number of
threads, like this:
```c
omp_set_num_threads( NUMT );
```
But, the use of the #ifndef/#endif construct has other advantages. It lets you either run
this as a standalone program or run many occurrences of the program from a script.
argc and argv
When you write in C or C++, your `main` program, which is really a special function
call, looks like this:
```c
int main( int argc, char *argv[] )
{
...
}
```
These arguments describe what was entered on the command line used to run the
program.
The `argc` is the number of arguments (the arg Count)
The `argv` is a list of argc character strings that were typed (the arg Vector).
The name of the program counts as the 0th argv (i.e., argv[0])
So, for example, when you type
```
ls -l
```
in a shell, the `ls` program sees argc and argv filled like this:
```
argc = 2
argv[0] = "ls"
argv[1] = "-l"
```
argc and argv
So, if NUMT and NUMS are global int variables:
```c
int NUMT = 2;
int NUMS = 32;
```
and you want to set them from the command line, like this:
```
./prog 1 64
```
Then, inside your main program, you would say this:
```
if( argc >= 2 )
NUMT = atoi( argv[1] );
if( argc >= 3 )
NUMS = atoi( argv[2] );
```
The if-statements guarantee that nothing bad happens if you forget to type values on the command line.
The `atoi` function converts a string into an integer ("ascii-to-integer"). If you ever need it, there is also an `atof` function for floating-point.
shared( ) in the #pragma omp Line
Also, remember, since NUMS is a variable, it needs to be declared as shared in the #pragma omp line:
```
#pragma omp parallel for default(none) shared(NUMS,xcs,ycs,rs,tn) reduction(+:numHits)
```
NUMT does not need to be declared in this way because it is not used in the for-loop that has the #pragma omp in front of it.
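A stripped-down sketch of how that pragma gets used (the loop body here is an illustrative stand-in, not the actual project code): NUMS is listed as shared, the loop index is automatically private, and numHits is accumulated with the reduction clause.

```cpp
// Sketch: NUMS is a shared variable, numHits is a reduction variable.
// Here we count grid points falling inside the unit circle -- an
// illustrative stand-in for the Monte Carlo trial.
int NUMS = 1000;

long CountHits( )
{
        long numHits = 0;
        #pragma omp parallel for default(none) shared(NUMS) reduction(+:numHits)
        for( int i = 0; i < NUMS; i++ )
        {
                double x = (double)i / (double)NUMS;    // i is automatically private
                double y = 0.5;
                if( x*x + y*y <= 1. )
                        numHits += 1;
        }
        return numHits;
}
```

Because x and y are declared inside the loop, they are private to each thread and do not need to appear in the pragma.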
In our project assignments, you will run benchmarks, that is, you will try your application using several different combinations of parameters. Setting these combinations by hand inside your program one-by-one is a time-consuming pain. Your time is more valuable than that. Try doing it from a script.
In most C and C++ compilers, there is some mechanism to set a `#define` from outside the program. Many of them use the `-D` construct on the command line:
```sh
#!/bin/csh
g++ prog.cpp -o prog -lm -fopenmp
#number of threads:
foreach t ( 1 2 4 6 8 )
echo NUMT = $t
./prog $t
end
```
Then, in the C or C++ program, all you have to do is use NUMT. For example:
```c
omp_set_num_threads( NUMT );
```
This lets you automatically run your program 5 times with 1, 2, 4, 6, and 8 threads.
You can also test multiple parameters from the same script by nesting the loops. This one is done using C Shell (`csh`):
```sh
#!/bin/csh
g++ prog.cpp -o prog -lm -fopenmp
# number of threads:
foreach t ( 1 2 4 6 8 )
echo NUMT = $t
# number of subdivisions:
foreach s ( 2 4 8 16 32 64 128 256 512 1024 2048 3072 4096 )
echo NUMS = $s
./prog $t $s
end
end
```
Or, in **bash** (Bourne-again Shell) ...
```bash
#!/bin/bash
g++ prog.cpp -o prog -lm -fopenmp
# number of threads:
for t in 1 2 4 6 8
do
echo NUMT = $t
# number of subdivisions:
for s in 2 4 8 16 32 64 128 256 512 1024 2048 3072 4096
do
echo NUMS = $s
./prog $t $s
done
done
```
Or, in **Python**...
```python
import os
cmd = "g++ prog.cpp -o prog -lm -fopenmp"
os.system( cmd )
for t in [ 1, 2, 4, 6, 8 ]:
    print "NUMT = %d" % t
    for s in [ 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096 ]:
        print "NUMS = %d" % s
        cmd = "./prog %d %d" % ( t, s )
        os.system( cmd )
```
I Don’t Recommend That You Put These Loops in the Program!
I know what you’re thinking.
You’re thinking:
*Those scripts are a pain, and I’ve never done them before. So, I’ll just build the iterations through all the parameters into for-loops in the program.*
Don’t! I have seen evidence that the first time OpenMP does anything, it also does some one-time setups. This will mess up your timing because your first test will seem slower than it should be and the others will seem faster than they should be.
I recommend you run the program *separately* for each combination of parameters. (The script code in the previous pages shows that.)
---
Computing Performance
When computing performance, be sure that the **numerator is amount of work done** and the **denominator is the amount of time it took to do that work**. For example, in the Bezier surface example, computing one height is the work done at each node and you have NUMS*NUMS total nodes, so (NUMS*NUMS)/dt is one good way to measure performance.
NUMS, NUMS*NUMS, 1./dt, and NUMS/dt are not good ways to measure performance as they don’t reflect the true amount of **work done per time**.
If you are using ridiculously high values for NUMS, the quantity NUMS*NUMS might overflow a normal 32-bit int. You can use a long int, or just float each one separately. Instead of (float)(NUMS*NUMS)/dt, you could say (float)NUMS*(float)NUMS/dt
If you are squaring a size number, and are using signed ints, the largest NUMS you can use is:
\[
\sqrt{2,147,483,647} = 46,340
\]
If you are squaring a size number, and are using unsigned ints, the largest NUMS you can use is:
\[
\sqrt{4,294,967,295} = 65,535
\]
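A quick sketch of the overflow fix (the values here are arbitrary, chosen by me for illustration): casting each factor separately makes the multiply happen in double, before any 32-bit overflow can occur.

```cpp
// Sketch: computing (NUMS*NUMS)/dt without overflowing a 32-bit int.
int    NUMS = 50000;    // big enough that NUMS*NUMS overflows a 32-bit int
double dt   = 0.25;     // arbitrary elapsed time in seconds

// Cast each factor separately so the multiply happens in floating point:
double perf = (double)NUMS * (double)NUMS / dt;   // work done per second
```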
Project Turn-in Procedures
Your project turnins will all be electronic.
Your project turnins will be done at http://engr.oregonstate.edu/teach and will consist of:
1. Source files of everything (.cpp, .cl, .cuda)
2. A Linux or Windows executable file, if needed.
3. A report in PDF format. You can .zip everything else if you want, but please leave the PDF as a separate file.
Electronic submissions are due at 23:59:59 on the listed due date.
Your PDF report will include:
1. A title area, including your name, email, project number, and project name.
2. Any tables or graphs to show your results.
3. An explanation of what you did and why it worked the way it did. Your submission will not be considered valid unless you at least attempt to explain why it works the way it does.
Your project will be graded and the score posted to the class web page. Any loss of points will be explained in a Canvas grade comment.
Bonus Days
Projects are due at 23:59:59 on the listed due date, with the following exception:
Each of you has been granted five Bonus Days, which are no-questions-asked one-day project extensions which may be applied to any project, subject to the following rules:
1. No more than 2 Bonus Days may be applied to any one project
2. Bonus Days cannot be applied to tests
3. Bonus Days cannot be applied such that they extend a project due date past the start of Test #2.
Bonus Days
To use one or more Bonus Days on a given project:
- You don't need to let me know ahead of time.
- Turn-in promptness is measured by date. Don’t worry if teach tells you it’s late because it is between 23:30:01 and 23:59:59. But, after 23:59:59 on the posted due date, it’s late!
- teach has been instructed to accept your turn-in, no matter when you do it.
- I will run a script to identify the projects that will have Bonus Days deducted
- You can see how many Bonus Days you have Left by looking in the BDL column of the grading table on the class web site.
A Warning About Virtual Machines
Virtual machines are, apparently, not automatically set up to do multithreading.
If you are running on your own virtual machine, and are getting performance numbers that make absolutely no sense, try using one of the OSU machines.
A Warning about Editing on Windows and Running on Linux
Some of you will end up having strange, unexplainable problems with your csh scripts or .cpp programs. This could be because you are typing your code in on Windows (using Notepad or Wordpad or Word) and then running it on Linux. Windows likes to insert an extra carriage return (‘\r’) at the end of each line, which Linux interprets as a garbage character.
You can test this by typing the Linux command:
```
od -c loop.csh
```
which will show you all the characters, even the ‘\r’ (carriage returns, which you don’t want) and the ‘\n’ (newlines, which you do want).
To get rid of the carriage returns, enter the Linux command:
```
tr -d '\r' < loop.csh > loop1.csh
```
Then run loop1.csh
Or, on some systems, there is a utility called `dos2unix` which does this for you:
```
dos2unix < loop.csh > loop1.csh
```
Sorry about this. Unfortunately, this is a fact of life when you mix Windows and Linux.
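For completeness, here is what that `tr -d '\r'` is doing, as a few lines of C++ (a sketch of mine, in case you ever need to do the same cleanup inside a program):

```cpp
#include <string>

// Remove every carriage return, turning Windows "\r\n" line endings
// into the plain "\n" that Linux expects.
std::string StripCarriageReturns( const std::string & in )
{
        std::string out;
        out.reserve( in.size( ) );
        for( char c : in )
                if( c != '\r' )
                        out += c;
        return out;
}
```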
Fast Rehashing in PRAM Emulations*
Jörg Keller
CWI
Postbus 94079
1090 GB Amsterdam, The Netherlands
Abstract
In PRAM emulations, universal hashing is a well-known method for distributing the address space among memory modules. However, if the memory access patterns of an application often result in high module congestion, it is necessary to rehash by choosing another hash function and redistributing data on the fly. For the case of linear hash functions \( h(x) = ax \mod m \) we present an algorithm to rehash an address space of size \( m \) on a \( p \) processor PRAM emulation in time \( O(m/p + \log p) \). The algorithm requires \( O(\log m) \) words of local storage per processor.
1 Introduction
Parallel machines give their users more and more the view of a global shared memory. This simplifies parallel program design because it frees the programmer from partitioning data and from programming communications in message-passing networks. As massively parallel machines with a physical shared memory are unrealistic, the shared address space is mapped onto distributed memory modules by a hash function and accessed via a packet-switching network, both invisible for the user. A hash function distributes almost every memory access pattern evenly among the memory modules. If a particular application, however, requests one memory module much more frequently than the others, it is necessary to choose a new hash function and redistribute data on the fly. This is called rehashing. Rehashing has often been neglected in theoretical investigations. However, if it can be done fast, it is an important technique to obtain the expected performance without restarting the application.
Rehashing is very simple if there is additional storage of size at least \( m \). Either a shadow memory or disk space of size \( m/p \) per processor is sufficient. The contents of the shared memory can be copied to this additional storage, and then written back in permuted order. This works in time \( O(m/p) \) but is either expensive in case of shadow memory or slow in case of disks. We are interested in rehashing without using secondary storage. We investigate the rehashing problem in the setting of PRAM emulations.
The PRAM (parallel random access machine) [8] is a widely used theoretical machine model for processors working synchronously on a shared memory, with unit memory access time. Many numerical and combinatorial parallel algorithms have been designed for the PRAM [4, 9, 11]. However, massively parallel computers normally consist of \( p = 2^t \) processors and memory modules connected by a packet-switching network, because a physical shared memory would become a bottleneck. Much effort has been put in emulating PRAMs on processor networks [10, 14, 15]. All these solutions are randomized; we omit the deterministic solutions because they use expander graphs and are therefore nonconstructive. A second approach for shared memory emulations uses caches to avoid using the network. An example is the DASH multiprocessor [12]. We do not consider that approach here.
To emulate a PRAM, the shared address space is mapped to the memory modules. Processors that want to access a memory cell send a request across the network to the appropriate module. Multiple threads are run per processor to mask the network latency [2, 5]. The mapping has to guarantee that the number of requests arriving at each memory module (denoted as module congestion) is small for almost all memory access patterns. Otherwise the performance of the emulation gets very poor. This is done by using classes of universal hash functions [6]. Each function of the class provides low module congestion for almost every access pattern. Before running an application, one function of the class is picked randomly. Hence,
the probability of an application using patterns that induce high module congestion is very small.
The emulations mentioned above use polynomials of degree $O(\log p)$. But already Ranade mentions that in his simulations linear functions $h(x) = ax \mod m$ are sufficient [16]. The size of the shared memory is denoted by $m = 2^u$; $a$ must be relatively prime to $m$. The most significant $\log p$ bits of the $u$-bit binary representation of $h(x)$ specify the memory module, the lower $u - \log p$ bits specify the location on that module. Our own detailed simulations support Ranade’s assessment of the usefulness of linear hash functions [7]. In contrast to polynomials, the linear functions bijectively map addresses to memory cells, which avoids secondary hashing at the modules and the waste of memory caused by it [15]. They also have a shorter evaluation time. We will therefore consider linear hash functions.
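To make the address-to-module mapping concrete, the following sketch (with parameters chosen by us purely for illustration, not taken from any emulation) computes $h(x) = ax \bmod m$ for $m = 2^u$ and splits the result into a module number (top $\log p$ bits) and an offset:

```cpp
#include <cstdint>

// Illustrative parameters, not from the paper:
const unsigned u = 20;                    // shared memory size m = 2^u
const unsigned t = 4;                     // p = 2^t memory modules
const uint64_t m = 1ull << u;
const uint64_t a = 123457;                // odd, hence relatively prime to m

uint64_t h( uint64_t x )          { return ( a * x ) & ( m - 1 ); }   // a*x mod m
uint64_t module_of( uint64_t x )  { return h( x ) >> ( u - t ); }     // top log p bits
uint64_t offset_of( uint64_t x )  { return h( x ) & ( ( 1ull << ( u - t ) ) - 1 ); }
```

Since $a$ is odd, $h$ is a bijection on $\{0, \ldots, m-1\}$: module and offset together recover $h(x)$ exactly.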
Unfortunately, if an application uses a memory access pattern that leads to high module congestion, it tends to use this pattern several times. Then it is better to rehash the address space: choose a new hash function $h'(x) = a'x \bmod m$ and redistribute the address space according to the new hash function. If $h$ and $h'$ both are bijective, then the redistribution is a permutation of the contents of the memory cells. It can also be expressed as a permutation $\pi$ of the addresses while still using $h$. This allows us to formulate the rehashing algorithm as a PRAM program that permutes an array of items according to $\pi$.
The permutation problem on PRAMs was investigated by Aggarwal, Chandra and Snir [3]. However, their permutation must be fixed. If we consider the hash functions themselves as permutations of $\{0, \ldots, m-1\}$, then we could think of choosing a start hash function $h_0$ and a fixed permutation $\pi$ and generating other hash functions $h_i = \pi \circ h_{i-1} = \pi^i \circ h_0$ when rehashing for the $i$-th time. However, as the group of units in $\mathbb{Z}/m\mathbb{Z}$ is not cyclic if $m$ is a power of two [17, p. 124], the choice of new hash functions would be restricted. This argument even holds for arbitrary permutations, as the symmetric group $S_n$ is not cyclic for $n > 2$. Hence we must deal with a permutation $\pi$ that is not fixed.
We present an algorithm to permute $m$ data items on a PRAM emulation with $p$ processors and memory modules in time $O(m/p + \log p)$ if the permutation is a linear function. The algorithm does not require any global storage and can therefore be used to rehash the address space of the PRAM emulation.
In section 2 we provide facts and notations to be used later on. In section 3 we present the rehashing algorithm and analyze its runtime and space complexity. In section 4 we show how to decide when to invoke the rehashing algorithm. In section 5 we show that an obvious simplification of the rehashing algorithm will probably be slow due to long cycles.
2 Linear permutations
2.1 Form of permutation $\pi$
We want to express the rehashing problem as a permutation of addresses while still using the hash function $h$. If we do this, we can redistribute the address space by executing the PRAM program to permute the addresses, and then switch the hash function to $h'$. Consider an arbitrary address $x$. Before rehashing, $x$ is mapped to cell $h(x)$, after rehashing it will be mapped to cell $x' = h'(x)$. Before rehashing, address $y = h^{-1}(x')$ is mapped to cell $x'$. Hence, the redistribution can be expressed as permuting addresses according to $\pi(x) = y$.
In $\mathbb{Z}/m\mathbb{Z}$, the numbers relatively prime to $m$ form a multiplicative group, the group of units [17, p. 119]. It follows that $a$ and $a'$ can be inverted and that $h$ and $h'$ are bijective. Then
$$\pi(x) = h^{-1}(h'(x)) = a^{-1}a'x \mod m.$$ \hspace{1cm} (1)
As $a$ and $a'$ are units, $b = a^{-1}a' \bmod m$ also is a unit and $\pi(x) = bx \bmod m$ is bijective. We investigate $m = 2^u$. The group of units here is the set of odd numbers between 1 and $m - 1$.
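Equation (1) and the claim that $b$ is again a unit can be checked directly; the sketch below (illustrative parameters; the three-argument `pow` with a negative exponent needs Python 3.8+) verifies that permuting addresses by $\pi$ under $h$ reproduces the layout of $h'$:

```python
# Sketch of Equation (1): with m = 2**u and odd multipliers a, a', the
# redistribution permutation is pi(x) = a^{-1} a' x mod m.
u = 12
m = 1 << u
a, a_new = 615, 57           # two odd multipliers (illustrative)

b = (pow(a, -1, m) * a_new) % m   # b = a^{-1} a' mod m, again odd

def h(x):     return (a * x) % m
def h_new(x): return (a_new * x) % m
def pi(x):    return (b * x) % m

# Moving the item at address x to address pi(x) while keeping h yields
# the same cell layout as switching the hash function to h'.
ok = all(h(pi(x)) == h_new(x) for x in range(m))
```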
2.2 Structure of permutation $\pi$
We want to permute the addresses without using secondary storage. This can be done by splitting permutation $\pi$ into its cycles $C_1, C_2, \ldots$, distributing the cycles among the processors, and then having each processor permute its assigned cycles sequentially. A processor needs only local space to buffer one item if it permutes a cycle sequentially.
To follow this idea, we need to explore the structure of $\pi$. For each cycle, we need to know an entry element and its length. The length is necessary to schedule the cycles among the processors, as the time to permute a cycle is proportional to its length. Fortunately, the structure of linear permutations is very regular.
For $x$ in $\{1, \ldots, m-1\}$ we define $j(x) = \max\{k \mid 2^k \text{ divides } x\}$. Then every such $x$ has a unique representation
$$x = 2^{j(x)}x',$$
where $0 \leq j(x) < u$ and $x' < m/2^{j(x)}$ is odd. (The address 0 is a fixed point of $\pi$ and needs no treatment.) We can now partition the set $U(m) = \{1, \ldots, m-1\}$ into the sets
$$U_k(m) = \{x \in U(m) \mid j(x) = k\} = \{x \in U(m) \mid x = 2^k x' \text{ and } x' \text{ odd}\}.$$
We apply $\pi$ to an address $x \in U_k(m)$: $\pi(x) = bx \bmod m = b2^k x' \bmod m = 2^k(bx' \bmod m/2^k)$. As $b$ and $x'$ are odd, $bx' \bmod m/2^k$ is odd, and hence $\pi(x)$ is an element of $U_k(m)$, too. We conclude that each cycle of $\pi$ is contained completely in one of the $U_k(m)$. Furthermore, $\phi_k(x) = x/2^k$ is a bijection from $U_k(m)$ to $U_0(m/2^k)$, $\pi_k(x') = bx' \bmod m/2^k$ is a permutation on $U_0(m/2^k)$, and for $x \in U_k(m)$ we have $\pi(x) = \phi_k^{-1}(\pi_k(\phi_k(x)))$. We therefore restrict our attention to the problem of permuting the odd numbers ($U_0$) and then transfer the method to each $U_k(m)$ via $\phi_k^{-1}$. Note that $U_0(m)$ is the set of units and hence a multiplicative group. Consider the cycles of $\pi$ when applied to $U_0(m)$. A cycle starting with an element $x$ has the form $x, bx, b^2x, \ldots, b^{l-1}x, x$, where $l$ is the order of $b$ in $U_0(m)$. We can conclude that all cycles have the same length $l$, which must be a power of two because it divides the order of $U_0(m)$, itself a power of two. The number of cycles $\sigma = |U_0(m)|/l$ then also is a power of two.
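These structural claims are easy to verify exhaustively for a small modulus; the following sketch (illustrative parameters) checks that all cycles on the odd residues share the length $\mathrm{ord}(b)$:

```python
# Empirical check of the cycle structure on U_0(m), the odd residues
# mod m = 2**u: every cycle of pi(x) = b*x mod m has the same length
# l = ord(b), and the number of cycles is sigma = |U_0(m)| / l.
u = 10
m = 1 << u
b = 37                                  # any odd b works

def order(b, mod):                      # multiplicative order of b mod `mod`
    l, y = 1, b % mod
    while y != 1:
        y = (y * b) % mod
        l += 1
    return l

l = order(b, m)
seen, cycles = set(), 0
for x in range(1, m, 2):                # U_0(m): odd residues
    if x in seen:
        continue
    cycles += 1
    y, length = x, 0
    while y not in seen:
        seen.add(y)
        y = (b * y) % m
        length += 1
    assert length == l                  # every cycle has length ord(b)
sigma = (m // 2) // l
```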
We call $x$ the entry element of the cycle and denote the cycle with entry element $x$ by $C(x)$. Note that each element of a cycle can be chosen as its entry element. We try to find a set of entry elements $c_i$, $i = 0, \ldots, \sigma - 1$, such that $C(c_i) \neq C(c_k)$ for $i \neq k$ and that all cycles together span $U_0(m)$. The following lemma shows that there is such a set in which the entry elements have a very regular form.
**Lemma 1** If $b \neq -1$, then the elements $5^k$, and $(-1)5^k$, where $0 \leq k < \sigma/2$, are all on different cycles. If $b = -1$, then the elements $5^k$, where $0 \leq k < \sigma$, are all on different cycles.
**Proof:** $U_0(m)$ is generated by $-1$ and $5$ [17, p. 124]. Each $x \in U_0(m)$ thus has a unique representation $x = (-1)^{\alpha} 5^{\alpha'} \bmod m$, where $\alpha \in \{0, 1\}$ and $\alpha' \in \{0, \ldots, m/4-1\}$. Let $b = (-1)^{\beta} 5^{\beta'}$. If $b = 1$ or $b = -1$, then the result is straightforward.
Let us now consider the case $b \notin \{-1, 1\}$ and hence $\beta' \neq 0$. We have to show that for every $k, v \in \{0, \ldots, \sigma/2 - 1\}$ and every $g \in \{0, \ldots, l - 1\}$, $5^k \neq b^g 5^v$ if $k \neq v$, and $(-1)5^k \neq b^g 5^v$. The first inequality is equivalent to $5^{k-v} \neq b^g$. With $b^g = (-1)^{g\beta} 5^{g\beta'}$, we obtain the requirement $5^{k-v} \neq (-1)^{g\beta} 5^{g\beta'}$. As $0 < |k - v| < \sigma/2$, we have the desired property if $\beta'$ is a multiple of $\sigma/2$.
The second inequality is equivalent to $(-1)5^{k-v} \neq (-1)^{g\beta} 5^{g\beta'}$. In order to meet $(-1) = (-1)^{g\beta}$, $g\beta$ has to be odd, in particular $g \neq 0$. But if $\beta'$ is a multiple of $\sigma/2$, then $5^{g\beta'}$ can never equal $5^{k-v}$ because $0 \leq |k - v| < \sigma/2$.
We finish the proof by showing that $\beta' \neq 0$ is a multiple of $\sigma/2$. Consider $b^l$, which equals $1 \bmod m$ because $l$ is the order of $b$. With the above representation we obtain $(-1)^{l\beta} 5^{l\beta'} \equiv 1 \bmod m$. It follows that $l\beta' \equiv 0 \bmod m/4$. This is equivalent to $\beta' \equiv 0 \bmod (m/4)/l$, because $l$ is a power of two dividing $m/4$. As $l = |U_0(m)|/\sigma = (m/2)/\sigma$, we obtain $(m/4)/l = \sigma/2$. Therefore $\beta'$ must be a multiple of $\sigma/2$. □
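Lemma 1 can likewise be checked exhaustively for small moduli; the sketch below (illustrative values of $b$) confirms that the proposed entry elements lie on pairwise distinct cycles that together cover $U_0(m)$:

```python
# Empirical check of Lemma 1 for m = 2**u: the elements 5^k and
# (-1)*5^k, 0 <= k < sigma/2, lie on pairwise different cycles of
# pi(x) = b*x mod m (for b != -1 mod m) and together span U_0(m).
u = 9
m = 1 << u

def order(b):
    l, y = 1, b % m
    while y != 1:
        y = (y * b) % m
        l += 1
    return l

def cycle(b, x):                        # the cycle of pi containing x
    c, y = set(), x
    while y not in c:
        c.add(y)
        y = (b * y) % m
    return frozenset(c)

for b in (3, 5, 21, 257, m - 3):        # a few odd multipliers != -1
    sigma = (m // 2) // order(b)
    entries = [pow(5, k, m) for k in range(sigma // 2)]
    entries += [m - e for e in entries]
    cycles = {cycle(b, x) for x in entries}
    assert len(cycles) == sigma         # entry elements hit distinct cycles
    assert set().union(*cycles) == set(range(1, m, 2))   # and cover U_0(m)
```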
### 2.3 Working with multi-threaded processors
Assume that the time to access a shared memory cell via the network is $L$. In order to hide this latency from the user, each processor runs $L$ threads. Each thread has its own register set. The threads are executed in a round-robin manner with one instruction per turn. The processors are pipelined with pipeline depth $L$. Hence every $L$ cycles, each thread has executed another instruction. We will call the $N = Lp$ threads of the emulation virtual processors. We assume $N$ to be a power of two.
Consider a problem with sequential time complexity $T$, which is also called work. If it can be completely parallelized on $N$ virtual processors, then it needs $T/N$ steps on a $p$-processor PRAM emulation, each taking $L$ cycles. Thus the runtime will be $T/p$. We will proceed in the same way with the rehashing problem.
### 3 Algorithm
We will now describe the permutation algorithm for a PRAM with $N$ processors. The algorithm works in rounds, in each round one $U_j(m)$ is permuted, as long as $|U_j(m)| \geq N$. All $U_j(m)$ with $|U_j(m)| < N$ are handled together in a final round. We will distinguish $l$ and $\sigma$ in different $U_j(m)$ by an index $j$.
To permute one $U_j(m)$, we have each processor permute $\sigma_j/N$ cycles sequentially if $\sigma_j \geq N$. If there are fewer than $N$ cycles, then $N/\sigma_j$ processors work together to permute one cycle. We split each cycle into pieces of size $N/\sigma_j$; each piece is permuted in one step. Permuting a cycle piece after piece is somewhat tricky, because the virtual processor that picked the last element of a piece may store it only after another processor has picked the first element of the next piece.
### 3.1 Precomputation

The precomputing phase works only on processors’ local memories. Therefore, we will not run multiple threads during the precomputing phase. We assume that physical processor $x$ will run virtual processors $x, x + p, \ldots, x + (L - 1)p$ during the rehashing phase.
The computation of $l_j$ and $\sigma_j$ has to be done once per physical processor and is identical for all processors. We compute a table of the powers $b^{2^i}$ for $0 \leq i < u$ by repeated squaring, $b^{2^{i+1}} = (b^{2^i})^2$. We obtain the $l_j$ by checking for the smallest power $b^{2^i}$ that equals $1 \bmod m/2^j$. As the $l_j$ are decreasing with increasing $j$, we have to traverse the table only once. The $\sigma_j$ are obtained as $|U_j(m)|/l_j$.
To compute entry elements, we build up a table of the powers $5^{2^i}$ in the same way as the table for $b$. Each physical processor $x$ computes $5^x$ as $\prod_{i: x_i = 1} 5^{2^i}$, where $(x_{\log p - 1}, \ldots, x_0)$ is the binary representation of $x$. With the help of this value and the table, the entry elements for each virtual processor run on this physical processor can be computed in constant time per entry element, for an appropriate assignment of cycles to processors.
For the final phase, we split each cycle completely and assign each processor one element to move. This can be done in constant time.
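Putting the pieces together, the whole permutation can be sketched sequentially (a single-processor stand-in for the parallel algorithm; the scheduling of cycles across virtual processors and the piecewise treatment of long cycles are omitted):

```python
# Sequential sketch of the rehashing permutation: move data[x] to index
# pi(x) = b*x mod m for all x, using only O(1) extra space per cycle.
# Entry elements follow Lemma 1: powers 5^k (and their negatives)
# inside each U_j(m).
def order(b, mod):
    l, y = 1, b % mod
    while y != 1:
        y = (y * b) % mod
        l += 1
    return l

def permute_linear(data, b):
    m = len(data)                        # must be a power of two, b odd
    u = m.bit_length() - 1
    for j in range(u):                   # round j handles U_j(m)
        mj = m >> j                      # pi acts on U_j like b mod m/2**j
        l = order(b, mj)
        sigma = (mj // 2) // l           # number of cycles in U_j(m)
        if b % mj == mj - 1:             # special case b = -1 (Lemma 1)
            entries = [pow(5, k, mj) for k in range(sigma)]
        else:
            entries = [pow(5, k, mj) for k in range(sigma // 2)]
            entries += [mj - e for e in entries]
            if not entries:              # sigma == 1: single cycle from 1
                entries = [1]
        for e in entries:
            x = e << j                   # back from U_0(m/2**j) to U_j(m)
            tmp, y = data[x], (b * x) % m
            while y != x:                # walk the cycle, shifting items
                data[y], tmp = tmp, data[y]
                y = (b * y) % m
            data[x] = tmp
    # address 0 is a fixed point of pi and needs no treatment

# Check against the explicit permutation table:
m, b = 256, 77
data = list(range(m))
permute_linear(data, b)
ok = all(data[(b * x) % m] == x for x in range(m))
```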
### 3.2 Analysis
The precomputing phase takes time $O(\log m + L)$. If we only consider bounded-degree networks, then $L = \Omega(\log p)$. Moreover, there are emulations with $L = \Theta(\log p)$ [2, 15]. For $m$ polynomial in $p$, $\log m = \Theta(\log p)$ and hence the time for the precomputing phase is $\Theta(\log p)$. The space needed by each physical processor also is $\Theta(\log p)$.
The rehashing phase is completely parallelized. The total work $T = \Theta(m)$ is distributed evenly and hence the runtime is $O(m/p)$ due to subsection 2.3.
The rehashing phase needs $O(L) = O(\log p)$ space per physical processor.
The total runtime is $O(m/p + \log p)$. For $m \geq p \log p$, this is $O(m/p)$, which is optimal.
### 4 Detection
When using the algorithm for rehashing in a PRAM emulation, we encounter the problem of automatically detecting the necessity to rehash. A complete solution to this problem would consist of predicting the address trace of the remaining program part, computing the distributions with and without rehashing and computing from this the runtimes $T_b$ and $T_a$, respectively. If the time to rehash the address space is $T_r$, then rehashing is useful if $T_b + T_r < T_a$.
However, this prediction is often impossible because of future input or it would take too much time to compute $T_b$ and $T_a$, even if we perform it only every $x$ cycles to predict the next $x$ cycles.
To avoid prediction, we take advantage of the regular structure of programs. Many applications spend most of their time in loops. Hence, future performance can be guessed by observing current performance. A simple approach consists of counting the fraction of stalled cycles in the last $x$ cycles. If this fraction gets larger than a certain user-defined threshold $1/t$, then rehashing is initiated. This detection can be done by maintaining two counters $CO_{ST}$ and $CO_{TO}$ for the number of stalled cycles and the total number of cycles, and a register for storing $t$. In the beginning, both counters are set to zero. When $CO_{TO}$ reaches $x$, we want to check whether
$$\frac{CO_{ST}}{CO_{TO}} > \frac{1}{t}.$$
To do this, we multiply $CO_{ST}$ with $t$ and subtract $CO_{TO}$ from it. If the result is positive, we initiate rehashing. Afterwards, the counters are set to zero again.
This allows the user to define a threshold in a wide range, and detection can be made without floating point operations or divisions. The value of $t$ might depend on the application and on the particular implementation of the rehashing algorithm.
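The check can be written without any division, as in the following sketch (the counter values are hypothetical):

```python
# Sketch of the detection rule from Section 4: rehash when the fraction
# of stalled cycles in the last x cycles exceeds 1/t, tested without a
# division as co_st * t > co_to.
def check_and_reset(state, t):
    co_st, co_to = state
    rehash = co_st * t - co_to > 0      # co_st / co_to > 1/t ?
    return rehash, (0, 0)               # counters are cleared afterwards

rehash, state = check_and_reset((300, 1024), 4)   # 300/1024 > 1/4 ?
```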
5 Simplification of the algorithm
One might think about simplifying the algorithm for rounds where there are fewer than \( N \) cycles. Instead of having several processors permute one cycle, one could use only \( \sigma_j \) processors. The runtime of this round then increases from \( |U_j(m)|/N \) to \( l_j \). If this does not happen too often and the \( l_j \) are not too large, the loss in runtime would be quite small. However, Theorem 2 shows that the probability of a small loss of performance is quite small.
**Theorem 2** Let \( T_0 \) and \( T_1 \) be the runtimes of the original and the simplified algorithm for a randomly chosen \( b \). Then
\[
\text{Prob}(T_1 / T_0 \leq \delta) \leq 4\delta / N \tag{2}
\]
for any real number \( \delta \) with \( 1 \leq \delta \leq N/8 \).
After choosing an element \( b \), the quotient \( T_1 / T_0 \) can be computed in time \( O(\log m) \). One might think to increase \( \text{Prob}(T_1 / T_0 \leq \delta) \) by repeatedly choosing \( b \) until \( T_1 / T_0 \leq \delta \) or until a time bound, e.g., \( m/p \), is reached. However, this would affect the random choice of a new hash function and should not be done.
The proof of Theorem 2 relies on the distribution of orders of elements in \( U_0(m) \). This distribution is given in the following Lemma 3.
**Lemma 3** If we randomly choose an element \( b \) of \( U_0(m) \), then its order can be \( 2^j \), where \( 0 \leq j \leq u-2 \). Furthermore,
\[
\text{Prob}(\text{ord}(b) = 2^j) = \begin{cases}
1/2^{u-1} & \text{if } j = 0 \\
3/2^{u-1} & \text{if } j = 1 \\
2^j/2^{u-1} & \text{if } j \geq 2.
\end{cases} \tag{3}
\]
**Proof:** As the order of \( U_0(m) \) is \( 2^{u-1} \), the order of an element \( b \) has to be a power of two because it has to divide the group’s order. As \( U_0(m) \) is not cyclic [17, p. 124], the order of \( b \) can be at most \( 2^{u-2} \).
The group $U_0(m)$, which is the group of units in $\mathbb{Z}/2^u\mathbb{Z}$, is isomorphic to the product $U' \times U'' = \langle \{0, 1\}, + \bmod 2 \rangle \times \langle \{0, \ldots, 2^{u-2}-1\}, + \bmod 2^{u-2} \rangle$ by an isomorphism $\psi$ [17, p. 124]. The order of an element $b$ in $U_0(m)$ with $\psi(b) = (b_1, b_2)$ is determined by the order of $b_2$ in $U''$ if $b_2 \neq 0$, and by the order of $b_1$ in $U'$ otherwise. $U''$ is cyclic and therefore the number of elements in $U''$ with order $2^j$ equals $\phi(2^j)$ (the Euler function) [17, p. 119]. If $b_2 \neq 0$ and hence $\text{ord}(b_2) \geq 2$, there are two elements $\psi^{-1}(0,b_2)$ and $\psi^{-1}(1,b_2)$ in $U_0(m)$ with order $\text{ord}(b_2)$. If $b_2 = 0$, there are two elements $\psi^{-1}(0,0)$ and $\psi^{-1}(1,0)$ in $U_0(m)$ with orders 1 and 2, respectively. It follows that the number of elements in $U_0(m)$ with order $2^j$ is $2\phi(2^j)$ if $j \geq 2$, $2\phi(2) + 1$ if $j = 1$, and 1 if $j = 0$.
For a randomly chosen element \( b \) in \( U_0(m) \) we can now obtain \( \text{Prob}(\text{ord}(b) = 2^j) \) as the quotient of the number of elements in \( U_0(m) \) with order \( 2^j \) and the order of \( U_0(m) \). With \( \phi(P^r) = (P-1)P^{r-1} \) for a prime \( P \) and an integer \( r \geq 1 \) [17, p. 120], Equation (3) follows.
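The distribution of Equation (3) can be confirmed by exhaustive enumeration for a small modulus:

```python
# Exhaustive check of Lemma 3 for m = 2**u: count elements of each
# order in U_0(m) and compare with the stated probabilities, scaled by
# |U_0(m)| = 2**(u-1).
from collections import Counter

u = 8
m = 1 << u
counts = Counter()
for b in range(1, m, 2):                    # all of U_0(m)
    l, y = 1, b
    while y != 1:
        y = (y * b) % m
        l += 1
    counts[l] += 1

assert counts[1] == 1                       # only b = 1 has order 1
assert counts[2] == 3                       # j = 1: three elements
for j in range(2, u - 1):
    assert counts[1 << j] == 1 << j         # j >= 2: 2**j elements
assert max(counts) == 1 << (u - 2)          # order at most 2**(u-2)
```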
**Proof of Theorem 2:** We will prove the Theorem by computing \( T_0 \), a lower bound \( B \) on \( T_1 \), and \( \text{Prob}(B/T_0 > \delta) \). Then we obtain
\[
\text{Prob}(T_1 / T_0 \leq \delta) = 1 - \text{Prob}(T_1 / T_0 > \delta) \leq 1 - \text{Prob}(B / T_0 > \delta). \tag{4}
\]
We measure the runtime in number of movements per processor. In the original algorithm, this is \( |U_0(m)| / N \) for all stages but the last one, where it is 1. Hence
\[
T_0 = 1 + \sum_{j=0}^{u-1-\log N} |U_j(m)| / N = 2^u / N.
\]
In the simplified algorithm, the runtime increases to \( l_j \) in stages where \( \sigma_j < N \). Hence
\[
T_1 = 1 + \sum_{j=0}^{u-1-\log N} \max(|U_j(m)| / N, l_j). \tag{5}
\]
From \( l_{j+1} \geq l_j / 2 \), it follows that \( l_j \geq l_0 / 2^j \). Let \( 2^x \) denote the order of \( b \), so that \( l_0 = 2^x \). We also know that \( |U_j(m)| = 2^{u-j-1} \). We bound \( T_1 \) from below by putting these facts into Equation (5).
\[
T_1 \geq 1 + \sum_{j=0}^{u-1-\log N} \max(2^{u-j-1-\log N}, 2^{x-j}).
\]
If \( x \leq u-1-\log N \), then the maximum always takes the left term’s value, and it follows that \( T_1 \geq T_0 \). If \( x \geq u-\log N \), then the maximum always takes the right term’s value, and
\[
T_1 \geq 1 + 2^{x+1} - 2^{x-u+\log N + 1}. \tag{6}
\]
If \( u \geq \log N + 1 \), then \( 2^{x-u+\log N + 1} \leq 2^x \) and we can simplify Equation (6) to \( T_1 \geq 2^x \).
With this we have a lower bound \( B \) on \( T_1 \) with
\[
B = \begin{cases}
2^x & \text{if } x \geq u-\log N \\
T_0 & \text{if } x \leq u-1-\log N.
\end{cases}
\]
We use $B$ to compute $\text{Prob}(B/T_0 > \delta)$. $B/T_0 > \delta$ can only happen if $x \geq u - \log N$, because $B = T_0$ otherwise. As $B/T_0 = 2^x/2^{u-\log N}$, the condition $B/T_0 > \delta$ is equivalent to $x > \log \delta + u - \log N = \kappa$. With $\text{ord}(b) = l_0 = 2^x$, we get
$$\text{Prob}(B/T_0 > \delta) = \text{Prob}(x > \kappa) = \sum_{j=\kappa+1}^{u-2} \text{Prob}(\text{ord}(b) = 2^j) = \begin{cases} 1 - 4\delta/N & \text{if } \delta \leq N/8 \\ 0 & \text{otherwise} \end{cases} \quad (7)$$
By combining Equations (4) and (7), we prove the claimed Equation (2) of the Theorem.
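For a small emulation the bound of Theorem 2 can also be verified by brute force over all multipliers $b$ (illustrative parameters; $T_0$ and $T_1$ follow Equation (5)):

```python
# Brute-force check of Theorem 2: compute T0 and T1 for every odd b and
# verify Prob(T1/T0 <= delta) <= 4*delta/N.
u, log_n = 11, 5
m, n = 1 << u, 1 << log_n                # memory size, N virtual processors

def order(b, mod):
    l, y = 1, b % mod
    while y != 1:
        y = (y * b) % mod
        l += 1
    return l

def runtimes(b):
    t0 = t1 = 1                          # the final round moves one item
    for j in range(u - log_n):           # rounds with |U_j(m)| >= N
        size = 1 << (u - j - 1)          # |U_j(m)|
        lj = order(b, m >> j)
        t0 += size // n
        t1 += max(size // n, lj)
    return t0, t1

results = [runtimes(b) for b in range(1, m, 2)]
bounds_hold = all(
    sum(1 for t0, t1 in results if t1 <= delta * t0) / (m // 2)
    <= 4 * delta / n
    for delta in (1, 2, 4))              # delta <= N/8
```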
6 Conclusions
PRAM emulations that use linear hash functions can be rehashed in optimal time. The algorithm does not require secondary storage devices like hard disks. The computations only require multiplications and shifts of integers at instruction level. Only the detection of rehashing needs two counters. The counter $CO_{TO}$ is normally present in the system as a timer, and the counter $CO_{ST}$ can be realized in software: one can modify the compiler to increase a register $R$ by the number of executed instructions at the end of each basic block, which gives $CO_{ST} = CO_{TO} - R$. Therefore the rehashing algorithm can be implemented without any hardware changes.
The practical usefulness of rehashing has not yet been tested, because there is no working prototype of a PRAM emulation. However, Lipton and Naughton [13] construct programs that use timers to measure emulation times of PRAM steps and base their future behaviour on these times. These programs are called “clocked adversaries” and they lead provably to bad distributions of requests and hence to long runtimes. This hints that rehashing will be needed in practice.
The concept of rehashing will be implemented in the SB-PRAM [1], the prototype of the PRAM emulation described in [2].
It is still an open problem whether on-line rehashing is possible. By on-line rehashing, we understand that $c$ steps of the PRAM application and $c$ steps of the rehashing procedure can be executed alternately for the time span of rehashing. Currently, the PRAM application has to be stopped while rehashing the address space.
Acknowledgements
I am very thankful to Dany Breslauer for suggestions about the choice of entry elements and to Martin Dietzfelbinger for many stimulating discussions. I also want to thank Stefan Ellwert and Volker Müller for providing some help in algebraic notation.
References
Code Generation for High-Level Synthesis of Multiresolution Applications on FPGAs
Moritz Schmid, Oliver Reiche, Christian Schmitt, Frank Hannig, and Jürgen Teich
Hardware/Software Co-Design, Department of Computer Science
Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany.
Abstract— Multiresolution Analysis (MRA) is a mathematical method that is based on working on a problem at different scales. One of its applications is medical imaging, where processing at multiple scales, based on the concept of Gaussian and Laplacian image pyramids, is a well-known technique. It is often applied to reduce noise while preserving image detail on different levels of granularity without modifying the filter kernel. In scientific computing, multigrid methods are a popular choice, as they are asymptotically optimal solvers for elliptic Partial Differential Equations (PDEs). As such algorithms have a very high computational complexity that would overwhelm CPUs in the presence of real-time constraints, application-specific processors come into consideration for implementation. Despite huge advancements in leveraging productivity in the respective fields, designers are still required to have detailed knowledge about coding techniques and the targeted architecture to achieve efficient solutions. Recently, the HIPAcc framework was proposed as a means for automatic code generation of image processing algorithms, based on a Domain-Specific Language (DSL). From the same code base, it is possible to generate code for efficient implementations on several accelerator technologies, including different types of Graphics Processing Units (GPUs) as well as reconfigurable logic (FPGAs). In this work, we demonstrate the ability of HIPAcc to generate code for the implementation of multiresolution applications on FPGAs and embedded GPUs.
I. INTRODUCTION
A few among numerous applications of MRA are signal detection, differential equation solving, information retrieval, computer vision, as well as signal and image processing. The algorithms used to solve problems in industry and scientific computing are becoming more and more complex and must deliver enough performance to process vast amounts of data, often under rigid resource and energy constraints. Due to these requirements, hardware accelerators such as embedded GPUs and Field Programmable Gate Arrays (FPGAs) are ideal targets for the implementation. Although there has been tremendous progress in making the respective programming models more approachable, a deep understanding of the algorithmic details and the hardware architecture is necessary to achieve good results. To ease the burden on developers, DSLs aim at combining architecture- and domain-specific knowledge, thereby delivering performance, productivity, and portability. DSLs have been researched for a long time for Central Processing Units (CPUs) as well as GPUs, and recently have also targeted hardware design [15, 7], which has mostly been the prime domain of High-Level Synthesis (HLS). Over the past decades, C-based HLS focusing on FPGAs has become very sophisticated, producing designs that can rival hand-coded Register-Transfer Level (RTL) implementations. A drawback is that these frameworks must be very flexible, and although being able to create an efficient hardware design from a C-based language can significantly shorten the development time, architectural knowledge and specific coding techniques are still a must. A remedy to this situation is to increase the level of abstraction even further and use a domain-specific framework to generate code for FPGA HLS. HIPAcc is a publicly available framework for the automatic code generation of image processing algorithms on GPU accelerators.
Starting from a C++ embedded DSL, HIPAcc delivers tailored code variants for different target architectures, significantly improving the programmer’s productivity. Recently, HIPAcc was extended to also generate C++ code for the C-based HLS framework Vivado HLS from Xilinx [13]. The design flow of the approach is depicted in Figure 1. A recent addition to HIPAcc is the support for multiresolution applications from image processing and scientific computing. The key contributions of this work are therefore: (a) we show how code for multiresolution structures can be automatically generated for C-based HLS, and (b) we demonstrate the versatility of the approach by presenting two case studies, involving applications from medical image processing and scientific computing. The generated target code is derived from a high-level description for image processing algorithms. Therefore, this work uses the high-level description presented in [9].
II. BACKGROUND
In multiresolution processing, a certain data set will be represented on different resolution levels. Starting at full resolution (base), for each consecutive level a more coarse-grained representation of the data set is created, as shown in Figure 2. On each level, the same computational operations can be applied, affecting a different relative region size, without modifying the filter kernel. The recursiveness in multiresolution methods and the high degree of parallelism makes these an ideal target for data streaming-oriented FPGA-acceleration. For HLS it is especially beneficial, that the basic construction steps to traverse multiresolution data and the processing function can be reused at every level and must therefore only be designed once.

As the granularity decreases by a factor of four at every stage from the bottom to the top, the accelerator can be designed for a single clock domain by appropriately reducing the throughput by a factor of four compared to the predecessor stage. An ideal method to achieve this in high-level synthesis is to adapt the pipeline initiation interval (II). Designing for a single clock domain also has the advantage that resource requirements can be reduced by relaxing the performance constraints on the coarse-grained higher levels. For example, a divider that has to process a new value in every clock cycle on the lowest level only has to process a new value in every fourth cycle on the next higher level. To save resources, it can either be adapted to operate at an II of four, or, if the algorithm requires more than a single division, it can be shared between computations. A major concern for multiresolution systems on FPGAs is the limited amount of on-chip memory. As data must be merged at the end of the processing cycle, large amounts of data must be buffered on the lowest level while waiting for data from the higher levels. If the buffers are not sufficiently large, the design might not be able to complete the processing cycle. On the other hand, the size of the buffers affects block RAM and logic resource usage and should therefore not be set too large. Current mid-range FPGAs provide sufficient memory resources for data sets of up to one million samples in floating point representation. If larger data sets should be processed, the buffers on the lower levels might need to be offloaded to external memory.
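As a language-agnostic illustration of the pyramid construction described above (plain Python rather than HLS C++; 1-D data and a simple [1 2 1]/4 kernel, both chosen for brevity):

```python
# Sketch of pyramid construction: smooth with a small kernel, then
# downsample by two per level.  For 2-D images the sample count halves
# in both dimensions, giving the factor-of-four reduction per level.
def smooth(signal):
    # [1 2 1]/4 kernel with clamped borders
    n = len(signal)
    return [(signal[max(i - 1, 0)] + 2 * signal[i] + signal[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def pyramid(signal, levels):
    out = [signal]
    for _ in range(levels - 1):
        out.append(smooth(out[-1])[::2])    # smooth, then decimate
    return out

pyr = pyramid([float(i % 8) for i in range(64)], 4)
```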
III. PROGRAMMING MODEL
The Heterogeneous Image Processing Acceleration (HIPAcc) framework consists of a DSL for image processing that is embedded into C++ and a source-to-source compiler. Exploiting the compiler, image filter descriptions written in DSL code can be translated into multiple target languages such as Compute Unified Device Architecture (CUDA), Open Computing Language (OpenCL), Renderscript as used on Android, and C++ code that can be further processed by Vivado HLS [13]. In the following, we will use the Gaussian filter as an example for describing image filters, briefly describe properties of the DSL, and show how code generation is accomplished.
1) Domain-Specific Language: Embedded DSL code is written by using C++ template classes provided by the HIPAcc framework. The most essential C++ template classes for writing 2D image processing DSL codes are: (a) an Image, which represents the data storage for pixel values; (b) an IterationSpace defining the Region Of Interest (ROI) for operating on the output image; (c) an Accessor defining the ROI of the input image and enabling filtering modes (e.g., nearest neighbor, bilinear interpolation, etc.) on mismatch of input and output region sizes; (d) a Kernel specifying the compute function executed by multiple threads, each spawned for a single iteration space point; (e) a Domain, which defines the iteration space of a sliding window within each kernel; and (f) a Mask, which is a more specific version of the Domain, additionally providing filter coefficients for that window. Image accesses within the kernel description are accomplished by providing relative coordinates. To avoid out-of-bound accesses, kernels can further be instructed to implement a certain boundary handling (e.g., clamp, mirror, repeat) by specifying an instance of class BoundaryCondition.
To describe the execution of a Gaussian filter, we need to define a Mask and load the Gaussian coefficients, defined as constants, see Listing 1 (lines 6–10). It is further necessary to create an input and an output image for storing pixel data and loading initial image data into the input image (lines 11–15). The input image is bound to an Accessor with enabled boundary handling mode clamping (lines 18–19). After defining the iteration space, the kernel can be instantiated (line 25) and executed (line 28). The actual Kernel implementation is defined elsewhere and not of further importance for the remainder of this work.
In order to describe multiresolution algorithms more efficiently, HIPAcc recently introduced built-in support for image pyramids [12], a common representation of multiresolution data within the domain of image processing [2], which can also be used to describe the multiple scales of data in the multigrid method. To operate on image pyramids, multiple images for the different resolution levels are created, which are then processed by kernels to provide data exchange between these levels (downscaling and upscaling) and between images within the same level. The execution order of those kernels can typically be described in a recursive manner, meaning first downscaling is applied until a certain level, then some operations are executed on one or more of these levels before upscaling is applied to obtain the final image. HIPAcc's language support for image pyramids includes (a) Pyramid, a data structure for automatically creating and encapsulating multiple Images of different resolution; and (b) traverse(), a
recursive function embodying the necessary kernel calls for downsampling, upsampling and computations on the same resolution level.
For the multigrid method, the data flow is different: it is not the input data itself that is sampled down; instead, the residual is calculated and then downsampled (an operation called restriction). Structurally, however, it is comparable. W-cycles, in which the recursion to the coarser level is carried out twice (in contrast to the V-cycle), can be described by adding an argument to the \texttt{traverse()} function call.
2) Generating Code for Vivado HLS: The HIPAcc compiler is based on the Clang/LLVM 3.4 compiler infrastructure\textsuperscript{1}. Utilizing the Clang front end, HIPAcc parses C/C++ code and generates an internal Abstract Syntax Tree (AST) representation. Targeting Vivado HLS for code generation involves numerous challenges. Mismatching image sizes between pyramid levels (e.g., down- and upsampling) need to be handled appropriately, in particular when transforming the buffer-wise execution model, where kernels are issued one by one, into streaming buffers for pipelining. A pipelined structural description has to be inferred from the linear execution order of kernels. Furthermore, kernel implementations need appropriate placement of Vivado HLS \texttt{pragmas} depending on the desired target optimization.
IV. STREAMING PIPELINE
High-level programs given in HIPAcc DSL code process image filters buffer-wise. Each kernel reads from and writes to buffers sequentially, running one after another with buffers serving as synchronization points (so-called host barriers). Buffers can be read and written, copied, reused, or allocated only for the purpose of storing intermediate data.
The buffered concept is fundamentally different from streaming data through kernels and processing a computational step as soon as all input dependencies are available. Kernels are therefore interconnected with each other using stream objects implementing First In First Out (FIFO) semantics. This streaming concept requires a structural description, resolving direct data dependencies, unconstrained from the exact sequential ordering of kernel executions.
We can transform the buffer-wise execution model into a structural description suitable for streamed pipelining by analyzing the DSL host code, replacing memory transfers by stream objects, and generating appropriate kernel code. Vivado HLS can then be instructed to run all kernels in parallel, which can deliver a significantly shorter processing time.
A. Generating the Pipeline
DSL code is translated into an AST representation that is traversed by HIPAcc. During this traversal process, we track the use of buffer allocations, memory transfers and kernel executions by detecting compiler-known classes. For each kernel, the direct buffer dependencies are analyzed and fed into a dependency graph.
Given this graph, we can build up our internal representation, a simplified AST-like structure based on a bipartite graph consisting of two vertex types: \textit{space}, representing buffers, and \textit{process}, marking kernel executions. By traversing the kernel executions in the sequential order in which they are specified, writes to buffers are transferred to the internal representation in Static Single Assignment (SSA) manner; reused buffers thereby form new \textit{space} vertices in the graph. Furthermore, when the inputs of multiple kernels depend on the same buffer and the same temporal instance of intermediate data, these dependencies must be replaced by a \textit{process} that splits the data, followed by multiple \textit{spaces}, one for each kernel. This guarantees that streaming data will later be copied before being handed over to the next computation steps.
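The analysis above can be sketched in a few lines; the kernel/buffer record types here are invented for illustration and do not mirror HIPAcc's internal classes. Each buffer write creates a new SSA version (a space vertex), and any space consumed by more than one kernel receives an explicit split process:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

struct KernelCall {                  // one kernel execution in program order
    std::string name;
    std::vector<std::string> inputs; // buffer names read
    std::string output;              // buffer name written
};

struct PipelineGraph {
    std::vector<std::string> spaces;    // SSA buffer versions, e.g. "tmp#1"
    std::vector<std::string> processes; // kernel executions plus inserted splits
    int splits = 0;                     // number of split processes inserted
};

PipelineGraph build_graph(const std::vector<KernelCall>& calls) {
    PipelineGraph g;
    std::map<std::string, int> version;   // current SSA version per buffer
    std::map<std::string, int> consumers; // number of reads per SSA space
    auto ssa = [&](const std::string& b) {
        return b + "#" + std::to_string(version[b]);
    };

    for (const auto& k : calls) {
        for (const auto& in : k.inputs) consumers[ssa(in)]++;
        g.processes.push_back(k.name);
        version[k.output]++;              // each write creates a new version
        g.spaces.push_back(ssa(k.output));
    }
    // A space read by n > 1 kernels needs a split process feeding n copies,
    // so streamed data is duplicated before being consumed.
    for (const auto& [space, n] : consumers) {
        if (n > 1) {
            g.processes.push_back("split(" + space + ")");
            g.splits++;
            for (int i = 0; i < n; ++i)
                g.spaces.push_back(space + ".copy" + std::to_string(i));
        }
    }
    return g;
}
```

For a chain where two kernels consume the same intermediate buffer, exactly one split process is inserted between the producing kernel and its two consumers.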
Similar considerations apply to filtering, which is applied on a mismatch of IterationSpace size and Accessor size in order to match buffer accesses. For the Vivado HLS target, unfiltered access, nearest neighbor, and bilinear filtering are supported by HIPAcc. The size discrepancy must be a power-of-two factor, so that every value of the more coarse-grained levels matches exactly an integral number of input values. Considering multiple levels of multiresolution data, processing takes place both within the same level and between levels. Here as well, \textit{processes} for splitting data into multiple \textit{spaces} must be inserted in order to distribute data among kernels of the same and more coarse-grained levels. Furthermore, where filtering is applied, an additional filtering \textit{process} needs to be inserted into the internal representation.
From the internal representation, we can infer the structural description for the streaming pipeline. Every \textit{process} vertex is translated to a kernel execution and every \textit{space} vertex marks the insertion of a unique Vivado HLS stream object for code generation. The resulting code embodies the structural description of the filter, which is written to a file serving as \textit{entry function}.
Kernels described within HIPAcc's language structures for multiresolution methods need to be generated separately for each level. This mainly implies reducing the resolution and consequently the sizes of the necessary line and window buffers. Furthermore, to match the latency of the lower (coarse-grained) levels, which process much less data, with the latency of the higher (fine-grained) levels, \texttt{pragmas} must be inserted instructing Vivado HLS to increase the target II according to the chosen resolution reduction: a resolution reduction of 2 in each dimension increases the II by a factor of 4. Therefore, even though a kernel in a multiresolution algorithm is specified only once in DSL code, a separate version must be generated for each resolution level.
The resulting AST is transformed back to source code by utilizing Clang’s \texttt{pretty printer} and written to separate files for each kernel. These files will be included by the \texttt{entry function}, which already embeds all executions in a structural description.
B. Parallelization and Design Optimization
A central element of Vivado HLS for achieving different design goals are synthesis directives, which specify how the input design is to be parallelized and optimized. Synthesis directives in Vivado HLS can either be inserted in the code directly as \texttt{pragmas}, or collected in a script file which is applied during synthesis. Apart from several optimization techniques included by HIPAcc to improve synthesis results, such as optimizing loop counter variables using assertions and keeping a unified iteration space throughout the designs (refer to [14]), multiresolution applications require setting synthesis directives to control the pipeline rates and the buffer sizes. To
\textsuperscript{1}http://clang.llvm.org
allow for changes during implementation, such as decreasing the throughput of the system to reduce resource requirements, the pipeline rate directives are assembled in a script file, so that the designer does not have to search through the code. In contrast, appropriate buffer sizes must be defined manually.
V. CASE STUDIES
We evaluate our methodology on two different hardware target platforms, an embedded General Purpose GPU (GPGPU) (ARM Mali T604) and a mid-range FPGA (Xilinx Zynq 7045). The evaluated designs are compared in terms of performance. The implementations are generated by HIPAcc for each target, stemming from the same code base. The generated code for HLS is synthesized using Vivado HLS 2014.1. The resulting RTL description in VHDL is synthesized, placed, and routed using Vivado 2014.1. Power values for the FPGA designs were obtained using Vivado power analysis with toggle information from post-place-and-route (PnR) netlist simulations using Mentor Graphics QuestaSim.
For the evaluation, we consider two typical multiresolution applications: first, an image pyramid based on the Gaussian pyramid, performing bilateral filtering on different resolution levels, and second, a multigrid algorithm for High Dynamic Range (HDR) compression that has been used as a HIPAcc showcase in previous work [10]. These applications demonstrate both the flexibility of HIPAcc's expressiveness and the possibility of target-independent code generation for non-trivial algorithms. Although these algorithms are well known, their implementation details may differ significantly, thus we briefly clarify the algorithm specifics used for our evaluation.
A. Multiresolution Image Processing
1) Gaussian Pyramid: Image pyramids, as depicted in Figure 2, are a fundamental concept in multi-rate image processing. A well-known example is the Gaussian pyramid, which is made up of low-pass filtered, downsampled images of the preceding stage of the pyramid, where the base stage is defined as the original image \( g_0(x) = f(x) \). Higher stages are defined by \( g_i(x) = \sum_{\xi} w(\xi)\, g_{i-1}(2x + \xi) \), where \( w(\xi) \) is a weighting function that is identical for all stages, termed the **generating kernel**. The weighting function closely resembles the Gaussian function, hence the name of the pyramid. Most practical approaches, however, stop before reaching the top of the pyramid. After processing the images at each stage, the output is reassembled by fusing together images of successive stages in a reconstruction step. For this, the smaller image is first increased in size to match the larger image, then the two images are added together. The two basic building blocks for the Gaussian pyramid are downsampling, which we refer to as **decompose**, as well as upsampling and image fusion, which will be referred to as **reconstruct**.
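The decompose/reconstruct pair can be illustrated in one dimension. This sketch also stores the residual of each decompose step (as in a Laplacian pyramid) so that upsampling and adding, i.e. reconstruct, restores the input exactly; the 3-tap kernel and helper names are illustrative simplifications:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Signal = std::vector<double>;

// [1 2 1]/4 low-pass filter with clamped borders (a stand-in for the
// 5-tap generating kernel commonly used in practice).
Signal blur(const Signal& g) {
    Signal out(g.size());
    for (std::size_t i = 0; i < g.size(); ++i) {
        double l = g[i > 0 ? i - 1 : 0];
        double r = g[i + 1 < g.size() ? i + 1 : g.size() - 1];
        out[i] = 0.25 * l + 0.5 * g[i] + 0.25 * r;
    }
    return out;
}

Signal downsample(const Signal& g) {               // keep every second sample
    Signal out((g.size() + 1) / 2);
    for (std::size_t i = 0; i < out.size(); ++i) out[i] = g[2 * i];
    return out;
}

Signal upsample(const Signal& gc, std::size_t n) { // linear interpolation
    Signal out(n);
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t j = i / 2;
        if (i % 2 == 0) out[i] = gc[j];
        else out[i] = 0.5 * (gc[j] + gc[j + 1 < gc.size() ? j + 1 : j]);
    }
    return out;
}

// decompose: next pyramid stage plus the residual needed for reconstruction
void decompose(const Signal& g, Signal& coarse, Signal& residual) {
    coarse = downsample(blur(g));
    Signal up = upsample(coarse, g.size());
    residual.resize(g.size());
    for (std::size_t i = 0; i < g.size(); ++i) residual[i] = g[i] - up[i];
}

// reconstruct: upsample the coarse stage and fuse it with the residual
Signal reconstruct(const Signal& coarse, const Signal& residual) {
    Signal up = upsample(coarse, residual.size());
    for (std::size_t i = 0; i < up.size(); ++i) up[i] += residual[i];
    return up;
}
```

Because reconstruct adds back exactly the residual that decompose removed, a decompose/reconstruct round trip reproduces the input signal bit for bit.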
2) Bilateral Filter: On the same level within the Gaussian pyramid, we apply the bilateral filter, a non-linear image filter for reducing noise while preserving edges at the same time [17]. It is based on a local operator containing two Gaussian filter kernels: \( G_{\sigma_s} \), taking spatial similarity into account, and \( G_{\sigma_r} \), considering range similarity (intensity).
\[
I'(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} G_{\sigma_s}(\lVert x - x_i \rVert)\, G_{\sigma_r}(|I(x) - I(x_i)|)\, I(x_i)
\]
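In one dimension the filter can be sketched as follows; the σ values and window radius used in the usage note are arbitrary illustrative choices:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

using Signal = std::vector<double>;

double gauss(double x, double sigma) {
    return std::exp(-(x * x) / (2.0 * sigma * sigma));
}

// 1D bilateral filter: the spatial kernel G_sigma_s weights nearby samples,
// while the range kernel G_sigma_r suppresses samples with very different
// intensity, which is what preserves edges.
Signal bilateral(const Signal& I, int radius, double sigma_s, double sigma_r) {
    Signal out(I.size());
    int n = static_cast<int>(I.size());
    for (int x = 0; x < n; ++x) {
        double wsum = 0.0, acc = 0.0;
        for (int xi = std::max(0, x - radius);
             xi <= std::min(n - 1, x + radius); ++xi) {
            double w = gauss(double(xi - x), sigma_s)
                     * gauss(I[xi] - I[x], sigma_r);
            wsum += w;
            acc += w * I[xi];
        }
        out[x] = acc / wsum; // wsum is W_p, the normalization term
    }
    return out;
}
```

On a constant signal the output equals the input (the weights normalize away), while a sharp step survives far better than under a pure Gaussian blur because samples across the edge receive near-zero range weight.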
B. Multiresolution in Scientific Computing
1) Smoother: In numerical analysis, a smoother is a method to damp high frequencies. In the sense of multigrid methods, smoothers are used to reduce high-frequency components of the error that arises when approximating the solution of PDEs. Commonly used smoothers include the Jacobi method as well as the Gauss-Seidel method. Both methods per se are iterative solvers of linear systems of equations \( Ax = b \) and work by calculating a new approximation \( x^{(m+1)} \) from the previous approximation \( x^{(m)} \). The major difference lies in the data dependencies: in case of the Jacobi smoother, the calculation of each component of \( x^{(m+1)} \) is independent of the other components, whereas for the Gauss-Seidel method, the calculation of components depends (in part) on components from the current iteration \( (m+1) \). By introducing a relaxation parameter \( \omega \) to improve convergence rates, the JOR (Jacobi over-relaxation) and SOR (successive over-relaxation) methods are obtained.
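The smoothing property can be demonstrated directly: a JOR sweep for the 1D Poisson model problem damps an oscillatory error much faster than a smooth one. The grid size and the choice ω = 2/3 below are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

// One weighted-Jacobi (JOR) sweep for -u'' = f on the unit interval with
// zero Dirichlet boundaries: A = (1/h^2) tridiag(-1, 2, -1).
Vec jor_sweep(const Vec& u, const Vec& f, double h, double omega) {
    Vec un(u.size());
    auto at = [&](long i) {
        return (i < 0 || i >= (long)u.size()) ? 0.0 : u[i]; // boundary = 0
    };
    for (std::size_t i = 0; i < u.size(); ++i)
        un[i] = (1.0 - omega) * u[i]
              + omega * 0.5 * (at((long)i - 1) + at((long)i + 1) + h * h * f[i]);
    return un;
}

double norm2(const Vec& v) {
    double s = 0.0;
    for (double x : v) s += x * x;
    return std::sqrt(s);
}
```

Starting from a purely alternating (highest-frequency) error with f = 0, a handful of sweeps shrink the error norm by an order of magnitude, whereas low-frequency error components would barely change; this is exactly why multigrid hands them to a coarser grid.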
Table I. PnR results of 6-stage multiresolution applications for a Xilinx Zynq 7045.
<table>
<thead>
<tr>
<th></th>
<th>II</th>
<th>LAT</th>
<th>SLICE</th>
<th>LUT</th>
<th>FF</th>
<th>BRAM</th>
<th>DSP</th>
<th>F [MHz]</th>
<th>P [W]</th>
</tr>
</thead>
<tbody>
<tr>
<td>BF 3x3</td>
<td>1</td>
<td>270 533</td>
<td>20 419</td>
<td>56 978</td>
<td>63 768</td>
<td>70</td>
<td>368</td>
<td>188.1</td>
<td></td>
</tr>
<tr>
<td>BF 5x5</td>
<td>1</td>
<td>296 924</td>
<td>41 737</td>
<td>113 656</td>
<td>126 214</td>
<td>76</td>
<td>825</td>
<td>141.5</td>
<td></td>
</tr>
<tr>
<td>Jacobi</td>
<td>1</td>
<td>270 455</td>
<td>18 866</td>
<td>54 413</td>
<td>68 943</td>
<td>372</td>
<td>259</td>
<td>154.3</td>
<td></td>
</tr>
</tbody>
</table>
2) Multigrid Methods: In scientific computing, multigrid methods are a popular choice for the solution of large systems of linear equations that may stem from the discretization of Partial Differential Equations (PDEs). One of the most popular PDEs is the Poisson equation, used to model diffusion processes. It is similar to the equation to be solved in HDR compression, which is explained in detail in [10].
The V-cycle, a simple scheme of a multigrid method, is shown in Algorithm 1. In the pre- and post-smoothing steps, high-frequency components of the error are damped by smoothers such as the Jacobi or the Gauss-Seidel methods. In the algorithm, \( \nu_1 \) and \( \nu_2 \) denote the number of smoothing steps that are applied. Low-frequency components are transformed into high-frequency components by restricting them to a coarser level, thus making them good targets for the smoother.
On the coarsest level, a direct solution of the remaining linear system of equations is possible due to its low number of unknowns. However, it is also possible to apply a number of smoother iterations. In the case of a single unknown, one smoother iteration corresponds to the direct solution.
```plaintext
if coarsest level then
solve \( A^h u^h = f^h \) exactly or by many smoothing iterations
else
\( \tilde{u}^{(k)}_h = \mathcal{S}^{\nu_1}_h (u^{(k)}_h, A^h, f^h) \) \{pre-smoothing\}
\( r^h = f^h - A^h \tilde{u}^{(k)}_h \) \{compute residual\}
\( r^H = R\, r^h \) \{restrict residual\}
\( e^H = V_H (0, A^H, r^H, \nu_1, \nu_2) \) \{recursion\}
\( e^h = P e^H \) \{interpolate error\}
\( \tilde{u}^{(k)}_h = \tilde{u}^{(k)}_h + e^h \) \{coarse grid correction\}
\( u^{(k+1)}_h = \mathcal{S}^{\nu_2}_h (\tilde{u}^{(k)}_h, A^h, f^h) \) \{post-smoothing\}
end
```
Algorithm 1. Recursive V-cycle: \( u^{(k+1)}_h = V_h (u^{(k)}_h, A^h, f^h, \nu_1, \nu_2) \).
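A self-contained numeric sketch of Algorithm 1 for the 1D Poisson problem follows (n = 2^k - 1 interior points, weighted Jacobi as smoother S, full weighting as R, linear interpolation as P; all sizes and parameter choices are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

// -u'' = f with zero Dirichlet boundaries: A^h = (1/h^2) tridiag(-1, 2, -1).
static double at(const Vec& v, long i) {
    return (i < 0 || i >= (long)v.size()) ? 0.0 : v[i];
}

Vec residual(const Vec& u, const Vec& f, double h) {
    Vec r(u.size());
    for (std::size_t i = 0; i < u.size(); ++i)
        r[i] = f[i] - (2.0 * u[i] - at(u, (long)i - 1) - at(u, (long)i + 1)) / (h * h);
    return r;
}

void smooth(Vec& u, const Vec& f, double h, int sweeps, double omega = 2.0 / 3.0) {
    for (int s = 0; s < sweeps; ++s) {
        Vec un(u.size());
        for (std::size_t i = 0; i < u.size(); ++i)
            un[i] = (1.0 - omega) * u[i]
                  + omega * 0.5 * (at(u, (long)i - 1) + at(u, (long)i + 1) + h * h * f[i]);
        u = un;
    }
}

Vec restrict_fw(const Vec& r) {              // full weighting: n -> (n-1)/2
    Vec rc((r.size() - 1) / 2);
    for (std::size_t i = 0; i < rc.size(); ++i)
        rc[i] = 0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2];
    return rc;
}

Vec prolong(const Vec& ec, std::size_t n) {  // linear interpolation to n points
    Vec e(n, 0.0);
    for (std::size_t i = 0; i < ec.size(); ++i) {
        e[2 * i + 1] += ec[i];
        e[2 * i]     += 0.5 * ec[i];
        e[2 * i + 2] += 0.5 * ec[i];
    }
    return e;
}

// Recursive V-cycle: pre-smooth, restrict the residual, recurse,
// interpolate and apply the coarse-grid correction, post-smooth.
void vcycle(Vec& u, const Vec& f, double h, int nu1, int nu2) {
    if (u.size() == 1) { u[0] = 0.5 * h * h * f[0]; return; }  // direct solve
    smooth(u, f, h, nu1);                                      // pre-smoothing
    Vec rc = restrict_fw(residual(u, f, h));                   // restrict residual
    Vec ec(rc.size(), 0.0);
    vcycle(ec, rc, 2.0 * h, nu1, nu2);                         // recursion
    Vec e = prolong(ec, u.size());                             // interpolate error
    for (std::size_t i = 0; i < u.size(); ++i) u[i] += e[i];   // correction
    smooth(u, f, h, nu2);                                      // post-smoothing
}
```

With ν1 = ν2 = 2, each cycle typically reduces the residual norm by roughly an order of magnitude, independent of the grid size, which is the key advantage of multigrid over plain smoothing iterations.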
Figure 4 shows a structural representation of the FPGA implementation of the multigrid HDR compression generated by HIPAcc. Here, restriction and prolongation are the building blocks for grid traversal. The smoother is implemented using the JOR method. Throughout the design, we use single-precision floating-point arithmetic. Hardware resource requirements as well as performance and power results for a design starting from a grid of size \( 5 \times 5 \) are given in Table I. Since the Jacobi smoothers have much lower computational complexity than the bilateral filter, it is possible to instantiate many of these kernels for smoothing without stressing logic resource requirements. However, as these are local operators, they require multiple image lines as input before being able to produce output data. Thus, the required buffer sizes to interconnect modules on different sides of the V-cycle are much larger than for the Gaussian pyramid, and having to store floating-point values severely impacts the amount of required BRAM resources.
Figure 4. Structural representation of the HDR compression design.
C. Comparison
As HIPAcc can generate code for several different hardware accelerators, we can use the same code base to compare the performance results of the FPGA implementations to an embedded GPGPU in terms of the maximum achievable frame rate in frames per second (FPS). As the results presented in Table II show, the high degree of parallelism of the FPGA can be fully exploited to achieve a very high frame rate in comparison to the ARM Mali. Performance results for the ARM Mali are mainly restrained by memory bandwidth, and therefore depend strictly on the number of memory accesses, defined by the chosen window size. This is clearly demonstrated by the achievable frame rates of the two bilateral filters, which differ approximately by the ratio of their window sizes, 25/9.
Table II. Achievable Frame Rates in Frames per Second (FPS) for Multiresolution Applications Processing 6 Resolution Levels starting at 512 \( \times \) 512.
<table>
<thead>
<tr>
<th></th>
<th>Mali T604</th>
<th>Zynq 7045</th>
</tr>
</thead>
<tbody>
<tr>
<td>BF 3x3</td>
<td>54.35</td>
<td>695</td>
</tr>
<tr>
<td>BF 5x5</td>
<td>19.73</td>
<td>476</td>
</tr>
<tr>
<td>Jacobi</td>
<td>37.11</td>
<td>570</td>
</tr>
</tbody>
</table>
VI. RELATED WORK
Numerous HLS approaches, both in academia and industry, have been developed over the past decades. Most of them start from a simplified imperative programming language, e.g., a subset of C, which is translated by stepwise refinement into a synthesizable Hardware Description Language (HDL) description—Calypto’s Catapult, Forte’s Cynthesizer, the Symphony C Compiler from Synopsys, or Vivado HLS from Xilinx are the most well-known commercial approaches. For a specific field of application, it is often a challenge to bring together
different areas of expertise, for instance, mathematics, algorithm engineering, and parallel code or hardware generation. One path to productivity is programming abstractions, such as libraries or DSLs. In the domain of image processing, HLS frameworks sometimes include specific libraries to provide elemental architecture constructs and filtering implementations, for example, the partial port of the OpenCV library [1] for Vivado HLS from Xilinx [18] or the smart buffer concept in ROCCC [4]. Extending such libraries can become quite a burden and lowers portability to new target architectures. In contrast, DSL-based approaches decouple the algorithm specification from the implementation and hardware details. They are much more flexible and can easily be extended to generate code for different platforms. PARO [5], for instance, is an HLS environment that provides domain-specific augmentations [15] for image processing (e.g., border treatment and reductions such as median filtering); it has been successfully used for adaptive multiresolution filtering in medical imaging [6]. Another recent approach that can emit parallel code for multi-core systems as well as generate hardware pipelines was proposed by Hegarty et al. [7]. However, neither of these two approaches offers language-level support for image pyramids. For stencil computations there exist several DSL-based approaches [3, 16, 8]; however, they consider hardware specifics only to a limited extent, and target only multi-core systems and GPUs but not FPGA accelerators. To the best of our knowledge, our approach is the first one that can generate performance-portable code for GPUs as well as HDL code for multiresolution applications.
VII. CONCLUSIONS
In this work, we have demonstrated how the DSL-based framework HIPAcc can be used to automatically generate highly optimized code for the HLS of multiresolution applications for implementation on FPGAs. In this way, the specification of the design requires significantly less programming effort from the developer and thus also leaves fewer chances for coding errors. The presented case studies from medical image processing and scientific computing demonstrate that the approach is applicable to a broad range of multiresolution problem scenarios. As HIPAcc also includes embedded GPGPUs as a hardware target [11], we have compared the proposed FPGA approach to a highly optimized GPU implementation generated from the same code base. The assessment exposes the benefits of using a heterogeneous framework for algorithm development and makes it easy to identify a suitable hardware target for efficient implementation.
ACKNOWLEDGMENT
We thank Richard Membarth for providing the tested multigrid implementation in HIPAcc. This work was partly supported by the German Research Foundation (DFG) as part of the Research Training Group 1773 "Heterogeneous Image Systems" and as part of the Priority Programme 1648 "Software for Exascale Computing" under contract TE 163/17-1.
REFERENCES
Chapter 3 – Scanning
3.1 Kinds of Tokens
Scanning is the process of identifying tokens from the raw text source code of a program. At first glance, scanning might seem trivial – after all, identifying words in a natural language is as simple as looking for spaces between letters. However, identifying tokens in source code requires the language designer to clarify many fine details, so that it is clear what is permitted and what is not.
Most languages will have tokens in these categories:
- **Keywords** are words in the language structure itself, like `while` or `class` or `true`. Keywords must be chosen carefully to reflect the natural structure of the language, without interfering with the likely names of variables and other identifiers.
- **Identifiers** are the names of variables, functions, classes, and other code elements chosen by the programmer. Typically, identifiers are arbitrary sequences of letters and possibly numbers. Some languages require identifiers to be marked with a sentinel (like the dollar sign in Perl) to clearly distinguish identifiers from keywords.
- **Numbers** could be formatted as integers, or floating point values, or fractions, or in alternate bases such as binary, octal or hexadecimal. Each format should be clearly distinguished, so that the programmer does not confuse one with the other.
- **Strings** are literal character sequences that must be clearly distinguished from keywords or identifiers. Strings are typically quoted with single or double quotes, but must also have some facility for containing quotations, newlines, and unprintable characters.
- **Comments** and **whitespace** are used to format a program to make it visually clear, and in some cases (like Python) are significant to the structure of a program.
When designing a new language, or designing a compiler for an existing language, the first job is to state precisely what characters are permitted in each type of token. Initially, this could be done informally by stating,
token_t scan_token( FILE *fp ) {
char c = fgetc(fp);
if(c=='*') {
return TOKEN_MULTIPLY;
} else if(c=='!') {
char d = fgetc(fp);
if(d=='=') {
return TOKEN_NOT_EQUAL;
} else {
ungetc(d,fp);
return TOKEN_NOT;
}
} else if(isalpha(c)) {
char d;
do {
d = fgetc(fp);
} while(isalnum(d));
ungetc(d,fp);
return TOKEN_IDENTIFIER;
} else if ( . . . ) {
. . .
}
}
Figure 3.1: A Simple Hand Made Scanner
for example, "An identifier consists of a letter followed by any number of letters and numerals," and then assigning a symbolic constant (TOKEN_IDENTIFIER) for that kind of token. As we will see, an informal approach is often ambiguous, and a more rigorous approach is needed.
3.2 A Hand-Made Scanner
Figure 3.1 shows how one might write a scanner by hand, using simple coding techniques. To keep things simple, we consider just a few tokens: * for multiplication, ! for logical-not, != for not-equal, and sequences of letters and numbers for identifiers.
The basic approach is to read one character at a time from the input stream (fgetc(fp)) and then classify it. Some single-character tokens are easy: if the scanner reads a * character, it immediately returns TOKEN_MULTIPLY, and the same would be true for addition, subtraction, and so forth.
However, some characters are part of multiple tokens. If the scanner encounters !, that could represent a logical-not operation by itself, or it could be the first character in the != sequence representing not-equal-to. Upon reading !, the scanner must immediately read the next character. If the next character is =, then it has matched the sequence != and returns TOKEN_NOT_EQUAL.
But, if the character following ! is something else, then the non-matching character needs to be put back on the input stream using ungetc, because it is not part of the current token. The scanner returns TOKEN_NOT and will consume the put-back character on the next call to scan_token.
In a similar way, once a letter has been identified by isalpha(c), then the scanner keeps reading letters or numbers, until a non-matching character is found. The non-matching character is put back, and the scanner returns TOKEN_IDENTIFIER.
(We will see this pattern come up in every stage of the compiler: an unexpected item doesn't match the current objective, so it must be put back for later. This is known more generally as backtracking.)
As you can see, a hand-made scanner is rather verbose. As more token types are added, the code can become quite convoluted, particularly if tokens share common sequences of characters. It can also be difficult for a developer to be certain that the scanner code corresponds to the desired definition of each token, which can result in unexpected behavior on complex inputs. That said, for a small language with a limited number of tokens, a hand-made scanner can be an appropriate solution.
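A compilable variant of the Figure 3.1 logic, reading from a std::istream so it can be driven from a string, looks as follows (the token names follow the figure; the istream-based interface is our adaptation):

```cpp
#include <cassert>
#include <cctype>
#include <cstdio>   // EOF
#include <istream>
#include <sstream>

enum token_t {
    TOKEN_EOF, TOKEN_ERROR, TOKEN_MULTIPLY,
    TOKEN_NOT, TOKEN_NOT_EQUAL, TOKEN_IDENTIFIER
};

token_t scan_token(std::istream& in) {
    int c = in.get();
    if (c == EOF) return TOKEN_EOF;
    if (c == '*') return TOKEN_MULTIPLY;
    if (c == '!') {
        int d = in.get();
        if (d == '=') return TOKEN_NOT_EQUAL;  // matched the 2-character token
        if (d != EOF) in.putback((char)d);     // backtrack: d starts next token
        return TOKEN_NOT;
    }
    if (std::isalpha(c)) {
        int d = in.get();
        while (d != EOF && std::isalnum(d)) d = in.get();
        if (d != EOF) in.putback((char)d);     // first non-matching char goes back
        return TOKEN_IDENTIFIER;
    }
    return TOKEN_ERROR;
}
```

Driving the scanner over the input "abc1!=*!" yields IDENTIFIER, NOT_EQUAL, MULTIPLY, NOT, then EOF, exercising both the one-character put-back path and the longest-match behavior for !=.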
For a complex language with a large number of tokens, we need a more formalized approach to defining and scanning tokens. A formal approach will allow us to have greater confidence that token definitions do not conflict and that the scanner is implemented correctly. Further, a formalized approach will allow us to make the scanner compact and high performance – surprisingly, the scanner itself can be the performance bottleneck in a compiler, since every single character must be individually considered.
The formal tools of regular expressions and finite automata allow us to state very precisely what may appear in a given token type. Then, automated tools can process these definitions, find errors or ambiguities, and produce compact, high performance code.
3.3 Regular Expressions
Regular expressions (REs) are a language for expressing patterns. They were first described in the 1950s by Stephen Kleene as an element of his foundational work in automata theory and computability. Today, REs are found in slightly different forms in programming languages (Perl), standard libraries (PCRE), text editors (vi), command-line tools (grep), and many other places. We can use regular expressions as a compact and formal way of specifying the tokens accepted by the scanner of a compiler, and then automatically translate those expressions into working code. While easily explained, REs can be a bit tricky to use, and require some practice in order to achieve the desired results.
DRAFT September 12, 2016
Let us define regular expressions precisely:
A **regular expression** $s$ is a string which denotes $L(s)$, a set of strings drawn from an alphabet $\Sigma$. $L(s)$ is known as the “language of $s$.”
$L(s)$ is defined inductively with the following base cases:
- If $a \in \Sigma$ then $a$ is a regular expression and $L(a) = \{a\}$.
- $\epsilon$ is a regular expression and $L(\epsilon)$ contains only the empty string.
Then, for any regular expressions $s$ and $t$:
1. $s \mid t$ is a RE such that $L(s \mid t) = L(s) \cup L(t)$.
2. $st$ is a RE such that $L(st) = L(s)$ followed by $L(t)$.
3. $s^*$ is a RE such that $L(s^*) = L(s)$ concatenated zero or more times.
Rule #3 is known as the **Kleene closure** and has the highest precedence. Rule #2 is known as **concatenation**. Rule #1 has the lowest precedence and is known as **alternation**. Parentheses can be added to adjust the order of operations in the usual way.
Here are a few examples using just the basic rules. (Note that a finite RE can indicate an infinite set.)
<table>
<thead>
<tr>
<th>Regular Expression $s$</th>
<th>Language $L(s)$</th>
</tr>
</thead>
<tbody>
<tr>
<td>hello</td>
<td>$\{ \text{hello} \}$</td>
</tr>
<tr>
<td>d(o|i)g</td>
<td>$\{ \text{dog, dig} \}$</td>
</tr>
<tr>
<td>moo*</td>
<td>$\{ \text{mo, moo, mooo, ...} \}$</td>
</tr>
<tr>
<td>(moo)*</td>
<td>$\{ \epsilon, \text{moo, moomoo, moomoomoo, ...} \}$</td>
</tr>
<tr>
<td>a(b|a)*a</td>
<td>$\{ \text{aa, aba, aaa, abba, aaba, ...} \}$</td>
</tr>
</tbody>
</table>
The syntax described on the previous page is entirely sufficient to write any regular expression. But it is also handy to have a few helper operations built on top of the basic syntax:
- $s?$ indicates that $s$ is optional.
- $s?$ can be written as $(s \mid \epsilon)$
- $s+$ indicates that $s$ is repeated one or more times.
- $s+$ can be written as $ss^*$
- $[a-z]$ indicates any character in that range.
- $[a-z]$ can be written as $(a|b|\ldots|z)$
- $[^x]$ indicates any character except one.
- $[^x]$ can be written as $\Sigma - x$
Regular expressions also obey several algebraic properties, which make it possible to re-arrange them as needed for efficiency or clarity:
<table>
<thead>
<tr>
<th>Property</th>
<th>Expression</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Associativity</strong></td>
<td>$a|(b|c) = (a|b)|c$</td>
</tr>
<tr>
<td><strong>Commutativity</strong></td>
<td>$a|b = b|a$</td>
</tr>
<tr>
<td><strong>Distribution</strong></td>
<td>$a(b|c) = ab|ac$</td>
</tr>
<tr>
<td><strong>Idempotency</strong></td>
<td>$a^{**} = a^*$</td>
</tr>
</tbody>
</table>
Using regular expressions, we can precisely state what is permitted in a given token. Suppose we have a hypothetical programming language with the following informal definitions and regular expressions. For each token type, we show examples of strings that match (and do not match) the regular expression.
<table>
<thead>
<tr>
<th>Informal definition</th>
<th>Regular expression</th>
<th>Matches strings</th>
<th>Does not match</th>
</tr>
</thead>
<tbody>
<tr>
<td>An identifier is a sequence of capital letters and numbers, but a number must not come first.</td>
<td>$[A-Z]+([A-Z]|[0-9])*$</td>
<td>PRINT, MODE5</td>
<td>hello, 5A</td>
</tr>
<tr>
<td>A number is a sequence of digits with an optional decimal point. For clarity, the decimal point must have digits on both left and right sides.</td>
<td>$[0-9]+(\.[0-9]+)?$</td>
<td>123, 3.14</td>
<td>.15, 30.</td>
</tr>
<tr>
<td>A comment is any text (except a right angle bracket) surrounded by angle brackets.</td>
<td>$<[^>]*>$</td>
<td>&lt;tricky part&gt;, &lt;look left&gt;</td>
<td>&lt;this is an &lt;illegal&gt; comment&gt;</td>
</tr>
</tbody>
</table>
### 3.4 Finite Automata
A finite automaton (FA) is an abstract machine that can carry out certain forms of computation. Graphically, an FA consists of a number of states (represented by numbered circles) and a number of edges (represented by labelled arrows) between those states. Each edge is labelled with one or more symbols drawn from an alphabet $\Sigma$.
The machine begins in a start state $S_0$. For each input symbol presented to the FA, it moves to the state indicated by the edge with the same label as the input symbol. Some states of the FA are known as **accepting states** and are indicated by a double circle. If the FA is in an accepting state after all input is consumed, then we say that the FA **accepts** the input. We say that the FA **rejects** the input string if it ends in a non-accepting state, or if there is no edge corresponding to the current input symbol.
Every RE can be written as an FA, and vice versa. For a simple regular expression, one can construct an FA by hand. For example, here is an FA for the keyword **for**:

Here is an FA for identifiers of the form `[a-z][a-z0-9]*`

And here is an FA for numbers of the form `([1-9][0-9]*)|0`

### 3.4.1 Deterministic Finite Automata
Each of these three examples is a **deterministic finite automaton** (DFA). A DFA is a special case of an FA where every state has no more than one outgoing edge for a given symbol. Put another way, a DFA has no ambiguity: for every combination of state and input symbol, there is exactly one choice of what to do next.
Because of this property, a DFA is very easy to implement in software or hardware. One integer $c$ is needed to keep track of the current state. The transitions between states are represented by a matrix $M[s, i]$ which encodes the next state, given the current state $s$ and input symbol $i$. (If the transition is not allowed, we mark it with $E$ to indicate an error.) For each symbol, we compute $c = M[c, i]$ until all the input is consumed, or an error state is reached.
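As a concrete sketch of this table-driven approach, the following C fragment encodes the number DFA `([1-9][0-9]*)|0` shown earlier. The state numbering and the use of `-1` for the error marker $E$ are our own conventions:

```c
/* Sketch: table-driven simulation of the number DFA ([1-9][0-9]*)|0.
 * State numbering and the error marker E are our own conventions. */
#define E -1  /* no transition allowed */

/* M[state][digit]: next state.  States: 0 = start, 1 and 2 accept. */
static const int M[3][10] = {
    { 2, 1, 1, 1, 1, 1, 1, 1, 1, 1 },  /* state 0: '0'->2, '1'..'9'->1 */
    { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 },  /* state 1: any digit stays     */
    { E, E, E, E, E, E, E, E, E, E },  /* state 2: no outgoing edges   */
};

static const int accepting[3] = { 0, 1, 1 };

/* Run the DFA: c = M[c][i] for each input symbol. */
int dfa_accepts(const char *s) {
    int c = 0;                                /* current state */
    for (const char *p = s; *p; p++) {
        if (*p < '0' || *p > '9') return 0;   /* symbol not in alphabet */
        c = M[c][*p - '0'];
        if (c == E) return 0;                 /* error state reached */
    }
    return accepting[c];
}
```

On `"0"` the machine moves to state 2 and accepts; on `"01"` it reaches state 2 and then hits $E$, rejecting.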
### 3.4.2 Nondeterministic Finite Automata
The alternative to a DFA is a **nondeterministic finite automaton (NFA)**. An NFA is a perfectly valid FA, but it has an ambiguity that makes it somewhat more difficult to work with.
Consider the regular expression $[a-z]*ing$, which represents all lowercase words ending in the suffix *ing*. It can be represented with the following automaton:
![Automaton Diagram]
Now consider how this automaton would consume the word *sing*. It could proceed in two different ways. One would be to stay in state 0 on $s$, then move to state 1 on $i$, state 2 on $n$, and state 3 on $g$. But the other, equally valid way would be to stay in state 0 the whole time, matching each letter to the $[a-z]$ transition. Both ways obey the transition rules, but one results in acceptance, while the other results in rejection.
The problem here is that state 0 allows for two different transitions on the symbol $i$. One is to stay in state 0 matching $[a-z]$ and the other is to move to state 1 matching $i$.
Moreover, there is no simple rule by which we can pick one path or another. If the input is *sing*, the right solution is to proceed immediately from state zero to state one on $i$. But if the input is *singing*, then we should stay in state zero for the first *ing* and proceed to state one for the second *ing*.
An NFA can also have an $\epsilon$ (epsilon) transition, which represents the empty string. This transition can be taken without consuming any input symbols at all. For example, we could represent the regular expression $a^*(ab|ac)$ with this NFA:
This particular NFA presents a variety of ambiguous choices. From state zero, it could consume \( a \) and stay in state zero. Or, it could take an \( \epsilon \) to state one or state four, and then consume an \( a \) either way.
There are two common ways to interpret this ambiguity:
- **The crystal ball interpretation** suggests that the NFA somehow “knows” what the best choice is, by some means external to the NFA itself. In the example above, the NFA would choose whether to proceed to state zero, one, or two before consuming the first character, and it would always make the right choice. Needless to say, this isn’t possible in a real implementation.
- **The many-worlds interpretation** suggests that the NFA exists in all allowable states *simultaneously*. When the input is complete, if any of those states are accepting states, then the NFA has accepted the input. This interpretation is more useful for constructing a working NFA, or converting it to a DFA.
Let us use the many-worlds interpretation on the example above. Suppose that the input string is $aaac$. Initially the NFA is in state zero. Without consuming any input, it could take an epsilon transition to state one or state four. So, we can consider its initial state to be all of those states simultaneously. Continuing on, the NFA would traverse these states until accepting the complete string $aaac$:
<table>
<thead>
<tr>
<th>States</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>0, 1, 4</td>
<td>consume a</td>
</tr>
<tr>
<td>0, 1, 2, 4, 5</td>
<td>consume a</td>
</tr>
<tr>
<td>0, 1, 2, 4, 5</td>
<td>consume a</td>
</tr>
<tr>
<td>0, 1, 2, 4, 5</td>
<td>consume c</td>
</tr>
<tr>
<td>6</td>
<td>accept</td>
</tr>
</tbody>
</table>
In principle, one can implement an NFA in software or hardware by simply keeping track of all of the possible states. But this is inefficient. In the worst case, we would need to evaluate all states for all characters on each input transition. A better approach is to convert the NFA into an equivalent DFA, as we show below.
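To make the many-worlds simulation concrete, here is a minimal C sketch that runs the $a^*(ab|ac)$ NFA by keeping the set of current states in a bitmask. The state numbering and transition table are reconstructed from the trace above, so treat them as one plausible encoding rather than the definitive one:

```c
/* Sketch of the "many-worlds" NFA simulation: the set of current
 * states is a bitmask, updated for each input symbol.  The tables
 * below encode our reconstruction of the a*(ab|ac) NFA from the
 * text (state numbering is ours; states 3 and 6 accept). */
#define NSTATES 7

/* eps[s] = bitmask of states reachable from s by one epsilon edge */
static const unsigned eps[NSTATES] = {
    (1u<<1)|(1u<<4), 0, 0, 0, 0, 0, 0   /* state 0 -> states 1 and 4 */
};

/* Follow epsilon edges until the state set stops growing. */
static unsigned eps_closure(unsigned set) {
    for (;;) {
        unsigned next = set;
        for (int s = 0; s < NSTATES; s++)
            if (set & (1u << s)) next |= eps[s];
        if (next == set) return set;
        set = next;
    }
}

/* step(s, c) = bitmask of states reachable from s on symbol c */
static unsigned step(int s, char c) {
    switch (s) {
    case 0: return c=='a' ? 1u<<0 : 0;   /* a* loop */
    case 1: return c=='a' ? 1u<<2 : 0;   /* ab branch */
    case 2: return c=='b' ? 1u<<3 : 0;
    case 4: return c=='a' ? 1u<<5 : 0;   /* ac branch */
    case 5: return c=='c' ? 1u<<6 : 0;
    default: return 0;
    }
}

int nfa_accepts(const char *input) {
    unsigned set = eps_closure(1u<<0);          /* start in state 0 */
    for (const char *p = input; *p; p++) {
        unsigned next = 0;
        for (int s = 0; s < NSTATES; s++)
            if (set & (1u << s)) next |= step(s, *p);
        set = eps_closure(next);
        if (set == 0) return 0;                 /* all worlds died */
    }
    return (set & ((1u<<3)|(1u<<6))) != 0;      /* any accepting state? */
}
```

Running this on `"aaac"` reproduces the trace above: the state set grows to $\{0,1,2,4,5\}$ on each $a$ and collapses to $\{6\}$ on the final $c$.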
### 3.5 Conversion Algorithms
Regular expressions, NFAs, and DFAs are all equally powerful: for every RE there is an equivalent FA, and vice versa. However, a DFA is by far the most straightforward of the three to implement in software. In this section, we will show how to convert an RE into an NFA, then an NFA into a DFA, and then how to minimize the size of the DFA.

### 3.5.1 Converting REs to NFAs
To convert a regular expression to a nondeterministic finite automaton, we can follow an algorithm given first by McNaughton and Yamada [?], and then by Ken Thompson [?].
We follow the same inductive definition of regular expression as given earlier. First, we define automata corresponding to the base cases of REs:
The NFA for any character $a$ is:

The NFA for an $\epsilon$ transition is:
Now, suppose that we have already constructed NFAs for the regular expressions $A$ and $B$, indicated below by rectangles. Both $A$ and $B$ have a single start state (on the left) and accepting state (on the right). If we write the concatenation of $A$ and $B$ as $AB$, then the corresponding NFA is simply $A$ and $B$ connected by an $\epsilon$ transition. The start state of $A$ becomes the start state of the combination, and the accepting state of $B$ becomes the accepting state of the combination:
The NFA for the concatenation $AB$ is:
In a similar fashion, the alternation of $A$ and $B$ written as $A \mid B$ can be expressed as two automata joined by common starting and accepting nodes, all connected by $\epsilon$ transitions:
The NFA for the alternation $A \mid B$ is:
Finally, the Kleene closure $A^*$ is constructed by taking the automaton for $A$, adding starting and accepting nodes, then adding $\epsilon$ transitions to allow zero or more repetitions:
The NFA for the Kleene closure $A^*$ is:
### 3.5.2 Converting NFAs to DFAs
As noted above, it is possible, but unwieldy, to execute an NFA directly. Instead, we can convert any NFA into an equivalent DFA using the technique of subset construction. The basic idea is to create a DFA such that each state in the DFA corresponds to multiple states in the NFA, according to the “many-worlds” interpretation.
Suppose that we begin with an NFA consisting of states $N$ and start state $N_0$. We wish to construct an equivalent DFA consisting of states $D$ and start state $D_0$. Each $D$ state will correspond to multiple $N$ states. First, we define a helper function known as the epsilon closure:
**Epsilon closure.**
\[ \epsilon\text{-closure}(n) \text{ is the set of NFA states reachable from NFA state } n \text{ by zero or more } \epsilon \text{ transitions.} \]
Now we define the subset construction algorithm. First, we create a start state $D_0$ corresponding to the $\epsilon\text{-closure}(N_0)$. Then, for each outgoing character $c$ from the states in $D_0$, we create a new state containing the epsilon closure of the states reachable by $c$. More precisely:
**Subset Construction Algorithm.**
Given an NFA with states $N$ and start state $N_0$, create an equivalent DFA with states $D$ and start state $D_0$.
Let $D_0 = \epsilon\text{-closure}(N_0)$.
Add $D_0$ to a list.
While items remain on the list:
- Let $d$ be the next DFA state removed from the list.
- For each character $c$ in $\Sigma$:
- Let $T$ contain all NFA states $N_k$ such that:
- $N_j \in d$ and $N_j \xrightarrow{c} N_k$
- Let $D_i = \epsilon\text{-closure}(T)$
- If $D_i$ is not already in $D$, add it to the end of the list.
**Figure 3.3:** Subset Construction Algorithm
Let’s work out the algorithm on the example NFA shown in Figure 3.4.
1. Compute $D_0$ which is the $\epsilon\text{-closure}(N_0)$. Since $N_1$ is reachable by $\epsilon$ from $N_0$, we can see that $D_0 = \{N_0, N_1\}$. Add $D_0$ to the work list.
2. Remove $D_0$ from the work list. The character $a$ is an outgoing transition from both $N_0$ and $N_1$. We can see that $N_0 \xrightarrow{a} N_2$ and $N_1 \xrightarrow{a} \{N_3, N_4\}$, so new state $D_1 = \{N_2, N_3, N_4\}$. Add $D_1$ to the work list.
3. Remove $D_1$ from the work list. Both $a$ and $b$ are outgoing transitions from $N_2, N_3, N_4$. We can see that $N_2 \xrightarrow{b} N_3$ and $N_3 \xrightarrow{b} N_4$, so we create a new state $D_2 = \{N_3, N_4\}$ and add it to the work list. Also, $N_3 \xrightarrow{a} N_4$, so we create a new state $D_3 = \{N_4\}$ and add that to the work list.
4. Remove $D_2$ from the work list. Both $a$ and $b$ are outgoing transitions from $N_3, N_4$ via $N_3 \xrightarrow{a,b} N_4$. We already have a state $D_3$ that contains exactly $\{N_4\}$, so there is no need to create a new state; we simply add $a, b$ transitions between $D_2$ and $D_3$.
5. Remove $D_3$ from the work list. It has no outgoing transitions, so there is nothing to do.
6. The work list is empty, so we are done.
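The work-list loop above can be sketched compactly in C when NFA state sets are represented as bitmasks. The fragment below hard-codes the edges of the example NFA from Figure 3.4 (as recovered from the worked example), so it illustrates the algorithm rather than being a general implementation:

```c
/* Sketch of subset construction over the worked example.  NFA state
 * sets are bitmasks over N0..N4; the edges are those used in the
 * trace above: N0 -e-> N1, N0 -a-> N2, N1 -a-> N3 and N4,
 * N2 -b-> N3, N3 -a-> N4, N3 -b-> N4. */
#define MAXD 16

static unsigned closure(unsigned set) {
    if (set & (1u<<0)) set |= 1u<<1;   /* only epsilon edge: N0 -> N1 */
    return set;
}

/* All NFA states reachable from 'set' on symbol c, then eps-closed. */
static unsigned move(unsigned set, char c) {
    unsigned t = 0;
    if (c == 'a') {
        if (set & (1u<<0)) t |= 1u<<2;            /* N0 -a-> N2    */
        if (set & (1u<<1)) t |= (1u<<3)|(1u<<4);  /* N1 -a-> N3,N4 */
        if (set & (1u<<3)) t |= 1u<<4;            /* N3 -a-> N4    */
    } else if (c == 'b') {
        if (set & (1u<<2)) t |= 1u<<3;            /* N2 -b-> N3    */
        if (set & (1u<<3)) t |= 1u<<4;            /* N3 -b-> N4    */
    }
    return closure(t);
}

/* Subset construction: fills dfa[] with one NFA-state bitmask per
 * DFA state and returns how many DFA states were discovered. */
int subset_construct(unsigned dfa[MAXD]) {
    int n = 0, head = 0;
    dfa[n++] = closure(1u<<0);            /* D0 = eps-closure({N0}) */
    while (head < n) {                    /* work list */
        unsigned d = dfa[head++];
        const char sigma[] = { 'a', 'b' };
        for (int k = 0; k < 2; k++) {
            unsigned t = move(d, sigma[k]);
            if (t == 0) continue;         /* no transition */
            int found = 0;
            for (int i = 0; i < n; i++)
                if (dfa[i] == t) found = 1;
            if (!found && n < MAXD) dfa[n++] = t;
        }
    }
    return n;
}
```

Run on this NFA, the function discovers exactly the four DFA states of the worked example: $\{N_0,N_1\}$, $\{N_2,N_3,N_4\}$, $\{N_3,N_4\}$, and $\{N_4\}$.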
### 3.5.3 Minimizing DFAs with Hopcroft’s Algorithm
The subset construction algorithm will definitely generate a valid DFA, but the DFA may be very large (especially if we began with a complex NFA generated from an RE). A large DFA will have a large transition matrix that will consume a lot of memory. If it doesn’t fit in L1 cache, the scanner could run very slowly. To address this problem, we can apply Hopcroft’s algorithm to shrink a DFA into a smaller (but equivalent) DFA.
The general approach of the algorithm is to optimistically group together all possibly-equivalent states $S$ into super-states $T$. Initially, we place all non-accepting $S$ states into super-state $T_0$ and accepting states
**DFA Minimization Algorithm.**
Given a DFA with states $S$, create an equivalent DFA with an equal or fewer number of states $T$.
First partition $S$ into $T$ such that:
- $T_0 = \text{non-accepting states of } S$.
- $T_1 = \text{accepting states of } S$.
Repeat:
- $\forall T_i \in T$:
- $\forall c \in \Sigma$:
- if $T_i \xrightarrow{c} \{ \text{more than one } T \text{ state } \}$,
- then split $T_i$ into multiple $T$ states
such that $c$ has the same action in each.
Until no more states are split.
Figure 3.5: Hopcroft’s DFA Minimization Algorithm
into super-state $T_1$. Then, we examine the outgoing edges in each state $s \in T_i$. If a given character $c$ has edges that begin in $T_i$ and end in different super-states, then we consider the super-state to be inconsistent with respect to $c$. (Consider an impermissible transition as if it were a transition to $T_E$, a super-state for errors.) The super-state must then be split into multiple states that are consistent with respect to $c$. Repeat this process for all super-states and all characters $c \in \Sigma$ until no more splits are required.
### 3.6 Using a Scanner Generator
Because a regular expression precisely describes all the allowable forms of a token, we can use a program to automatically transform a set of regular expressions into code for a scanner. Such a program is known as a scanner generator. The program Lex, developed at AT&T, was one of the earliest examples of a scanner generator. Flex is the GNU replacement for Lex and is widely used in Unix-like operating systems today to generate scanners implemented in C or C++.
To use Flex, we write a specification of the scanner that is a mixture of regular expressions, fragments of C code, and some specialized directives. The Flex program itself consumes the specification and produces regular C code that can then be compiled in the normal way.
Here is the overall structure of a Flex file:
The first section consists of arbitrary C code that will be placed at the beginning of `scanner.c`, like include files, type definitions, and similar things. Typically, this is used to include a file that contains the symbolic constants for tokens.
The second section states character classes, which are a symbolic shorthand for commonly used regular expressions. For example, you might declare \texttt{DIGIT [0-9]}. This class can be referred to later as \texttt{\{DIGIT\}}.
The third section is the most important part. It states a regular expression for each type of token that you wish to match, followed by a fragment of C code that will be executed whenever the expression is matched. In the simplest case, this code returns the type of the token, but it can also be used to extract token values, display errors, or anything else appropriate.
The fourth section is arbitrary C code that will go at the end of the scanner, typically for additional helper functions. A peculiar requirement of Flex is that we must define a function \texttt{yywrap} which returns a non-zero value to indicate that the input is complete at the end of the file.
The regular expression language accepted by Flex is very similar to that of formal regular expressions discussed above. The main difference is that characters that have special meaning within a regular expression (like parentheses, square brackets, and asterisks) must be escaped with a backslash or surrounded with double quotes. Also, a period (\texttt{.}) can be used to match any character at all, which is helpful for catching error conditions.
Figure 3.6 shows a simple but complete example to get you started. This specification describes just a few tokens: a single character addition (which must be escaped with a backslash), the \texttt{while} keyword, an identifier consisting of one or more letters, and a number consisting of one or more digits. As is typical in a scanner, any other type of character is an error, and returns an explicit token type for that purpose.
Flex generates the scanner code, but not a complete program, so you must write a \texttt{main} function to go with it. Figure 3.7 shows a simple driver program that uses this scanner. First, the main program must declare as \texttt{extern} the symbols it expects to use in the generated scanner code: \texttt{yyin} is the file from which text will be read, \texttt{yylex} is the function that implements the scanner, and the array \texttt{yytext} contains the actual text of each token discovered.
Finally, we must have a consistent definition of the token types across the parts of the program, so into \texttt{token.h} we put an enumeration describing the new type \texttt{token_t}. This file is included in both \texttt{scanner.flex} and \texttt{main.c}.
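The excerpt does not show \texttt{token.h} itself, but its contents can be inferred from the rules in Figure 3.6 and the driver in Figure 3.7. One plausible reconstruction follows; \texttt{TOKEN\_EOF} is given the value zero because \texttt{yylex} returns zero at end of input (once \texttt{yywrap} reports that the input is complete):

```c
/* A possible token.h matching the tokens used in Figures 3.6 and 3.7.
 * The exact contents are not shown in this excerpt; this is a
 * plausible reconstruction.  yylex() returns 0 at end of input,
 * so TOKEN_EOF must be zero. */
typedef enum {
    TOKEN_EOF = 0,
    TOKEN_ADD,
    TOKEN_WHILE,
    TOKEN_IDENT,
    TOKEN_NUMBER,
    TOKEN_ERROR
} token_t;
```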
Figure 3.9 shows how all the pieces come together. \texttt{scanner.flex} is
### Contents of File: scanner.flex
```flex
%{
#include "token.h"
%}
DIGIT  [0-9]
LETTER [a-zA-Z]
%%
(" "|\t|\n)   /* skip whitespace */
\+            { return TOKEN_ADD; }
while         { return TOKEN_WHILE; }
{LETTER}+     { return TOKEN_IDENT; }
{DIGIT}+      { return TOKEN_NUMBER; }
.             { return TOKEN_ERROR; }
%%
int yywrap() { return 1; }
```
Figure 3.6: Example Flex Specification
### Contents of File: main.c
```c
#include "token.h"
#include <stdio.h>

extern FILE *yyin;
extern int yylex();
extern char *yytext;

int main() {
    yyin = fopen("program.c","r");
    if(!yyin) {
        printf("could not open program.c\n");
        return 1;
    }
    while(1) {
        token_t t = yylex();
        if(t==TOKEN_EOF) break;
        printf("token: %d text: %s\n",t,yytext);
    }
    return 0;
}
```
Figure 3.7: Example Main Program
### 3.7 Practical Considerations
**Handling keywords.** In many languages, keywords (such as `while` or `if`) would otherwise match the definitions of identifiers, unless specially handled. There are several solutions to this problem. One is to enter a regular expression for every single keyword into the Flex specification. (These must precede the definition of identifiers, since Flex will accept the first expression that matches.) Another is to maintain a single regular expression that matches all identifiers and keywords. The action associated with that rule can compare the token text with a separate list of keywords and return the appropriate type. Yet another approach is to treat all keywords and identifiers as a single token type, and allow the problem to be sorted out by the parser. (This is necessary in languages like PL/1, where
identifiers can have the same names as keywords, and are distinguished by context.)
**Tracking source locations.** In later stages of the compiler, it is useful for the parser or typechecker to know exactly what line and column number a token was located at, usually to print out a helpful error message. (“Undefined symbol spider at line 153.”) This is easily done by having the scanner match newline characters, and increase the line count (but not return a token) each time one is found.
**Cleaning tokens.** Strings, characters, and similar token types need to be cleaned up after they are matched. For example, "hello\n" needs to have its quotes removed and the backslash-n sequence converted to a literal newline character. Internally, the compiler only cares about the actual contents of the string. Typically, this is accomplished by writing a function `string_clean` in the postamble of the Flex specification. The function is invoked by the matching rule before returning the desired token type.
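As a sketch of what such a cleaning function might look like (the name \texttt{string\_clean} comes from the text above, but the exact escape set handled here is our assumption), consider:

```c
/* Sketch of the string_clean helper described above: strip the
 * surrounding quotes and translate backslash escapes in place.
 * The escape set handled (\n, \t, and pass-through for others)
 * is an assumption; the text only says such a function is written
 * in the Flex postamble. */
#include <string.h>

void string_clean(char *s) {
    size_t len = strlen(s);
    /* drop surrounding double quotes, if present */
    if (len >= 2 && s[0] == '"' && s[len-1] == '"') {
        memmove(s, s+1, len-2);
        s[len-2] = '\0';
    }
    /* translate \n and \t escapes; copy other escaped chars verbatim */
    char *r = s, *w = s;
    while (*r) {
        if (r[0] == '\\' && r[1]) {
            if      (r[1] == 'n') *w++ = '\n';
            else if (r[1] == 't') *w++ = '\t';
            else                  *w++ = r[1];
            r += 2;
        } else {
            *w++ = *r++;
        }
    }
    *w = '\0';
}
```

After cleaning, the token text `"hello\n"` becomes the five letters of *hello* followed by a literal newline, which is what the rest of the compiler cares about.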
**Constraining tokens.** Although regular expressions can match tokens of arbitrary length, it does not follow that a compiler must be prepared to accept them. There would be little point to accepting a 1000-letter identifier, or an integer larger than the machine’s word size. The typical approach is to set the maximum token length (`YYLMAX` in flex) to a very large value, then examine the token to see if it exceeds a logical limit in the action that matches the token. This allows you to emit an error message that describes the offending token as needed.
**Error Handling.** The easiest approach to handling errors or invalid input is simply to print a message and exit the program. However, this is unhelpful to users of your compiler – if there are multiple errors, it’s (usually) better to see them all at once. A good approach is to match the minimum amount of invalid text (using the dot rule) and return an explicit token type indicating an error. The code that invokes the scanner can then emit a suitable message, and then ask for the next token.
|
{"Source-Url": "http://www3.nd.edu/~dthain/courses/cse40243/fall2016/chapter3.pdf", "len_cl100k_base": 7555, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 37246, "total-output-tokens": 8217, "length": "2e12", "weborganizer": {"__label__adult": 0.00035309791564941406, "__label__art_design": 0.0003151893615722656, "__label__crime_law": 0.0002294778823852539, "__label__education_jobs": 0.00028967857360839844, "__label__entertainment": 5.823373794555664e-05, "__label__fashion_beauty": 0.0001304149627685547, "__label__finance_business": 0.00010329484939575197, "__label__food_dining": 0.0003483295440673828, "__label__games": 0.0005097389221191406, "__label__hardware": 0.0010156631469726562, "__label__health": 0.0002677440643310547, "__label__history": 0.00015115737915039062, "__label__home_hobbies": 8.684396743774414e-05, "__label__industrial": 0.00028705596923828125, "__label__literature": 0.00021970272064208984, "__label__politics": 0.00016951560974121094, "__label__religion": 0.00040793418884277344, "__label__science_tech": 0.00547027587890625, "__label__social_life": 5.775690078735352e-05, "__label__software": 0.0033969879150390625, "__label__software_dev": 0.9853515625, "__label__sports_fitness": 0.0002548694610595703, "__label__transportation": 0.00039458274841308594, "__label__travel": 0.00018024444580078125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30575, 0.01372]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30575, 0.71901]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30575, 0.88195]], "google_gemma-3-12b-it_contains_pii": [[0, 2010, false], [2010, 3709, null], [3709, 6548, null], [6548, 8544, null], [8544, 11248, null], [11248, 12665, null], [12665, 14429, null], [14429, 16383, null], [16383, 17778, null], [17778, 18250, null], [18250, 20463, null], [20463, 
21965, null], [21965, 24303, null], [24303, 26830, null], [26830, 27663, null], [27663, 28508, null], [28508, 30575, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2010, true], [2010, 3709, null], [3709, 6548, null], [6548, 8544, null], [8544, 11248, null], [11248, 12665, null], [12665, 14429, null], [14429, 16383, null], [16383, 17778, null], [17778, 18250, null], [18250, 20463, null], [20463, 21965, null], [21965, 24303, null], [24303, 26830, null], [26830, 27663, null], [27663, 28508, null], [28508, 30575, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30575, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30575, null]], "pdf_page_numbers": [[0, 2010, 1], [2010, 3709, 2], [3709, 6548, 3], [6548, 8544, 4], [8544, 11248, 5], [11248, 12665, 6], [12665, 14429, 7], [14429, 16383, 8], [16383, 17778, 9], [17778, 18250, 10], [18250, 20463, 11], [20463, 21965, 12], [21965, 24303, 13], [24303, 26830, 14], [26830, 27663, 15], [27663, 28508, 16], [28508, 30575, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30575, 0.09712]]}
|
olmocr_science_pdfs
|
2024-12-10
|
2024-12-10
|
41d273757b9ba8636b768da632bce1a0e825820a
|
[REMOVED]
|
{"Source-Url": "http://dl.ifip.org/db/conf/ifip2/ceeset2007/Ambler07.pdf", "len_cl100k_base": 5332, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 25352, "total-output-tokens": 6653, "length": "2e12", "weborganizer": {"__label__adult": 0.0004189014434814453, "__label__art_design": 0.00028014183044433594, "__label__crime_law": 0.0003254413604736328, "__label__education_jobs": 0.0012426376342773438, "__label__entertainment": 4.273653030395508e-05, "__label__fashion_beauty": 0.00016582012176513672, "__label__finance_business": 0.0007505416870117188, "__label__food_dining": 0.00045418739318847656, "__label__games": 0.0003523826599121094, "__label__hardware": 0.00044655799865722656, "__label__health": 0.0003888607025146485, "__label__history": 0.00016891956329345703, "__label__home_hobbies": 7.790327072143555e-05, "__label__industrial": 0.00029659271240234375, "__label__literature": 0.0001766681671142578, "__label__politics": 0.0002796649932861328, "__label__religion": 0.0004072189331054687, "__label__science_tech": 0.001110076904296875, "__label__social_life": 9.012222290039062e-05, "__label__software": 0.0034008026123046875, "__label__software_dev": 0.98828125, "__label__sports_fitness": 0.0003552436828613281, "__label__transportation": 0.0003821849822998047, "__label__travel": 0.00023114681243896484}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30549, 0.01357]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30549, 0.22198]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30549, 0.94664]], "google_gemma-3-12b-it_contains_pii": [[0, 2365, false], [2365, 5690, null], [5690, 7859, null], [7859, 9129, null], [9129, 10328, null], [10328, 13255, null], [13255, 14895, null], [14895, 18193, null], [18193, 21263, null], [21263, 24231, null], [24231, 27144, null], [27144, 30026, null], 
[30026, 30549, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2365, true], [2365, 5690, null], [5690, 7859, null], [7859, 9129, null], [9129, 10328, null], [10328, 13255, null], [13255, 14895, null], [14895, 18193, null], [18193, 21263, null], [21263, 24231, null], [24231, 27144, null], [27144, 30026, null], [30026, 30549, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30549, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30549, null]], "pdf_page_numbers": [[0, 2365, 1], [2365, 5690, 2], [5690, 7859, 3], [7859, 9129, 4], [9129, 10328, 5], [10328, 13255, 6], [13255, 14895, 7], [14895, 18193, 8], [18193, 21263, 9], [21263, 24231, 10], [24231, 27144, 11], [27144, 30026, 12], [30026, 30549, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30549, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
124898d4ab5bdead0cccb9fa29b896dfee8601c3
|
Energy-aware parallelization flow and toolset for C code
Original
Energy-aware parallelization flow and toolset for C code / Lazarescu, MIHAI TEODOR; Albert, Cohen; Adrien, Guatto; Nhat Minn, Lê; Lavagno, Luciano; Antoniu, Pop; Manuel, Prieto; Andrei, Terechko; Alexandru, Sutii. - ELETTRONICO. - (2014), pp. 79-88. (Intervento presentato al convegno 17th International Workshop on Software and Compilers for Embedded Systems - SCOPES ’14 tenutosi a New York (USA) nel 2014) [10.1145/2609248.2609264].
Availability:
This version is available at: 11583/2565955 since: 2020-10-22T14:39:14Z
Publisher:
ACM
Published
DOI:10.1145/2609248.2609264
Terms of use:
This article is made available under terms and conditions as specified in the corresponding bibliographic description in the repository
Publisher copyright
ACM postprint/Author's Accepted Manuscript, con Copyr. autore
(Article begins on next page)
Energy-aware parallelization toolset and flow for C code
The PHARAON FP7 project
Mihai T. Lazarescu
Politecnico di Torino, Turin, Italy
mihai.lazarescu@polito.it
Albert Cohen
INRIA and Ecole Normale Supérieure, Paris, France
albert.cohen@inria.fr
Luciano Lavagno
Politecnico di Torino, Turin, Italy
luciano.lavagno@polito.it
Antoniu Pop
INRIA and Ecole Normale Supérieure, Paris, France
antoniu.pop@inria.fr
Manuel Prieto
Tedesys Global S.L., Santander, Spain
mprieto@tedesys.com
Andrei Terechko
Vector Fabrics, Eindhoven, The Netherlands
andrei@vectorfabrics.com
ABSTRACT
Multicore architectures are increasingly used in embedded systems to achieve higher throughput with lower energy consumption. This trend accentuated the need to convert existing sequential code to effectively exploit the resources of these architectures.
We present the work-in-progress of the EU FP7 PHARAON project that aims to develop a complete set of techniques and tools to guide and assist the designer in the development process for heterogeneous parallel architectures. We focus on the legacy C code parallelization flow that includes a performance estimation tool, a parallelization tool, and a streaming-oriented parallelization framework. We demonstrate the effectiveness of the use of the toolset on a use case where we measure the quality and time for parallelization for inexperienced users and the parallelization flow and performance results for the parallelization of a practical example of a stereo vision application.
Categories and Subject Descriptors
D.1 [Programming Techniques]: Concurrent Programming—Parallel programming
General Terms
execution profiling, data dependency analysis, program parallelization, energy estimation
1. INTRODUCTION
Market evolution over recent years shows a significant increase in the use of multicore architectures in new projects [4]. Over the last decade, processors and systems have refocused from accelerating the execution of a single thread to increasing the overall throughput by means of multi-processor architectures. Multicore architectures, traditionally used in specific domains with very high processing needs, have also gradually permeated many embedded systems, increasing the need to parallelize massive amounts of legacy sequential code [3, 9]. However, even when parallelism is taken into account from the start of a project, writing programs for efficient execution on parallel architectures is still considered a challenging task [10, 2].
The revolution in hardware architectures challenges the software development techniques to efficiently exploit the potential of the multicore architectures, including the performance-power trade-offs that are often important for portable embedded systems. Automated software parallelization has been extensively explored especially at the statement, basic block and loop levels, which are appropriate for VLIW and vector processors [20, 8]. By contrast, the tools for exploring the parallelization opportunities at the task level, which are best suited for modern multi-core processors, were less explored, with some notable exceptions [14, 6]. However, most of the latter techniques are so far restricted to specific types of loops and data access patterns.
The PHARAON project aims to enable the development of complex systems with high processing needs and low-power requirements. Figure 1 shows the techniques and tools developed to this end.
The first set addresses the design flow, starting from UML/MARTE specifications up to the implementation on a multicore platform. It assists the design space exploration for the best software architecture and for parallelization opportunities. The second set addresses the run-time adaptation of platform performance (e.g., frequency and voltage) to minimize the energy consumption.
1.1 Evolution beyond the state of the art
Although long studied, compilers can generally extract only a limited level of parallelism unless they are used for special applications and, often, for specific coding styles [14, 13]. Efforts like MORPHEUS [24], CRISP [1] and MEGHA [22] are usually tailored to the target architecture they produce parallel code for. Dominant industry players have also proposed several compilation and debugging tools. For example, OpenCL and CUDA extend the C language to generate efficient code for GPUs. In this project, OpenStream extended the OpenMP standard, better suited for CPUs, with streaming-oriented constructs.
UML is a common modeling language for high-level system design [19]. Previous works propose semi-automatic generation of HW/SW infrastructures [5] and a flow targeting dynamically reconfigurable SoCs [12]. Low-power run-time management and scheduling have also been proposed, the most recent using dynamic voltage and frequency scaling [7, 11]. Design-time approaches use slow integer linear programming [23] and cannot be used at run time. PHARAON proposes a complete framework addressing heterogeneous multi-processor platforms for power consumption optimization.
Figure 1: PHARAON global approach and tools interactions
1.2 System design flow
Figure 2 shows the flow that extends from the high-level UML specifications to the programming of the target platform. The UML specification allows handling homogeneous, heterogeneous and distributed systems and exploring the parallelization between components through automatic code generation. This code is used to perform performance and parallelization analysis, code synthesis and power management to optimize the use of target platform resources.
The first stage uses the Pareon performance simulator to evaluate the timing and energy of the C code of the UML components. The second stage is driven by the interactive parallelization tool ParTools, which is used to discover parallelization opportunities. In the third stage, the optimized code is either simulated again or implemented and analyzed on the physical platform, in order to assess the parallelization quality and to extract information for the run-time optimizations. The latter are used by the reconfiguration manager and the low-power scheduler, which are deployed on the physical platform to provide the required application performance with reduced power consumption.
2. PHARAON WORKFLOW FOR PARALLELIZATION
Parallelizing an existing sequential implementation guided only by a classical source code profiler is not trivial without prior knowledge of the software. For instance, the typical gprof profile shown in Listing 1
Listing 1: Typical execution profile (gprof) output
```
  %   cumulative    self
 time    seconds  seconds      calls  name
16.61       2.79     2.79  788215425  Dot
13.78       5.12     2.32  141631877  IntersectQuad
 8.26       6.50     1.39  281277610  IntersectObject
 8.02       7.86     1.35  139645733  IntersectSphere
 7.90       9.19     1.33   69361053  NormalizeVec3
```
clearly shows the most computation-intensive parts of the program, but provides no information on data flows and dependencies, which are well known to be among the most important parallelization inhibitors. More advanced tools provide more details, but still do not cover the data dependencies within the whole program.
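For instance, the two loops below can show near-identical flat profiles, yet only the first is safe to parallelize; a profiler alone cannot tell them apart (a hypothetical illustration, not taken from the profiled program above):

```c
/* Independent iterations: no data flows between them, so the loop
 * can be sliced across threads. */
void scale(int *a, int n) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * 2;        /* each iteration touches only a[i] */
}

/* Loop-carried dependency: iteration i reads the value written by
 * iteration i-1, so the iterations cannot run concurrently as-is. */
void prefix_sum(int *a, int n) {
    for (int i = 1; i < n; i++)
        a[i] = a[i] + a[i - 1]; /* read-after-write across iterations */
}
```

Distinguishing these two cases requires exactly the program-wide data dependency information that a flat profile lacks.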
For these reasons, the PHARAON workflow for parallelization collects program-wide data dependencies at run time and presents them for analysis in an abstracted and intuitive way. The toolset flow makes no specific assumptions on the developer's skills, parallelization method, syntax or parallelization framework:
- run-time collection of the execution profile and data dependencies of the serial program;
- performance analysis, either simulated or on the (embedded) target system, to collect energy consumption estimates and execution histograms;
- display of the execution profile, data dependencies, and performance estimations in an intuitive and interactive graphical form;
- manual analysis of the data and selection of the most promising parallelization opportunities and style;
- parallelization, test and debug of the parallel code, and measurement of the performance enhancements;
- code refactoring to improve the parallelization performance of the algorithms.
The steps above can be iterated as needed until satisfactory results are achieved within the effort allocated to the project. In the following sections, the PHARAON toolset components that support the flow are presented in more detail. Then, the effectiveness of the toolset is demonstrated, both in terms of the simplification of the parallelization task for low-skill users and in terms of the acceleration obtained on a stereo vision application of practical interest.
2.1 ParTools parallelization toolset
ParTools\textsuperscript{2} [16, 15] is a free software project designed to support developers of various skill levels in parallelizing legacy sequential C code that can include complex control structures, pointer operations, and dynamic memory allocation. ParTools was designed to facilitate the discovery of both task and data parallelization opportunities and can be used with any parallelization technique.
The toolset flow, shown in Figure 3, is divided into four stages: (I) source instrumentation, (II) run-time collection and compaction of the execution trace profile and data dependencies, (III) graphical visualization and analysis of the execution data, and (IV) source code parallelization. Its operation is controlled from the Code::Blocks IDE.
In stage I, an automatic annotator instruments the sequential source for run-time data dependency collection. In stage II, the data generated by the instrumentation are collected and compacted at run time by a library, and saved in the project at the end of the execution. In stage III, they are graphically displayed as a data dependency graph (DDG), with the nodes representing program control (e.g., statements, loops, function calls) and the edges representing the data dependencies. All elements are analyzed based on their execution call stacks, since their execution parameters can change with the context. The nodes for complex program structures (e.g., loops, function calls) fold all the execution call stacks rooted there. These can be unfolded progressively, as needed, to discover good parallelization candidates, as we will show later. Stage IV supports manual program parallelization based on the above exploration. The source code in the IDE is linked with the elements in the graph viewer. The graph viewer also provides several methods to temporarily hide graph sections that are not relevant for the parallelization, such as re-rooting the graph at any given node.
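The run-time collection of stage II can be pictured, in heavily simplified form, as a shadow table that remembers the last statement to write each address; each instrumented read then yields a read-after-write dependency edge. The names and structure below are illustrative only, not the actual ParTools library:

```c
#include <stddef.h>

#define SHADOW_SLOTS 1024

/* Last-writer table: maps an address to the ID of the statement that
 * last wrote it. A real tracer would use a proper hash map and handle
 * collisions; this sketch simply overwrites the slot on collision. */
static struct { const void *addr; int stmt; } shadow[SHADOW_SLOTS];

static size_t slot_of(const void *addr) {
    return ((size_t)addr >> 3) % SHADOW_SLOTS;
}

/* Called by the instrumentation after statement `stmt` writes `addr`. */
void rec_write(const void *addr, int stmt) {
    size_t s = slot_of(addr);
    shadow[s].addr = addr;
    shadow[s].stmt = stmt;
}

/* Called by the instrumentation when `addr` is read: returns the last
 * writer's statement ID (the source of a RAW dependency edge), or -1
 * if no write to this address has been recorded. */
int rec_read(const void *addr) {
    size_t s = slot_of(addr);
    return shadow[s].addr == addr ? shadow[s].stmt : -1;
}
```

Each (writer statement, reader statement) pair returned this way becomes one edge of the DDG, annotated with the call stack active at the time.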
ParTools analysis can complement automatic parallelization tools (e.g., that of Compaan Design\textsuperscript{3}) which can significantly benefit from the toolset-driven program-wide data dependency analysis. ParTools can show: where the compute intensive procedures are; if there are any data dependencies besides those through procedure arguments; whether the procedure inputs and outputs are truly unaliased; whether the procedure inputs are truly read-only and outputs are truly write-only. Also, ParTools can import data from external analysis tools that complement its analysis capabilities. For example, energy analysis and execution histograms from Pareon can be imported and displayed on the graph to provide the developers with a more comprehensive view on program execution to make better parallelization decisions.
The graphical visualization opens with all execution details abstracted under the call to \texttt{main}(), as shown in Figure 4. The fold label shows the fold type, its estimated execution load and energy consumption (imported from the Pareon analyzer), the source file name and line, and the function name followed by its unique call stack ID. The folds can be unfolded one level at a time to help the developer uncover
\textsuperscript{2}ParTools project: http://sf.net/projects/partools/
\textsuperscript{3}Compaan Design BV http://www.compaandesign.com/
Figure 4: ParTools initial view folds all execution and dependencies under the main() function.
Figure 5: Analysis of a stereo vision application. The two loop folds (square shape) with stronger colourization include 53% and 18% of program execution and may be good parallelization candidates with no strong data dependencies between them.
Figure 7: Data dependencies and OpenMP pragma template inserted as comments in the source code.
To further help the developer, the toolset can insert comments in the source code with the data dependencies and an OpenMP pragma template that can be adapted for the parallelization of the fold node of interest (see Figure 7).
2.2 Performance analysis in Pareon
2.2.1 Performance analysis toolflow
The Pareon tool-suite features leading-edge analysis and interactive parallelization capabilities. In the PHARAON project, it provides the analysis of the performance of C and C++ applications on the target hardware platforms (currently an ARM Cortex A9 and an Intel Core i5), including energy consumption estimation. These data are then imported by the parallelization tool (ParTools) to provide the developer with a comprehensive view of the run-time behavior of the program, helping them make effective parallelization decisions. The energy estimates are also used by the low-power scheduler to select the most power-efficient operating mode of the system.
Pareon can also analyze parallel C and C++ programs that use POSIX threads or OpenMP pragmas (the latter are under test), which makes it possible to check the effects of parallelization decisions and thus close the loop of the PHARAON toolset.
The internal Pareon flow for performance analysis is shown in Figure 8. Pareon offers both command line interface (CLI) tools and a GUI. The CLI tools are used to automate the interface with the PHARAON project toolset, while the GUI allows the developers to inspect the results of the modelling. The vfcc compiler is one of the most important CLI tools: it translates the source code into a generic executable for a target-independent intermediate instruction set architecture. This code is then run by the Pareon simulator using the necessary test data, input files, environment variables, etc. to collect various statistics. These can then be converted into estimates for a particular hardware target platform using report commands.
Pareon performance analysis is only a few hundred times slower than native execution, which is much faster than the usual gate-level back-annotated timing and power modelling tools in the EDA industry. Due to architecture virtualization, Pareon can model configurations that do not exist (yet) as hardware components, e.g., using more processor cores, or easily perform design space exploration by changing the platform or the operating conditions for the simulation step.
4Extensive Pareon documentation is available online at http://www.vectorfabrics.com/docs/pareon/current/
2.2.2 Performance histograms
Pareon performance analysis can generate run-time timing and invocation count histograms for functions and loops. These are important for parallelization decisions, since the execution parameters of the same code may depend on the context (e.g., on function arguments). For example, the loop body in Listing 2 has a variable execution time depending on the function argument. Thus, parallelizing the loop in the `main()` function using round-robin scheduling is inefficient, since the invocation time is not constant: it would lead to an imbalanced load and low speedups with respect to dynamic scheduling.
**Listing 2: Varying function timing**
```c
int foo(int n)
{
int s = 0, i;
for (i = 0; i < n; i++)
s += i * i;
return s;
}
int main()
{
int val[] = {5, 4, 2, 3, 4, 4, 3, 5};
int i;
for (i = 0; i < 8; i++)
val[i] = foo(val[i]);
return 0;
}
```
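Under that diagnosis, the remedy is a one-clause change: replace the implicit static (round-robin) schedule with a dynamic one, so that each thread grabs a new iteration as soon as it finishes the previous one. A minimal sketch based on the code of Listing 2 (the `schedule(dynamic)` clause is standard OpenMP; without `-fopenmp` the pragma is ignored and the loop runs serially):

```c
int foo(int n)
{
    int s = 0, i;
    for (i = 0; i < n; i++)
        s += i * i;
    return s;
}

/* Apply foo() to every element. With OpenMP enabled, schedule(dynamic)
 * hands iterations to threads on demand, so short foo(2) calls do not
 * leave a thread idle while a long foo(5) call finishes elsewhere. */
void apply_foo(int *val, int count)
{
    int i;
    #pragma omp parallel for schedule(dynamic)
    for (i = 0; i < count; i++)
        val[i] = foo(val[i]);
}
```

The result is identical to the sequential version; only the load balance across threads changes.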
Pareon histograms can be explored using the tool GUI or can be exported for integration in other tools, e.g., to complement the call stack-based analysis of ParTools. For example, Figure 9 shows the timing histogram for a loop in terms of the number of times it has been executed and how long each execution took (grouped in time bins). The time bins can be explored further, as shown in Figure 10 for bin #25, to look for specific patterns as follows. If there are multiple spikes with few iterations per invocation, the speedup is limited by the parallelization overhead of the loops with few iterations. One spike with many iterations per invocation generally benefits from parallelization. Loops with a non-constant body execution time may benefit most from dynamic scheduling to avoid workload imbalance.
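The binning behind such a histogram is straightforward; the sketch below shows how per-invocation times could be grouped into fixed-width time bins (an illustrative reconstruction, not Pareon's actual implementation):

```c
#define NBINS 32

/* Map one invocation time to a bin index; times past the last bin
 * edge are clamped into the final bin. */
int bin_index(double t, double t_min, double bin_width) {
    int b = (int)((t - t_min) / bin_width);
    if (b < 0) b = 0;
    if (b >= NBINS) b = NBINS - 1;
    return b;
}

/* Accumulate a trace of invocation times into NBINS counts. */
void build_histogram(const double *times, int n,
                     double t_min, double bin_width, int hist[NBINS]) {
    for (int i = 0; i < NBINS; i++) hist[i] = 0;
    for (int i = 0; i < n; i++)
        hist[bin_index(times[i], t_min, bin_width)]++;
}
```

A single dominant bin then corresponds to the "one spike" pattern above, while counts spread across many bins indicate the variable execution times that call for dynamic scheduling.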
2.3 OpenStream: OpenMP extension for data-flow and stream parallelism
OpenStream\(^5\) is a stream programming language, designed as an incremental extension to the OpenMP parallel programming language [21]. It allows expressing arbitrary task-level data flow dependence patterns through compiler annotations (pragmas) that dynamically generate a streaming program. The language supports nested task creation, modular composition, variable and unbounded sets of producers/consumers, and first-class streams. These features allow translating high-level parallel programming patterns into efficient data-flow code. OpenStream is provided as a tightly integrated collection of compilation, code generation, and concurrent runtime algorithms for task-level parallel programming, particularly effective on embedded multicore.
Data-flow execution is essential to reduce energy consumption, one of the primary focuses of the PHARAON project, by reducing the severity of the memory wall in two complementary ways: (1) thread-level data flow naturally hides latency and (2) decoupled producer-consumer pipelines favor on-chip communication, bypassing global memory. Furthermore, OpenStream exceeds the performance of state-of-the-art parallel programming environments like StarSs. Figure 11 shows that the OpenStream speedups over sequential execution (solid) exceed those of StarSs (dashed) for a block-sparse matrix LU factorization on a dual-socket AMD Opteron Magny-Cours 6164HE machine with 2×12 cores at 1.7 GHz, thanks to a runtime optimized for low-overhead synchronization and a work-stealing scheduler that improves on Chase and Lev's concurrent double-ended queue. OpenStream has been ported to the x86 and ARM architectures; the ARM port is optimized for that architecture's weak memory model, leveraging recent progress in memory consistency formalization to obtain a first proof of the relaxed double-ended queue [17]. Figure 12 shows that the optimized ARM code generally outperforms the original sequentially consistent Chase–Lev queue on a variety of benchmarks, including a selection of standard fine-grained task-parallel computations.
OpenStream also efficiently addresses another concurrent data structure critical for parallel languages and embedded multiprocessors: single-producer, single-consumer (SPSC) FIFO queues. These may arise from a variety of parallel design patterns and from the distribution of Kahn process networks over multiprocessor architectures. With WeakRB [17] we focus on portability and correctness, through a concurrent implementation in C11, and on performance, through advanced caching and batching extensions, relaxed hypotheses on memory ordering, and use of the low-level C11 atomics with relaxed memory consistency. We validate its portability and performance on 3 architectures with diverse
\(^5\)OpenStream project http://www.di.ens.fr/OpenStream
Figure 13: Comparison between MCRB and WeakRB on Cortex A9
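The core of such an SPSC queue fits in a few lines of C11. The sketch below mirrors the acquire/release discipline of such ring buffers but is a simplified illustration, not the actual WeakRB code (which adds the caching and batching extensions mentioned above):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QCAP 1024

typedef struct {
    int buf[QCAP];
    _Atomic size_t head;   /* next slot to pop  (owned by the consumer) */
    _Atomic size_t tail;   /* next slot to push (owned by the producer) */
} spsc_t;

/* Producer side: returns false if the queue is full. */
bool spsc_push(spsc_t *q, int v) {
    size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QCAP)
        return false;
    q->buf[t % QCAP] = v;
    /* release: the consumer must observe the stored value no later
     * than the new tail */
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false if the queue is empty. */
bool spsc_pop(spsc_t *q, int *out) {
    size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (t == h)
        return false;
    *out = q->buf[h % QCAP];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}
```

Because each index is written by exactly one thread, no compare-and-swap is needed; the acquire/release pairs are the only synchronization, which is what makes such queues cheap on weakly ordered architectures like ARM.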
3. EXPERIMENTAL EVALUATION
3.1 Comparative use test
To illustrate the benefits of the PHARAON toolset flow in this respect, we present the results of a comparative use test. Its purpose is to show how the toolset helps relatively inexperienced users parallelize a previously unknown legacy application more effectively. We used students from a second-year course of the electronics engineering master (5th year overall) that covers modelling languages, such as SystemC, Esterel and Kahn process networks, and the associated synthesis and verification algorithms and tools. The course does not specifically teach how to parallelize software; the students had only used the SystemC language to model multiple threads communicating via signals (i.e., using the Moore synchronous reactive model). They had also been exposed to the concept of Kahn process networks, but had never written code using this computation model. Hence, the students entered the experiment without any experience in writing parallel software.
The test assignment was to analyze and parallelize three real-life use cases: an MJPEG encoder, a ray tracing algorithm, and a cascade of two FIR filters. The groups were partitioned into two sets. One set used only standard tools, and its results were used as a baseline to assess the effects of using the toolset; the other set was required to use the PHARAON toolset in addition to the standard tools, and its results were evaluated against those of the first set separately, for each parallelization candidate program.
The students were requested to spend at most a couple of days on the parallelization. Only 9 groups out of 11 completed the assignment; the results of the test are summarized in Figure 14. The X axis lists the test cases as follows: "mjpeg" is an MJPEG encoding algorithm with an acyclic data dependency graph at the top level; "FIR" is a pair of cascaded FIR filters; "raytracer" is a ray tracing application with well-known top-level data parallelism. The baseline groups used only standard code analysis and development tools (such as gprof and OpenMP parallelization pragmas), while the other groups additionally used the PHARAON toolset. The Y axis shows the time (in hours) needed to complete the various phases of the parallelization assignment, and the speedup obtained on a 4-core Intel architecture.
The graph indicates: the training time to get acquainted with the tools; the time to perform the first parallelization (discover parallelism, analyze the data dependencies, write the parallel code using OpenMP pragmas, and debug the results so that the execution was correct); the time to further optimize the parallelized code to improve the speedup; and the final speedup with respect to sequential execution.
The results of the "mjpeg" test show that using the toolset considerably reduced the parallelization time, at the cost of more training time. Moreover, the time invested in learning the toolset appears to pay off by reducing the parallelization time later. The final speedup results are similar, with some variability that does not appear to depend on the use of the toolset. The "FIR" test shows that the group using the toolset was the only one obtaining any speedup, although learning how to use the toolset in this case took a long time. Also for the "raytracer" test, the use of the toolset reduced the parallelization time at the cost of more training time, and led to a slightly better speedup than without it. However, neither group obtained a functionally correct parallelization, since they missed some of the data dependencies due to the incompleteness of their code analysis. Up to a point this is unavoidable because of the "optimistic", trace-based, manual parallelization approach used. However, it prompted us to extend the toolset after the experiment with the capability to insert the data dependencies as comments in the source code, as shown in Figure 7.
3.2 Stereo vision use case
Stereo vision applications infer 3D scene geometry from two images with different viewpoints by calculating a dense disparity or depth map from a pair of images under known camera configuration. The parallelization flow of the PHARAON toolset described in Section 2 was used on the application code as presented in Section 2.4.
Unfolding the highest level of abstraction shown in Figure 4 revealed that the fold of the process() call holds almost all program execution. Unfolding it shows that function process_disp() holds 99.85% of program execution. Unfolding this one reveals visually right away that three folds hold most of the execution load, as can be seen in the center of Figure 15: computeMatches() at 53.11% (top), computeDisparity() at 22.61% (bottom), and mean() at 17.61% (right). We also notice that the data dependencies between these are not very strong, suggesting that these folds may be suited for data-parallelism. In the corresponding source code, we find that in the computeMatches() fold the function computeMatchingDisparity() is called twice within the body of the innermost of two nested loops, as shown in Listing 3 (without the leading pragma).
Listing 3: Contents of computeMatches() fold
```c
#pragma omp parallel for
for (u_can=1; u_can<D_can_width; u_can++) {
    ...
    for (v_can=1; v_can<D_can_height; v_can++) {
        ...
        d=computeMatchingDisparity(&pu,&pv, ...
        if (d>=0) {
            ...
            computeMatchingDisparity(&pdif,&pv, ...
            ...
        }
    }
}
```
Analyzing the data dependency view, the loop histogram and the source code, we deduce that the two calls to computeMatchingDisparity() are independent and that the iterations of the outer loop do not show major imbalances. Thus, the loop can be sliced and executed in parallel using the OpenMP pragma shown at the top of Listing 3.
A similar analysis shows that the best parallelization for the other two candidates in Figure 15 (computeDisparity() and mean()) can be where they are called in process_disp(), as shown in Listing 4 for the former.
Listing 4: Call of computeDisparity() function
```c
computeDisparity(p_support,tri_1, ...)
computeDisparity(p_support,tri_2, ...)
```
Dependency analysis shows that the two calls are independent and can be executed in parallel as shown in Listing 5:
Listing 5: Parallelization of computeDisparity() call
```c
#pragma omp parallel sections
{
    #pragma omp section
    {
        computeDisparity(p_support,tri_1, ...)
    }
    #pragma omp section
    {
        computeDisparity(p_support,tri_2, ...)
    }
}
```
The call to mean() is analyzed and parallelized analogously.
The speedup of these parallelizations was measured on the target architecture composed of two Intel i5 cores running at 1.20GHz (i5-3230M) processing a set of images of 1024×768 pixels. The results are reported in Table 1 and show the effectiveness of the analysis using the PHARAON toolset to find good parallelization opportunities.
OpenStream parallelizations followed similar patterns (data-parallel loop and parallel sections) since no inter-task dependencies with streams looked promising. However, unlike the OpenMP-based parallelization, OpenStream focused on lower granularity parts of the code to leverage the efficiency of its run-time. The results obtained are reported in Table 2.
4. CONCLUSION
This work presents the toolset and techniques developed in the PHARAON project, with particular emphasis on the support for parallelization of legacy C code for multiprocessor platforms. They implement a complete flow, from UML modeling to final implementation, helping to reduce the development time, increase the performance and reduce the energy consumption.
The parallelization flow includes several tools. A performance estimation tool (Pareon) is used to extract timing and energy estimations for the code under analysis. The parallelization tool (ParTools) performs execution profiling and collects data dependencies program-wide at run-time. These, along with performance estimations, are shown in an interactive analysis interface at selectable levels of abstraction and analysis to help the developer decide on the best parallelization techniques and opportunities. The support for streaming-oriented parallelization is provided by OpenStream, an extension to the OpenMP standard.
The effectiveness of the parallelization toolset is demonstrated on two practical cases. The first, a use test involving inexperienced users, demonstrates the increase in parallelization quality and the reduction of parallelization time due to the use of the toolset. The second demonstrates the use of the toolset for the parallelization of a stereo vision application of practical interest. The toolset helps to identify good parallelization candidates, at the proper level, and to analyze their data dependencies and execution timings in order to choose the parallelization technique that achieves a significant speedup.
5. ACKNOWLEDGMENTS
This work is performed in the framework of the FP7-288307 funded project PHARAON. The stereo vision application described in this paper has been kindly provided by Tedesys within the PHARAON project.
6. ADDITIONAL AUTHORS
Alexandru Sutii (Vector Fabrics, Eindhoven, The Netherlands, email: alexandru@vectorfabrics.com).
7. REFERENCES
An Indexing Structure for Automatic Schema Matching
Fabien Duchateau, Zohra Bellahsene, Mark Roantree, Mathieu Roche
To cite this version:
HAL Id: lirmm-00138117
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00138117
Submitted on 1 Dec 2007
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Poster Session: An Indexing Structure for Automatic Schema Matching
Fabien Duchateau
LIRMM - UMR 5506
Université Montpellier 2
34392 Montpellier Cedex 5 - France
duchatea@lirmm.fr
Zohra Bellahsène
LIRMM - UMR 5506
Université Montpellier 2
34392 Montpellier Cedex 5 - France
bella@lirmm.fr
Mark Roantree
Interoperable Systems Group
Dublin City University, Ireland
mark.roantree@computing.dcu.ie
Mathieu Roche
LIRMM - UMR 5506
Université Montpellier 2
34392 Montpellier Cedex 5 - France
mroche@lirmm.fr
Abstract
Querying semantically related data sources depends on the ability to map between their schemas. Unfortunately, in most cases matching between schemas is still largely performed manually or semi-automatically. Consequently, finding semantic mappings has become the principal bottleneck in the large-scale deployment of mediation systems, where the number of ontologies and/or schemas to be put in correspondence is very large. Current mapping tools employ techniques for matching two schemas at a time, with human intervention to ensure a good quality of mappings. Such techniques are not suitable in a large-scale scenario, which requires an automated, performance-oriented solution. Moreover, the automated method should also provide an acceptable quality of mappings. In this paper, we present an automatic schema matching approach dealing with two aspects, performance and quality of mappings, with a focus on performance. To this end, our method uses a B-tree index structure. Our approach has been implemented, and experiments with real sets of schemas show that it is scalable and provides very good performance.
1. Introduction
Interoperability among applications in distributed environments, including today’s World-Wide Web and the emerging Semantic Web, depends critically on the ability to map between them. Unfortunately, automated data integration, and more precisely matching between schema, is still largely done by hand, in a labor-intensive and error-prone process. As a consequence, semantic integration issues have become a key bottleneck in the deployment of a wide variety of information management applications. The high cost of this bottleneck has motivated numerous research activities on methods for describing, manipulating and (semi-automatically) generating schema mappings.
The schema matching problem consists in identifying one or more terms in a schema that match terms in a target schema. Current semi-automatic matchers [7, 1, 9, 12, 5, 10, 14] calculate various similarities between elements and keep the pairs with a similarity above a certain threshold. The main drawback of such matching tools is performance: although the matching quality they provide is acceptable, the time needed to match restricts them to a static and limited number of schemas. Yet many domain areas require a dynamic environment involving large sets of schemas. Today's matching tools must therefore combine acceptable quality with good performance.
In a previous work [4], we presented the Approxivect method for calculating a semantic similarity between two elements from different XML schemas. Contrary to similar works, this approach is automatic, does not use any dictionary or ontology, and is both language and domain independent. It combines terminological algorithms and structural rules. The terminological approaches discover similarities between elements whose labels are close character strings. The structural rules, in turn, define the notion of context of a node. This context includes some of its neighbours, each of which is assigned a weight representing its importance when evaluating the contextual node. Vectors composed of neighbour nodes are compared with the cosine measure to detect any similarity. Finally, the different measures are aggregated for all pairs of nodes. We have shown in [4] that Approxivect provides a good quality of mappings compared with the existing tool COMA++ [1].
Unfortunately our Approxivect approach, like most matchers, fails to provide good performance in terms of time. The motivation of our work is to improve this aspect by using an indexing structure to accelerate the schema matching process, while still ensuring an acceptable quality through Approxivect. The B-tree structure has been chosen to reach this goal, since it is designed to search efficiently for an index among a large quantity of data. Indeed, we assume that two similar labels share at least one common token, so instead of parsing the whole schema, we simply search for the tokens indexed in the B-tree. A prototype, BtreeMatch, has been designed. We performed experiments on large sets of schemas, and the results show that our approach is scalable. Our main contributions are:
- An indexing structure for matching, which provides good performance.
- The use of tokenisation, terminological measures and context matching of label tokens, thus clustering similar labels.
- Experiments with a large number of real XML schemas (OAGIS, XCBL) showing good performance, applicable to a large-scale scenario.
The rest of the paper is structured as follows: first we briefly explain some general concepts in Section 2; in Section 3, an outline of our method is described; in Section 4, we present the results of our experiments; an overview of related work is given in Section 5 and in Section 6, we conclude and outline some future work.
2 Preliminaries
We consider schemas as rooted, labeled trees. This provides us the benefit for computing contextual semantics of nodes in the schema hierarchy.
Definition 1: A schema $S = (V_S, E_S, r_S, label)$ where:
- $V_S$ is a set of nodes;
- $r_S$ is the root node;
- $E_S \subseteq V_S \times V_S$ is a set of edges;
- $label : V_S \rightarrow \Lambda$ where $\Lambda$ is a countable set of labels.
Definition 2: Let $V$ be the domain of schema nodes. The similarity measure is a concept whereby two or more terms are assigned a metric value based on the likeness of their meaning / semantic content [6]. In the case of two schema nodes, it is a function $V \times V \rightarrow \mathbb{R}$, noted $sim(n, n')$, defined for two nodes $n$ and $n'$. Note that the semantic similarity depends on the method used to calculate it. In general, a value of zero means total dissimilarity, whereas the value 1 stands for totally similar concepts.
Definition 3: A mapping is an unspecified relationship $rel$ between nodes of two schemas $V_S$ and $V'_S$:
$V_S \times V'_S \rightarrow rel$
The relationship between nodes can include synonyms, equality, hyperonyms, hyponyms, etc. The similarity measure between the two nodes may be compared with a certain threshold, defined by an expert, to determine if two elements should be mapped.
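As a small illustration of Definitions 2 and 3, mappings can be derived from precomputed similarities by thresholding. The similarity values and the 0.5 default threshold below are illustrative assumptions, not values from the paper:

```python
def discover_mappings(similarities, threshold=0.5):
    """similarities: dict mapping (n, n') node pairs to sim(n, n') in [0, 1].
    Returns the pairs whose similarity reaches the expert-defined threshold."""
    return [pair for pair, sim in similarities.items() if sim >= threshold]

# Hypothetical similarity values for two node pairs.
sims = {("courses", "grad courses"): 0.8,
        ("courses", "staff"): 0.1}
print(discover_mappings(sims))  # [('courses', 'grad courses')]
```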
Example of schema matching: Consider the two following schemas used in [3]. They represent the organization of computer science departments in universities from different countries and have been widely used in the literature.
With those schemas, the ideal set of mappings given by an expert is {$\text{(CS Dept Australia, CS Dept U.S.)}$, $\text{(courses, undergrad courses)}$, $\text{(courses, grad courses)}$, $\text{(staff, people)}$, $\text{(academic staff, faculty)}$, $\text{(technical staff, staff)}$, $\text{(lecturer, assistant professor)}$, $\text{(senior lecturer, associate professor)}$, $\text{(professor, professor)}$}.
3 Overview of our approach
In this section, we first introduce the two components of our approach: the Approxivect component, which focuses on the semantic aspect, and the B-tree indexing structure component, which deals with the performance aspect. We then describe our method combining these two components.
3.1 Approxivect approach
The Approxivect approach aims at discovering similarities between XML elements. One of its distinctive features is that it can discover several kinds of relationships (synonyms, hyponyms, ...) without relying on any dictionary, and it is not domain specific. See [4] for more details.
The Approxivect (Approximation of vectors) approach is based on two steps: first, we replace labels when they have close character strings. This step uses the Levenshtein distance and 3-gram algorithms [6, 8]. Second, we calculate the cosine measure [13] between two vectors to determine whether their contexts are close. By context we mean some important neighbour nodes, such as ancestors and descendants. A formula was provided in [4] to calculate the importance of those neighbours for a given node.
Let us present the Approxivect algorithm. The two schemas are traversed in preorder and all node labels are compared pairwise with the Levenshtein distance and the 3-grams. Both measures are computed and combined according to the adopted strategy; in our previous experiments, the maximum and average strategies proved to be good compromises. The resulting value is denoted SM, for String Measure.
If SM is above a certain threshold, defined by an expert, then some replacements may occur. We replace the label with the larger number of characters by the label with the smaller number of characters, considering that the smaller label is more general than the larger one. This assumption is easy to check, since some labels may be written in the singular and others in the plural. At the end of this first step, we thus obtain the initial schema, possibly modified by character string replacements.
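The String Measure step can be sketched as follows. The normalisation of the Levenshtein distance and the Dice-style 3-gram similarity are our own assumptions for illustration; the paper does not give the exact formulas:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def trigram_sim(a, b):
    # Dice coefficient over the sets of 3-grams of the two labels.
    ta = {a[i:i + 3] for i in range(len(a) - 2)}
    tb = {b[i:i + 3] for i in range(len(b) - 2)}
    if not ta or not tb:
        return 0.0
    return 2 * len(ta & tb) / (len(ta) + len(tb))

def string_measure(a, b, strategy="average"):
    # Aggregate both measures with the "max" or "average" strategy.
    lev = 1 - levenshtein(a, b) / max(len(a), len(b), 1)  # normalised to [0, 1]
    tri = trigram_sim(a, b)
    return max(lev, tri) if strategy == "max" else (lev + tri) / 2
```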
In the second part of the algorithm, we traverse the schemas again (in which some string replacements may have occurred during the first Approxivect step), and the context vector of the current element is extracted in each schema. The neighbour elements composing this vector may be ancestors, descendants, siblings or more distant nodes of the current element. Each of them is assigned a weight reflecting the importance of this neighbour with regard to the current node. The two context vectors are compared using the cosine measure, in which we include the weight of the node: when counting the number of occurrences of a label, we multiply this number by its weight. This process yields CM, the cosine measure between two context vectors, and thus the semantic similarity between the two nodes related to these contexts.
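The weighted cosine comparison described above can be sketched as follows. Representing a context as a label-to-weight dictionary is an assumption for illustration; the actual weight formula is given in [4]:

```python
import math

def weighted_cosine(ctx_a, ctx_b):
    """ctx_a, ctx_b: dicts mapping neighbour labels to weighted occurrence
    counts (occurrences multiplied by the neighbour's importance weight)."""
    common = set(ctx_a) & set(ctx_b)
    dot = sum(ctx_a[label] * ctx_b[label] for label in common)
    norm_a = math.sqrt(sum(w * w for w in ctx_a.values()))
    norm_b = math.sqrt(sum(w * w for w in ctx_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```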
The matching quality has already been compared to that of another schema matcher, COMA++ [1]. Experiments on several pairs of schemas showed that Approxivect offers an acceptable quality [4]. For example, consider schemas 1 and 2: an expert would find 9 relevant similarities; COMA++ finds only 5 of them, while Approxivect is able to discover all the relevant similarities. But Approxivect suffers from the same drawback as the other matchers: slow performance. The next section presents an indexing structure to accelerate the matching.
3.2 An indexing structure: the B-tree
In our approach, we use the B-tree as the main structure to locate matches and create mappings between XML tree structures. The advantage of searching for mappings with a B-tree is that its indexes significantly accelerate the process. Consider schemas 1 and 2: they have respectively 8 and 9 elements, implying 72 matching possibilities for an algorithm that tries all combinations. Those schemas are small examples, but in some domains schemas may contain up to 6000 elements. By indexing in a B-tree, we are able to reduce the number of matching possibilities, thus improving performance.
As described in [2], B-trees have many useful features. A B-tree is composed of nodes, each holding a list of indexes. A B-tree of order \( M \) means that each node can have up to \( M \) children and contains at most \( M-1 \) indexes. Another feature is that the B-tree is balanced, meaning all the leaves are at the same level, which enables fast insertion and fast retrieval: a search in a B-tree of \( n \) nodes visits only \( 1+\log_M n \) nodes to retrieve an index. This balancing involves some extra processing when adding new indexes into the B-tree, but its impact is limited when the B-tree order is high.
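The retrieval-cost claim is easy to check numerically (`max_node_visits` is a hypothetical helper, not part of BtreeMatch):

```python
import math

def max_node_visits(n, order):
    """Upper bound on the number of nodes visited when searching a
    B-tree of the given order containing n nodes: 1 + log_order(n)."""
    return 1 + math.log(n, order)

# With order 100, even a million-node B-tree needs only about 4 node
# visits, while a linear scan over all labels would be prohibitive.
print(round(max_node_visits(1_000_000, 100)))  # 4
```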
The B-tree is widely used in databases due to its efficient storage capabilities. Since schema matchers need to store and quickly retrieve a lot of data when matching, an indexing structure such as a B-tree can improve performance. The B-tree has been preferred to the B+tree (which is commonly used in database systems) since we do not need the costly delete operation. Under this condition, the B-tree seems more efficient than the B+tree, because it stores fewer indexes and is able to find an index more quickly. As most databases use a B+tree structure, we did not consider
3.3 Our BtreeMatch approach
By using both Approxivect and the B-tree structure, the objective is to combine their main advantages: an acceptable matching quality and good performance. Contrary to most other matching tools, BtreeMatch does not use a matrix to compute the similarity of each pair of elements. Instead, a B-tree whose indexes represent tokens is built and enriched as we parse new schemas, and the discovered mappings are also stored in this structure. Each token references all labels that contain it. For example, after parsing schemas 1 and 2, the courses token would hold three labels: courses from schema 1, and grad courses and undergrad courses from schema 2. Note that the labels grad courses and undergrad courses are also stored under the grad and undergrad tokens, respectively.
For each input XML schema, the same algorithm is applied: the schema is parsed element by element in preorder traversal, which enables the computation of the context vector of each element. The label is split into tokens. We then look up each of those tokens in the B-tree, with two possible outcomes:
- no token is found, so we simply add it to the B-tree with a reference to the label;
- or the token already exists in the B-tree, in which case we try to find semantic similarities between the current label and those referenced by the existing token. We assume that in most cases similar labels share a common token (and if not, they may still be discovered through context similarity).
Let us illustrate this case. When courses is parsed in schema 1, the label is first tokenized, resulting in the single token courses. We search the B-tree for this token, but it does not exist. Thus we create a token structure whose index is courses, which stores the current label courses, and add it to the B-tree. Later on, we parse grad courses in schema 2. After tokenization, we obtain the tokens grad and courses. We search the B-tree for the first token of the set, but grad does not exist; a token structure with grad as index is inserted into the B-tree, storing the grad courses label. Then the second token, courses, is searched in the B-tree. As it already exists, we browse all the labels it contains (here only the label courses) to calculate the Approxivect String Measure, denoted SM, between them and grad courses. Approxivect can replace one of the labels by the other if they are considered similar (depending on the Approxivect parameters). In any case, grad courses is added to the courses structure. The next parsed element is undergrad courses, composed of the two tokens undergrad and courses. The first results in an unsuccessful search, causing an undergrad token structure to be created. The second token is already in the B-tree, and it contains the two labels previously added: courses and grad courses. The String Measures are computed between undergrad courses and these two labels, involving replacements if SM reaches a certain threshold, and undergrad courses is added to the label list of the courses token structure. The index thus enables us to quickly find the common tokens between occurrences, and to limit the String Measure computation to only a few labels.
At this step, some string replacements might have occurred. The parser then recursively performs the same actions for the descendant nodes, which also adds the children to the context. Once all descendants have been processed, similarities may be discovered by comparing the label with the tokens' references using the cosine and terminological measures. A parameter can be set to extend the search to the whole B-tree if no mapping has been discovered.
Let us continue our example. After processing undergrad courses, we would go on with its children, but as it is a leaf, we instead search the B-tree again for all the tokens composing the label undergrad courses. Under the undergrad token, we find only one label, itself, so nothing happens. Under the courses token, only one of the three existing labels, namely courses, is interesting (one is itself and the other, grad courses, is in the same schema). The String Measure is thus applied between courses and undergrad courses. The Cosine Measure is also computed between their respective contexts, and the aggregation of these two measures gives the semantic measure between those labels. If this semantic measure reaches a certain threshold, a mapping may be discovered.
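The indexing walkthrough above can be sketched as follows. For brevity a plain dictionary stands in for the B-tree, and `compare` abstracts the String/Cosine Measure computation; both are simplifications for illustration, not the actual BtreeMatch implementation:

```python
def tokenize(label):
    return label.split()

def index_label(index, label, schema_id, compare):
    """Insert a label under each of its tokens; call `compare` against
    labels already stored under a shared token (matching candidates)."""
    for token in tokenize(label):
        bucket = index.setdefault(token, [])
        for other_label, other_schema in bucket:
            if other_schema != schema_id:  # only match across schemas
                compare(label, other_label)
        bucket.append((label, schema_id))

index, candidates = {}, []
note = lambda a, b: candidates.append((a, b))
index_label(index, "courses", 1, note)
index_label(index, "grad courses", 2, note)
index_label(index, "undergrad courses", 2, note)
print(sorted(index))  # ['courses', 'grad', 'undergrad']
print(candidates)     # [('grad courses', 'courses'), ('undergrad courses', 'courses')]
```

Only labels sharing a token are ever compared, which is exactly how the index limits the String Measure computation to a few labels.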
4 Experiments
To evaluate the benefit provided by the index structure, we compared Approxivect alone against the whole method (i.e., Approxivect + B-tree indexing structure). To properly evaluate our work, we developed a prototype of BtreeMatch in Java, using the SAX parser. As experiments on matching quality comparing Approxivect and COMA++ have already been reported in previous work [4], we do not deal with quality here; we focus only on performance, namely the time spent matching a large number of schemas. For these experiments we used a 2 GHz Pentium 4 laptop running Windows XP, with 2 GB RAM, and Java Virtual Machine 1.5. The context of a node is limited to its parent and its children nodes.
Although this constraint could be removed, the Approxivect experiments showed that the context should not include too many distant nodes, which can have a bad impact on quality.
Table 1 shows the features of the sets of schemas used in our experiments. Two large-scale scenarios are presented: the first involves a thousand average-sized schemas about business-to-business e-commerce, taken from the XCBL standards. In the second case, we deal with OASIS schemas, which are also business-domain related. We use only several hundred of those schemas because they are quite large, with an average of 2000 nodes per schema.
4.1 First scenario: XCBL
Here we compare the performance of Approxivect and BtreeMatch on a large set of average-sized schemas. The results are illustrated by the graph depicted in Figure 3. Approxivect is efficient when the number of schemas is not very large (fewer than 1600). BtreeMatch provides good performance with a larger number of schemas: two thousand schemas are matched in 200 seconds.
4.2 Second scenario: OASIS
In this scenario, we are interested in matching large schemas, with an average of 2000 nodes. The graph depicted in Figure 4 shows that Approxivect is not suited to large schemas. On the contrary, BtreeMatch is able to match an important number of large schemas quickly, and the graph shows that it scales almost linearly: for 900 schemas, BtreeMatch needs around 130 seconds to perform the matching.
5 Related Work
In the literature, many schema matching approaches [7, 1, 9, 12, 5, 10, 14] have been studied at length.
[Table 1. Characterization of the schema sets — rows: average number of nodes per schema, largest/smallest schema size, maximum depth; the cell values were not preserved in this extraction.]
1. [www.xcbl.org](http://www.xcbl.org)
2. [www.oagi.org](http://www.oagi.org)
Most of them have been designed to demonstrate their benefit in different scenarios. However, current mapping tools employ techniques for mapping two schemas with human intervention. To the best of our knowledge, [11] is the only previous work dealing with large schemas, using the COMA++ tool [1]. In this work, the user first divides the schema into fragments, and each fragment of the source schema is then mapped to target schema fragments to find inter-fragment matchings. Next, these fragment mappings are merged to compute the schema-level mappings. Thus, the tool is not able to process large schemas directly. Another issue with this approach [11] is which criterion is best for fragmenting the large schemas.
To conclude, the existing tools are semi-automatic and designed for small schemas. Moreover, they did not focus on the performance aspect, while our method has the following properties:
- automatic
- designed for processing large schemas
- scalable
6 Concluding Remarks
In this paper, we presented our BtreeMatch approach to reduce the time spent on schema matching. Our method is based on a B-tree structure, which includes an index mechanism and a module for discovering semantic similarity between schema elements. To evaluate the benefit provided by the index structure, we compared Approxivect alone against BtreeMatch (i.e., Approxivect + B-tree indexing structure). The experiments have shown that BtreeMatch is faster than Approxivect in most cases, especially when the amount of information that needs to be stored becomes large. An indexing structure can be needed when the schemas are either very large or numerous. Note that the B-tree can directly store the mappings in memory, whereas Approxivect needs another means of storage.
The results of our experiments are very encouraging, showing that our method is scalable and provides good performance and quality of mappings. We plan to seek out schemas involving more heterogeneity, which requires enhancing Approxivect with specific parsers for each file format. Currently, the corpora of schemas available on the web are normalized, i.e. there are no synonyms and tokens share the same delimiters. One of our ongoing works is therefore to establish a benchmark involving a large corpus of non-normalized schemas for evaluating schema matching tools.
References
Modeling Mixed-critical Systems in Real-time BIP
Dario Socci, Peter Poplavko, Saddek Bensalem, Marius Bozga
To cite this version:
Dario Socci, Peter Poplavko, Saddek Bensalem, Marius Bozga. Modeling Mixed-critical Systems in Real-time BIP. 1st workshop on Real-Time Mixed Criticality Systems, Aug 2013, Taipei, Taiwan. <hal-00867465>
HAL Id: hal-00867465
https://hal.archives-ouvertes.fr/hal-00867465
Submitted on 30 Sep 2013
Modeling Mixed-critical Systems in Real-time BIP
Dario Socci, Peter Poplavko, Saddek Bensalem and Marius Bozga
CNRS VERIMAG UMR 5104,
Grenoble, F-38041, France
Abstract—The proliferation of multicores and manycores creates an important design problem: design and verification under mixed-criticality constraints on timing and safety, taking into account resource sharing and hardware faults. In our work, we aim to contribute towards the solution of these problems by using a formal design language, real-time BIP, to model both hardware and software, functionality and scheduling. In this paper we present initial experiments in modeling mixed-criticality systems in BIP.
I. INTRODUCTION
The introduction of manycores and multicores is leading to an increasing trend in embedded systems towards implementing multiple subsystems on a single shared platform. However, in most applications, not all the subsystems are equally critical. This observation is especially important when human lives depend on correct functionality, e.g. in avionics systems. In mixed-criticality systems, different degrees of failure, from minor and hazardous to major, need to be distinguished [1]. Previous work mostly assumes time or space isolation of subsystems having different levels of criticality. However, when integrating different subsystems on a single multicore die, there is a need to share hardware resources (processors, on-chip memory, and the global interconnect) between subsystems. Handling hardware failures safely is another design problem, one that will only grow as multicores become more and more commonplace. This problem has already manifested itself in popular manycore systems, the GPUs, so it is relatively well studied how to manage resource sharing and safety when all subsystems have the same level of criticality. However, adding the mixed-criticality assumption can easily boost the complexity from tractable to intractable [2], and a general lack of design methodology can be observed.
A popular language for programming safety-critical systems is Ada, and there is great interest today in expressing multicore and especially mixed-critical applications in this language [3], [4]. However, for verification of safety properties (i.e., automatic checking for the absence of bugs), any program has to be translated into a formal model, which is a non-trivial task even for Ada, a language designed to facilitate static analysis of code [5]. Formalization is required not only for the analysis of safety properties, but also for timing [6]. Formal models play an important role in the analysis of hardware faults and fault correction [7] and of shared-resource conflicts in multicores [8]. Therefore, in our mixed-criticality project we target manycore systems, addressing the technological challenges by using a formal design language, BIP. The design input can be either provided in BIP or obtained by translation from other languages.
A wide range of formal design languages exists, but most of them, referred to as models of computation, enable tractable analysis in exchange for reduced expressiveness. Software-based embedded systems would ideally be designed similarly to hardware, i.e., using a language such as Verilog/VHDL, for which all important physical properties, such as timing, consumed energy, and occupied space, can be formally imposed and/or derived in a fully automated design process. This is unfortunately very often not the case for software, and it requires significant effort, up to even a change in mentality of software developers, to write programs that are 'aware' of non-functional constraints [9]. Synchronous languages, such as Lustre [10], are an important step towards solving this problem; they are actively developing in the direction of multicore mapping [11] and are becoming an important subject of research for mixed-critical systems [12]. Applications written in widely used dataflow languages such as Simulink can also be translated into synchronous languages [13]. However, unlike their hardware-language counterparts Verilog/VHDL, for software the synchronous languages are far from a 'one-size-fits-all' solution, because they assume very specific properties of system behavior, and it can be very difficult and costly to tailor a given software project to fit these properties [14]. Therefore, rigorous embedded system design frameworks, such as BIP, do not restrict themselves to synchronous languages and offer themselves more openly, and in a more general way, to the functionality to be implemented in various safety-critical systems. At the same time, they share with synchronous languages the ability to reason about behavior formally and the potential to achieve full automation for given physical constraints in terms of timing, energy and space/weight.
The BIP framework is expressive enough to model various models of computation. Due to this unique expressiveness, it takes a special role in our design methodology: the same language is used to express both the application and the hardware, timing and functionality, scheduling and mapping. The paradigm of updating and analyzing a homogeneous intermediate formal model of a real design object to support design decisions is well recognized in the field of electronic design automation for hardware. Tools for logic synthesis and physical synthesis exploit so-called timing graphs, which provide an intermediate timing model of the digital logic design; these graphs are updated in conjunction with the modifications made by the design flow and are used to guide its decisions. The idea of using a timing graph to express the application, mapping and scheduling of multicore applications is less widely known, but such examples exist, e.g., [15].
In this paper we first present the BIP framework in general, and then present our current work on modeling mixed-criticality systems in BIP.
II. BIP COMPONENT FRAMEWORK
A. General BIP
The BIP (Behavior, Interactions, Priorities) framework [16] is built around a component-based language. This language enjoys a simple syntax with clear and expressive semantics. At the heart of BIP lies the idea of using as few kinds of building blocks as possible. Finite-state machines are well known as the 'bricks' used to construct the operational semantics of many more complex programming models, and they are widely used for formal validation of software and hardware. BIP therefore uses this basic concept directly in its language.
BIP supports a component-based modeling methodology based on the assumption that components are obtained as the superposition of three independent layers, that is:
1. Behavior, specified as a set of finite-state machines (basic components)
2. Interactions, used to coordinate the actions of behavior
3. Priorities, used to schedule among multiple enabled interactions
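The layered structure above can be sketched in ordinary Python. This is an illustrative analogy only: the class names and data structures are ours, not the BIP language or the BIP toolset API.

```python
class Component:
    """Behavior layer: a finite-state machine with guarded transitions."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        # transitions maps (state, port) -> (guard, next_state)
        self.transitions = transitions

    def enabled(self, port):
        t = self.transitions.get((self.state, port))
        return t is not None and t[0](self)

    def fire(self, port):
        self.state = self.transitions[(self.state, port)][1]


def step(interactions, priority):
    """Interaction layer: an interaction is a tuple of (component, port)
    pairs, enabled when every participant is ready (strong synchronization).
    Priority layer: among the enabled interactions, fire a preferred one."""
    ready = [ia for ia in interactions
             if all(c.enabled(p) for c, p in ia)]
    if not ready:
        return None
    chosen = max(ready, key=priority)  # the priority function filters
    for c, p in chosen:
        c.fire(p)
    return chosen
```

For example, two components with a producer port `out` and a consumer port `in` joined into one interaction will advance together in a single `step`, mirroring a strong synchronization in BIP.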
The states inside the components denote control locations where the components wait for interactions. A transition is an execution step from one control location to another. Each transition has an associated condition that enables it and an action that is executed when it fires. In BIP, all actions executed by transitions are written in C/C++, a popular and efficient programming language supported by most mature professional embedded development environments.
Multiple components run concurrently and execute interactions with each other. A transition of a component is executed only when some interaction involving it is enabled. An interaction can occur in two situations: when all involved components are ready to participate (strong synchronization), or when one component triggers the interaction without waiting for the others (broadcast). Every interaction can result in data transfer between the components. The valid interactions are formally defined by algebraic expressions, providing a small but provably complete and powerful mechanism for defining various synchronous and asynchronous communication protocols [17].
To filter amongst possible interactions, the designer can specify priorities between simultaneously enabled interactions. Interactions and priorities define a clean and abstract concept of composition glue. The glue in BIP is thus a first class concept with well-defined semantics that can be analyzed and transformed. Moreover, it enables expressiveness unmatched by any other existing programming model for concurrent systems [18].
BIP supports the construction of composite, hierarchically structured (sub-)systems. It lets developers compose systems by layered application of interactions and priorities. There is a clear separation between behavior (the finite-state machines) and composition glue (stateless interactions and priorities).
BIP and its real-time extension RT-BIP are currently supported by an extensible toolset comprising a concrete modeling language together with associated analysis and implementation tools. The BIP language leverages C++-style variable and data type declarations, expressions and statements, and provides additional structural syntactic constructs for defining component behavior, interactions and priorities.

The BIP framework provides constructs for dealing with parametric and hierarchical descriptions as well as for expressing timing constraints associated with behavior. The toolset offers functional validation, model transformation and code generation features; see the illustration of the toolset in Figure 1. In particular, code generation targets both simulation and execution (e.g., distributed, multi-threaded, real-time, etc.). It is important to note that both are driven by specific middleware, the so-called engines, available for both BIP and RT-BIP, for a single platform and for a distributed set of platforms. This allows one to run, explore and inspect the execution traces of systems. The BIP toolset also supports translations from various input languages.
B. Real-time BIP
Correct deployment of systems where multiple real-time applications run on a given hardware platform remains by far an open problem. A key challenge is meeting safety and timing constraints, whose satisfaction depends on the features of the execution platform, in particular its speed and the run-time variability of hardware characteristics such as the probability of hardware faults. Existing rigorous deployment techniques are applicable to specific classes of systems, e.g., those with periodic tasks (rate-monotonic analysis) or deterministic behavior (synchronous dataflow). When mixing different criticality levels on a single platform, different deployment setups must be combined: they cannot be considered in isolation, or else the isolation mechanisms themselves must be rigorously modeled and verified.
Real-time (RT) BIP [19] is an extension of the BIP component-based design language to a continuous-time model closely related to timed automata [6]. In addition to offering syntax and semantics for timing-aware modeling of concurrent systems, real-time BIP also envisions a general model-based implementation method for safety-critical multicore systems. This method is based on two models: (1) an abstract model representing the behavior of the real-time software with user-defined timing constraints; (2) a physical model representing the behavior of the real-time software running on a given platform. The former is obtained directly from the specification provided by the user. The latter is derived by augmenting the software model with detailed models of the processor and memory hardware blocks, the services provided by on-chip communication networks, and the runtime software libraries/kernels/schedulers. A necessary condition for a correct deployment is time safety: any timed execution sequence of the physical model is also an execution sequence of the abstract model, thus meeting all deadlines. Time safety means that the platform is fast enough to meet the timing requirements [19]. If, moreover, time safety is preserved when execution times are reduced, the system is said to be time robust. It is the physical model that is used for the final validation of a given design for time safety. For a time-robust system, a simple simulation with worst-case execution delays of actions suffices to validate time safety, due to the monotonic dependence of all system timing on the delays of the system components [19]. Sufficient static analysis conditions to check whether a system is time robust are given in [19].
In Section III we show, by means of an example, how to model and analyze mixed-critical systems in RT-BIP. This example is composed manually, but according to a particular architecture pattern for mixed-critical systems. We use it to introduce the elements of the BIP language on the fly as we construct the example. Having constructed the BIP model, we use the available RT-BIP engine to simulate all possible simple scenarios after which the system returns to its original state. Because each scenario in this example is time deterministic, this simulation is sufficient to prove time safety, showing the usefulness of the RT-BIP engine for at least partial validation of mixed-critical systems. Moreover, for the non-preemptive variant of the considered systems, the engine can also be used for their implementation.
III. MIXED-CRITICAL SYSTEMS IN BIP
In this section, we describe our current approach to modeling mixed-criticality systems in BIP. This approach follows the architectural pattern shown in Figure 2. There are one or more applications, connected to the environment via environment interfaces. To provide fallback possibilities in case of errors, the applications support different modes, corresponding to different levels of quality of service. A failure of an application means that it produces wrong results for its environment, so failures should all be detectable at the environment interfaces, which therefore signal them explicitly. To avoid failures, the system includes so-called observers, which monitor the state of the applications at run time and report violations, i.e., situations leading to failure conditions, to the run-time manager (RT Manager). The function of the latter is to grant hardware resources to the applications under a proper scheduling policy and to update the modes of the applications such that, when violations occur, failures are prevented. This is done by an acceptable degradation of service of the low-criticality applications, which prevents both failures and degradation of the high-criticality applications.
A. Modelling Task Systems
Following the architecture pattern in Figure 2, we implemented in BIP the class of mixed-criticality systems originally proposed in [20]. This class generalizes the classical real-time scheduling of periodic tasks on a single-core machine. [20] proposes a mixed-critical scheduling approach that ensures certification by certification authorities according to currently existing procedures; hence this approach is called certification-cognizant. The main idea is that the schedule should be such that the high-criticality tasks conform to the requirements of the certification agency, while all tasks also conform to the (less demanding) conventional requirements posed by system engineers. Finding a feasible schedule in this setup is a hard problem from the point of view of computer science theory: it is NP-complete even when the number of criticality levels is a fixed constant [2], which is the case in every practical application domain (avionics, automotive, etc.; consider the SIL levels of the IEC 61508 standard for the functional safety of industrial systems). The bad news is that it is hard, in general, to find a feasible schedule. The good news is that once a feasible schedule is found, it can be verified in time polynomial in the number of tasks, by simulating a (polynomial-size) set of alternative scenarios (ideally with some backtracking to avoid re-exploring the common parts of similar scenarios). This is exactly the strategy (though without backtracking yet) that we currently follow to verify the physical model built in RT-BIP for this class of systems. The notions of (basic) scenarios and the scenario-based verification of mixed-critical schedules are defined formally in [2].
The scheduling problem is defined over a set of periodic tasks, partitioned into two criticality levels: low (level 2) and high (level 1). Each task has two WCET values: $WCET_{LO}$ for the low criticality level and $WCET_{HI}$ for the high level (we assume only two criticality levels to simplify the explanation). The former corresponds to the WCET obtained with conventional WCET estimation tools, and the latter to the (sometimes much more pessimistic) WCET obtained with the tools used by certification authorities. $WCET_{HI}$ can also model problems related to hardware faults: for instance, a software module may mask such errors by performing error checking at the end of its execution; if an error is detected, the task is executed again, and this (exceptional) event of course increases the execution time. Every task is also assigned its 'own' criticality level, either level 1 (HI, high) or level 2 (LO, low). As in the usual scheduling model, each task also has a period and a relative deadline. A schedule is feasible if the following conditions are met:
**Condition 1:** (Normal mode) If all jobs run at most for their LO WCET, then both critical (HI) and non-critical (LO) jobs must complete before their deadline.
**Condition 2:** (Degraded mode) If at least one job runs for more than its LO WCET, then all critical (HI) jobs must complete before their deadlines, whereas non-critical (LO) jobs may even be dropped.
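Conditions 1 and 2 can be stated as a small executable check. The following is an illustrative sketch only: the task fields and the per-release trace encoding (completion times and actually used execution times) are our own, not taken from [20] or from the BIP model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "HI" or "LO"
    wcet_lo: float     # conventional WCET estimate
    wcet_hi: float     # certification-level WCET
    period: float
    deadline: float    # relative deadline


def feasible(tasks, completion, exec_time):
    """Check Conditions 1 and 2 for one synchronous release.

    completion[name] is the finish time (None if the job was dropped);
    exec_time[name] is the execution time the job actually used.
    """
    overrun = any(exec_time[t.name] > t.wcet_lo for t in tasks)
    for t in tasks:
        done = completion[t.name]
        if not overrun:
            # Condition 1 (normal mode): every job meets its deadline.
            if done is None or done > t.deadline:
                return False
        elif t.criticality == "HI":
            # Condition 2 (degraded mode): HI jobs still meet their
            # deadlines; LO jobs may be dropped.
            if done is None or done > t.deadline:
                return False
    return True
```

A schedule is feasible when `feasible` holds for every possible trace, which is what the scenario-based verification discussed later establishes.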
Figure 3 shows the structure of a BIP model for a simple instance of this model consisting of two tasks. In BIP, components communicate via 'ports' (circles and triangles in the figure), which label the transitions of the automata that participate in interactions. To show which ports take part in the same interaction, we connect them with lines.
Every periodic task is placed in the context of an 'application' consisting of three components: the Source, the (relative-deadline) Sink and the task itself. The Source has a clock variable that advances automatically with the passage of time (like any clock in timed automata). We do not show the internal details of the Source, but its behavior is simple: when the value of its clock reaches the period of the corresponding task, the Source executes an interaction at port 'Job_in' and resets the clock; as a result, 'Job_in' is executed periodically. The meaning of 'Job_in' is that a new job starts: it has to be executed by the corresponding task and finished by the task's relative deadline with respect to the arrival time of 'Job_in', and the deadline is checked at the Sink. Thus, both the task and the Sink have to be notified of 'Job_in', and indeed both have a port connected to it.
Figure 4 zooms into the BIP model of the task used in this example. The task has three groups of ports (shown on the left). The first group is for starting and finishing a job; it also includes a port that controls the mode in which the current job runs. The second group consists of two ports, Read and Write, to read the input data at the beginning of the job and write the output data at the end. The third group is for obtaining and releasing the CPU resource from the scheduler of the core where the task runs. A triangle next to a port indicates that the given component (here, the task) takes the initiative to perform an interaction, and the other components connected to it are supposed to be ready to participate whenever it arrives. For example, it is the task that takes the initiative to ask for a CPU. Otherwise, the port is marked by a thick dot.
A task initially starts in a state where it is ready to receive a 'Job_in' immediately. The task then asks the scheduler for a CPU. Having obtained the CPU, the task goes into state 'begin' and clock 'x' is reset to 0. This clock models the execution cycles consumed by the task on the CPU. Whenever the task is preempted by another task, the clock is frozen; the task asks for the CPU and waits again, and when it gets the CPU back, clock 'x' is resumed. To facilitate modeling shared-resource conflicts in future work, the job execution between the 'begin' and 'end' states follows the so-called 'superblock' model described in [21]. In line with that work, we split the job execution into subsequent phases that may have different access patterns to the shared resources. In the typical case described here we assume three phases: first the task reads all input data from shared to local memory (the 'Read' transition), then it executes ('Execute'), and then it writes the output data from local to shared memory ('Write').
Let us now examine the 'Execute' phase. In this example, only this phase is assumed to consume a non-negligible number of execution cycles, to support mode changes and preemption. The number of execution cycles is bounded by $WCET(scenario)$, which takes one of two possible values, $WCET_{LO}$ or $WCET_{HI}$, depending on the execution scenario selected during the schedulability analysis (see the next subsection). When clock 'x' reaches the WCET value of the current scenario, the task releases the CPU. It can also be forced to release the CPU if the mode, set through the interaction UpdateMode, implies that the task should be dropped (i.e., forced to complete). As the figure shows, UpdateMode changes the variable 'drop' depending on the variable 'mode' communicated from outside the task at this interaction. 'drop' is a Boolean data variable: if it is set to true, the task must finish the current job urgently at the next interaction. This allows LO tasks to be dropped to free the CPU for HI tasks, so that the latter do not miss their deadlines. When a task is dropped, the mode of its Sink component is updated so that it does not expect a timely Job_out from the task.
The task component illustrates the basic elements of real-time BIP used to construct BIP components: the clocks (e.g., 'x'), the states (e.g., 'ini'), the interactions enabled depending on conditions (e.g., 'Release_CPU' depends on 'drop'), and the data variables (e.g., 'drop'). The other components, such as Source, Sink and Scheduler, are composed of the same elements; we skip their details for space reasons.
Let us come back to the BIP implementation of our two-task scheduling example, given in Figure 5, and explain how mixed-critical scheduling is reflected in this BIP model. Every task is equipped with a component called Observer, which has its own analogue of the task's clock 'x' and checks whether the clock goes beyond $WCET_{LO}$. According to Condition 2, in this case Task 2, which is a LO task, is not obliged to meet its deadline and can in fact be dropped. When one of the two Observers in the figure reports to the Resource Manager component that the $WCET_{LO}$ budget is violated, the Resource Manager goes from state NORMAL to DEGRADED. In this state, the Resource Manager updates the mode of Task 2 so that it is immediately dropped.
The scheduler in this example is a fixed-priority scheduler, which uses fixed per-task priorities computed by applying the so-called Audsley approach to the mixed-critical task set [20]. According to this approach, one task has a higher priority than the other, and the priority assignment takes into account the periods, deadlines, WCETs and criticality levels of the tasks. The scheduler also monitors the current (work-)load of the processor. When the processor is idle (i.e., waiting for jobs to arrive without any task requesting the CPU), this is reported to the Resource Manager so that it can go back to NORMAL mode if it is in DEGRADED mode. Thus, dropping the low-criticality Task 2 stops when the conditions permit it.
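As a rough illustration of how the scheduler, the Observers and the Resource Manager interact, the following is a simplified unit-time, single-core sketch of one synchronized release under preemptive fixed priorities. It is our own approximation, not code generated from the BIP model, and the numeric parameters used with it are partly assumed.

```python
def simulate(tasks, budget, horizon):
    """Unit-time, single-core, fixed-priority sketch of one synchronized
    release. tasks is a list of (name, criticality, wcet_lo) tuples in
    priority order (highest first); budget[name] is the execution time the
    job uses in this scenario. Returns completion times; None means the
    job was dropped in DEGRADED mode."""
    remaining = dict(budget)
    executed = {name: 0 for name, _, _ in tasks}
    completion = {name: None for name, _, _ in tasks}
    mode = "NORMAL"
    for now in range(horizon):
        # Observer + Resource Manager: a job exceeding its WCET_LO budget
        # switches the system to DEGRADED mode, dropping pending LO jobs.
        if mode == "NORMAL" and any(
                executed[n] >= lo and remaining[n] > 0
                for n, _, lo in tasks):
            mode = "DEGRADED"
            for n, crit, _ in tasks:
                if crit == "LO" and remaining[n] > 0:
                    remaining[n] = 0   # dropped; completion stays None
        # Scheduler: the highest-priority pending job runs one time unit.
        for n, _, _ in tasks:
            if remaining[n] > 0:
                remaining[n] -= 1
                executed[n] += 1
                if remaining[n] == 0:
                    completion[n] = now + 1
                break
    return completion
```

For instance, with a HI task of $WCET_{LO}$ budget 2 ahead of a LO task of budget 4 (hypothetical numbers), a scenario in which the HI job runs for 8 units switches the sketch to DEGRADED and the LO job is dropped, while a scenario within the LO budgets completes both jobs.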
B. Schedulability Analysis
In Figure 5 we show the WCETs for one of our experiments. Here T1 is a HI task, T2 is a LO task, and D denotes the relative deadline.

Fig. 5. WCET of our example
The tasks in this example have different deadlines¹ but, for simplicity of presentation, the same period, and they are perfectly synchronized with each other. By the beginning of each period the CPU is idle, so the system returns to exactly the same state as at the start. Thus, to exhaustively study all possible scenarios of this example it is enough to consider only what happens within one period. We use this fact and the schedulability verification proposed in [2] to demonstrate verification by simulation, showing that the simulation tools (already available in the BIP framework) form an important first step towards verification under more general and realistic assumptions.
Consider the two scenarios in which the high-criticality task uses execution time in the interval $[0, WCET_{LO}]$ or in $(WCET_{LO}, WCET_{HI}]$. The low-criticality task may only choose from the interval $[0, WCET_{LO})$, because when it exceeds this budget a violation is reported and it is dropped. One can show that in every scenario the fixed-priority schedule is time robust (for classical schedules this is known; mixed-critical systems deviate from classical schedules in that LO jobs may be dynamically dropped). From this it follows that to verify the time safety (i.e., the schedulability) of this example it is enough to simulate the schedule with the upper bound of each scenario, $WCET_{LO}$ and $WCET_{HI}$ (see also Lemma 1 of [2]).
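For this two-task example, the scenario check over upper bounds can be written down directly. With one job per task and synchronous release, preemptive fixed-priority execution reduces to running the jobs in priority order, so each scenario is a single accumulation of execution times. The text fixes T2's $WCET_{LO} = 4$, T2's deadline $= 8$ and T1's $WCET_{HI} = 8$; T1's $WCET_{LO} = 2$ and deadline $= 10$ are our own assumptions for illustration.

```python
from itertools import product

# Hypothetical parameters; only T2's WCET_LO/deadline and T1's WCET_HI
# are quoted in the text, the rest are assumed for illustration.
T1 = dict(name="T1", crit="HI", wcet_lo=2, wcet_hi=8, deadline=10)
T2 = dict(name="T2", crit="LO", wcet_lo=4, wcet_hi=4, deadline=8)


def schedulable(order):
    """Check every scenario at its WCET upper bound (cf. Lemma 1 of [2])
    for one synchronous release executed in priority order."""
    for choice in product((False, True), repeat=len(order)):  # True = overrun
        t, degraded = 0.0, False
        for task, overruns in zip(order, choice):
            if task["crit"] == "LO":
                if degraded:
                    continue                   # job dropped, no obligation
                t += task["wcet_lo"]           # a LO overrun is cut at LO
                if overruns:
                    degraded = True
                    continue                   # dropped on its own overrun
            else:
                t += task["wcet_hi"] if overruns else task["wcet_lo"]
                degraded = degraded or overruns
            if t > task["deadline"]:
                return False                   # a non-dropped job misses
    return True
```

With these numbers, giving T1 the higher priority passes every scenario, while giving T2 the higher priority fails in the scenario where T1 uses its $WCET_{HI}$.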
Thus, to verify the example of Figure 5 it is enough to run the BIP simulation for the duration of two periods, trying a different scenario in each period. With our BIP model, we did experiments with different parameter settings of the tasks, including cases with no failure and cases where, due to an error in deployment (a wrong priority assigned to the tasks), a failure occurs (a deadline miss of a HI task or an unexpected miss of a LO task). If T2 is assigned the higher priority, it executes first and meets its deadline of 8, because it takes at most 4 units to execute. However, in the scenario where T1 runs for its $WCET_{HI}$, T1 misses its deadline (completing at time $4+8=12$), which is a failure (a wrong priority assignment by the deployment algorithm). If instead T1 is assigned the higher priority, the worst that can happen is that, in the scenario where T1 runs for its $WCET_{HI}$ and thus violates its $WCET_{LO}$, task T2 is dropped (by the RT Manager, as explained above) and hence does not complete by its deadline. But this is an acceptable degradation of a LO task, which is allowed in this scheduling problem.

---

¹ Different deadlines make this problem NP-hard.
The scalability of our methodology is supported by the theoretical results of [2], where it is shown that the scheduling problem is time robust and that only a polynomial number of experiments is necessary to ensure schedulability over all possible scenarios.
IV. DISCUSSION AND FUTURE WORK
A possible direction of future work is to generalize the 'time-robust per scenario' schedulability analysis from this example to more general cases. We have explained the reasoning for two tasks with equal periods and two criticality levels, but it can in fact be generalized to more tasks with different periods and more criticality levels. In this case one can still formulate verification by (possibly very lengthy) simulation using current scheduling theory. However, if the tasks have unknown relative arrival times inside each period, or are sporadic, then this is no longer trivial. Examples of mixed-critical scheduling policies for this case are EDF-VD [22] and the demand-based approach [23]; however, they provide schedulability conditions for their specific scheduling algorithms, and we are not aware of any generic schedulability verification procedure. Modeling in BIP potentially opens possibilities for using formal verification methods for timed automata in such cases. Figuring out how this verification can be done in BIP is an interesting direction for future work.
Also, because the goal of our project is to study multicore systems and resource conflicts, we will not always deal with time-robust systems, so methodologies more advanced than simulation are required, such as compositional verification and (for lower criticality levels) statistical model checking.
Extending the corresponding BIP verification methodologies to real-time BIP is an important direction of work. We are also interested in extending our scheduling algorithm [24] in the same way as the schedulability analysis.
To demonstrate the effectiveness of our approach, we also plan to apply the proposed methodology to a real-life avionic application as a case study.
REFERENCES
Semantic Interoperability of Heterogeneous Semantic Resources
Catarina Ferreira da Silva, Lionel Médini, Samer Abdul Ghafour, Patrick Hoffmann, Parisa Ghodous, Celson Lima
HAL Id: hal-01503829
https://hal.archives-ouvertes.fr/hal-01503829
Submitted on 19 Nov 2017
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Catarina Ferreira Da Silva, Lionel Médini, Samer Abdul Ghafour, Patrick Hoffmann, Parisa Ghodous
Lyon Research Center for Images and Intelligent Information Systems
Claude Bernard Lyon 1 University,
Villeurbanne, France
Celson Lima
Centre Scientifique et Technique du Bâtiment, Département TIDS – Technologies de l’Information et Diffusion du Savoir, CSTB BP209, 06904 Sophia Antipolis, France
Abstract
This paper presents a three-step approach to enhance interoperability between heterogeneous semantic resources. Firstly, we construct homogeneous representations of these resources in a pivot format, namely OWL DL, with respect to the semantics expressed by the original representation languages. Secondly, mappings are established between concepts of these standardised resources and stored in a so-called articulation ontology. Thirdly, an approach for ranking those mappings is suggested in order to best fit users’ needs. This approach is currently being implemented in the Semantic Resources “Interoperabilisation” and Linking System (SRILS). The mapping results as well as the work to be done are discussed.
Keywords: heterogeneous semantic resources, ontologies, interoperability, mapping, ranking, description logics, subsumption algorithms, OWL.
1 Introduction
The growing use of the Web for collaborative and distributed work, associated to the standardisation of the languages and tools related to the Semantic Web, have led to the availability of multiple terminological and syntactical resources
1 {cferreir,lmedini, sabdulgh, phoffman, ghodous}@liris.cnrs.fr
in numerous professional domains using the Internet. Different types of resources, such as taxonomies, vocabularies, thesauri or ontologies, have been elaborated according to different existing standards. They treat different topics and approach them from different viewpoints, techniques and objectives. The existence, availability and complementarity of these resources on the Web initiated new usages which could benefit from their simultaneous use, in applications as varied as electronic commerce, information retrieval or collaborative design.
The simultaneous use of those resources requires them to be interoperable. Interoperability – i.e. “the ability to exchange information and use the information which has been exchanged” – can be treated at several levels, such as the technical and semantic levels. Basically, achieving technical interoperability is a matter of enabling distributed applications to work, taking into account syntactical and structural heterogeneity between different resources. However, this can cause serious “misinterpretations” of data and lead to misunderstandings between users, calculation errors, or even system failure, depending on the application. At an upper level, semantic interoperability consists in preventing these problems from happening by taking into account the semantics associated to the data and ensuring that exchanged information shares the same meaning. To achieve both syntactical and semantic interoperability, we herein suggest an “interoperabilisation” approach for heterogeneous semantic resources (SR), based on three steps. Firstly, we make the SR representation formats homogeneous, considering the expressiveness of the source and target languages to preserve the meaning of the resources for the further steps. Secondly, we align those resources by mapping SR entities. Thirdly, to rank the mappings obtained in the previous stage by relevance, we suggest a personalised and contextualised measure of these mappings.
This paper first presents a state of the art of different existing interoperability approaches between heterogeneous SR. Then, our approach is presented and its different stages are detailed. The architecture and implementation of the SRILS system, which partially implements this approach, are described. An application example taken from the construction field is presented. Finally, the mapping results, as well as the work to be done are discussed.
---
2 Existing interoperability approaches between semantic resources
Heterogeneity of SR can be considered at different levels, such as the representation format level (syntax), the data organisation level (structure) and the level of different points of view (semantics). Some authors have treated these levels separately, even though they are not easy to split. Chaudri et al [9] seek solutions for resolving semantic ambiguity at the syntax level. Mitra et al [22] describe several types of structural semantic ambiguity. Bowers et al [6] show that using different syntactic or structural formalisations is the source of various errors when transforming one representation into another. In the rest of this section, we summarise background work regarding standardisation of representation formats and alignment of SR.
2.1 Standardisation of representation formats
In order to make heterogeneous SR interoperable, Kalfoglou et al [17] consider that standardisation, by translating the resources into a common representation language, is often a necessary preliminary stage. The choice of the target language is essential because it should be expressive enough to represent explicit and precise knowledge. On the other hand, it is important to compromise between expressiveness and complexity. Jannink et al [16] propose an algebra based on mapping rules in order to achieve that normalisation. The OntoMorph system [8] makes it possible to specify syntactic transformations using syntax rewriting mechanisms, rules based on pattern matching, and models describing how to apply those syntactic rules. Normalisation treats the syntactic heterogeneity without losing the expressiveness of the representation languages.
Databases and SR are both knowledge representations, although databases use fewer modelling primitives. Database researchers face the problem of finding correspondences between different schemas, just as with SR. A large number of SR have been embedded in relational databases since the 1980s. The logical model of databases makes it possible to organise vast amounts of data and to formally express relationships among data. The semantics of these resources are implied by the table structures and integrity constraints. However, there is no semantically well-defined format representing all integrity constraints in the context of an automatic transformation. Consequently, a conversion from a database to an OWL ontology is still an ad hoc transformation, dependent on each database schema [4]. Calvanese et al [7] suggested an approach to transform a database into a set of description logics (DL) axioms. This work is a preliminary stage for developing a generic method to convert a relational database into an OWL ontology.
2.2 Alignment of semantic resources
An alignment can be defined [5] as a set of mappings expressing a correspondence\(^3\) between two entities of different ontologies. Schema matching [25], like SR alignment, requires finding related entities (i.e. concepts and properties) in two or more knowledge structures [23]. For this, several methods have been used, among which terminological, structural, extensional (i.e. instance-based) and semantic methods. These methods come from different disciplines such as data analysis, machine learning, language engineering, statistics and knowledge engineering. Their applicability depends both on the type of SR features (e.g. labels, structures, instances, semantics) to be compared and on the expected types of results. Techniques to find inter-ontology relations most frequently rely on instance and schema matching, concept matching, and matching of structural elements of the source ontologies [13]. Kalfoglou and Schorlemmer [17] survey a set of frameworks, methods and tools related to ontology mapping, such as the ones presented in the next paragraphs.
In the machine-learning discipline, GLUE [11] searches mappings over instances of classes. Given two ontologies, for each concept in one ontology, GLUE finds the most similar concept in the other ontology using probabilistic definitions of several practical similarity measures. It exploits information about the concept instances – such as the words frequency in the text value of instances, the instance names, the value formats, or the characteristics of value distributions – and the taxonomic structure of the ontologies. Nevertheless, machine-learning techniques work better when large sets of instances are available.
The OBSERVER [21] system helps users define mappings between concepts of two ontologies by finding pairs of related concepts. It uses the data structures underlying the domain-specific ontologies and the synonymy, hyponymy and hyperonymy relations to detect linguistic matches between concepts. Once the mappings are defined, users ask DL-formatted queries about terms of one of the ontologies, and the system expands the query to the terms of the other ontology. For this, users have to be familiar with DL constructors.
Euzenat [12] proposes an alignment API that supplies OWL-compliant functions for helping programmers automatically map ontologies. This API currently uses term-matching techniques that cannot guarantee valid alignments. For instance, matched terms can be homonyms (and have different meanings) or be semantically close without being complete synonyms.
\(^3\) A correspondence is constituted of a relation and a trust assessment.
2.3 Semantic methods for ontologies alignment
Notwithstanding these efforts, we think that these approaches could benefit from techniques that take into account the semantics associated to concept definitions. DL-based techniques [2] are appropriate for that, since they rely on the explicit and formal semantics represented by ontologies. When used for comparing ontologies, they ensure that the original semantics of the SR entities are preserved, and they provide an explicit and formal interpretation of both the entities being compared and the produced relations. We present in this section a basic DL algorithm and a tool currently used in these methods. More details about the method we used in this work are provided in Section 3.2.
Standard DL techniques apply subsumption algorithms to establish relationships between concepts. Tableau algorithms are a class of subsumption algorithms that first expand each ontology: each occurrence of a concept name on the right-hand side of a definition is replaced by the concept it stands for. This recursive process of dependency-eliminating substitutions (known as unfolding) is repeated until no defined name remains; it terminates provided the set of definitions contains no cycle. An ontology is thus unfoldable when all its axioms are unique, acyclic definitions. Even if the expanded ontology can grow exponentially compared to its original size, the unfolding process makes it possible to reduce the reasoning process to the computation of subsumption and satisfiability.
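The substitution process described above can be sketched in a few lines of code. This is a minimal illustration under assumptions (the tuple encoding of concepts and the function name are ours, not ONDIL's); it presumes the Tbox is acyclic, as required for unfoldable ontologies.

```python
# Minimal sketch of unfolding an acyclic Tbox: every occurrence of a defined
# concept name is replaced by its definition until only primitive names remain.
# Concepts: a plain string is a concept name; constructors are nested tuples
# such as ('and', c1, c2), ('some', role, c), ('all', role, c), ('not', c).
def unfold(c, tbox):
    if isinstance(c, str):                       # a concept name
        return unfold(tbox[c], tbox) if c in tbox else c
    if c[0] in ('all', 'some'):                  # (op, role, filler): role untouched
        return (c[0], c[1], unfold(c[2], tbox))
    if c[0] == 'not':
        return ('not', unfold(c[1], tbox))
    return (c[0],) + tuple(unfold(a, tbox) for a in c[1:])   # 'and' / 'or'

# Example: Parent is a defined name, so it disappears after unfolding.
tbox = {'Parent': ('and', 'Person', ('some', 'hasChild', 'Person'))}
expanded = unfold(('and', 'Parent', 'Rich'), tbox)
```

On a cyclic set of definitions this recursion would not terminate, which is exactly why unfoldability requires acyclic definitions.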
A tableau is a graph which represents a model, with nodes corresponding to individuals (elements of the domain of interpretation $\Delta$). Tableau algorithms try to prove the satisfiability of a concept $D$ by constructing an interpretation model $I$ in which $D^I$ is not empty (i.e. constructing a tableau starting with a single individual and then inferring the existence of additional individuals or additional constraints on individuals). This kind of reasoning uses a refutation-style proof [15]: $C$ is subsumed by $D$ if it can be shown that the existence of an individual $x$ that is an instance of $C$ ($x \in C^I$) but is not an instance of $D$ ($x \notin D^I$) is logically inconsistent. This corresponds to testing the logical (un)satisfiability of the concept $C \sqcap \neg D$ (i.e. $C \sqsubseteq D \iff C \sqcap \neg D$ is not satisfiable). The inference mechanism consists in applying a set of consistency-preserving transformation rules for $\mathcal{ALC}$, known as $\sqcap$-rule, $\sqcup$-rule, $\exists$-rule and $\forall$-rule until no more rules apply; see [15] for details. The algorithm terminates when the graph is complete (no further inference is possible) or when contradictions have been revealed. A concept is unsatisfiable if the graph obtained this way contains a contradiction (called a “clash”) and satisfiable (the graph is consistent) otherwise.
---
4 A concept $C$ is satisfiable with respect to a Tbox if there exists an interpretation model $I$ of the Tbox such that $C^I$ is nonempty
The FaCT\(^5\) inference engine \([14]\) applies DL techniques and allows reasoning over concept, role and attribute descriptions, as well as managing concept hierarchies based on the subsumption relation. FaCT uses a very expressive DL language, \(SHIQ = \{ALC + \text{number restriction} + \text{role hierarchy} + \text{inverse role} + \text{transitive role}\}\)\(^6\), able to capture a large part of the semantics of knowledge models. This language is equipped with effective reasoning techniques that are sound and complete with respect to the semantics.
The ONDIL system \([18]\) reuses the inference mechanisms of FaCT, and also supports the management of several Tboxes\(^7\).
ONDIL includes three modules, namely ontology management, mediator, and inference engine. The latter uses two kinds of subsumption algorithms: tableau-based algorithms for standard inferences and structural algorithms for non-standard inferences \([28]\). The inference engine uses pairs of ontologies to deduce new knowledge that essentially consists in relations between ontological concepts.
3 Interoperabilisation approach
The first stage of our approach consists in converting the SR to a common format; it precedes any other processing stage. The next stage is to identify correspondences enabling alignment of the SR. For this, we suggest a hybrid approach combining mapping and contextualisation of the relations between entities (concepts and roles).
3.1 Conversion
Standardisation of the representation formats makes it possible to solve syntactical issues, as well as part of the structural heterogeneities and of the issues that come from the different expressiveness of the encoding languages. The purpose of this step is to standardise dissimilar formats of SR representations while maintaining their semantics. We mainly deal with three types of SR: taxonomies, non-tree-based graphs, and ontologies. The conversion procedure is based on an explicit distinction between different levels of knowledge representation, namely meta-model, model, and data. A meta-model specifies the structure of a knowledge representation language and clarifies its expressiveness. The model level represents the data structure for each application. This level contains classes, attributes attached to classes, and possible relations between various classes. The instance level gathers the data.

---

\(^5\) Fast Classification of Terminologies (FaCT) was developed to assess the feasibility of using optimised subsumption algorithms based on the tableau technique \([2]\).

\(^6\) The \(\mathcal{ALC}\) description logic extends \(\mathcal{AL}\) (Attribute Language) with the negation of arbitrary concepts; it was introduced by Schmidt-Schauß et al \([28]\). The other languages of this family are also extensions of \(\mathcal{AL}\) \([2]\).

\(^7\) A Tbox, or terminology, is a finite set of definitions such that for every atomic concept \(C\) there is at most one axiom in the Tbox whose left-hand side is \(C\).
In order not to develop specific tools for each of the different representation formats, as well as to express the mapping rules in an appropriate one, we chose to use a common representation language that is sufficiently expressive to represent all the given types of SR. We chose OWL\(^8\), and more precisely its sub-language based on Description Logics, OWL DL, for two main reasons. Firstly, it is part of the W3C recommendations related to the Semantic Web, and it takes advantage of previous work on XML syntax, RDF and RDF Schema (RDFS) formal semantics. Secondly, OWL DL allows maximum expressiveness while retaining computational completeness (all conclusions are guaranteed to be computable) and decidability (all computations end in finite time) \([10]\). An OWL (Lite or DL) ontology corresponds to a DL Tbox together with a role hierarchy, describing the domain in terms of classes – corresponding to concepts – and properties – corresponding to roles \([3]\).
The conversion process aims at converting different SR into OWL while maintaining their intrinsic semantics. It consists in (1) elaborating a meta-model representing the constructs and expressiveness of the pivot language OWL\(^9\); (2) designing meta-models of the available SR representation languages\(^{10}\), defined as restrictions of the OWL DL meta-model; (3) converting the SR contents into OWL DL. The latter step is described in the following paragraphs for the three general SR categories considered herein.
**Taxonomies (XML):** (1) The concepts of a taxonomy are treated as OWL classes (syntax: \texttt{owl:Class}). (2) The attributes attached to these concepts are treated as properties in the OWL ontology (\texttt{owl:Property}). The values of the attributes can be either literals (represented with \texttt{rdfs:Literal}) or resources (represented as OWL classes); in the latter case, the class is tied to the previously defined property by another particular property (\texttt{rdfs:range}). (3) The subsumption relationship between concepts is represented in OWL using the \texttt{rdfs:subClassOf} constructor.
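Rules (1) and (3) above can be sketched as follows. This is a minimal illustration, not the SRILS converter; the nested-dict input format and the example namespace are assumptions made here.

```python
# Emit N-Triples for a toy taxonomy: each concept becomes an owl:Class (rule 1)
# and each parent/child link becomes an rdfs:subClassOf triple (rule 3).
OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
BASE = "http://example.org/taxonomy#"            # assumed namespace

def taxonomy_to_owl(taxonomy, parent=None):
    """taxonomy: nested dict {concept: {subconcept: {...}}}. Yields N-Triples lines."""
    for name, children in taxonomy.items():
        cls = f"<{BASE}{name}>"
        yield f"{cls} <{RDF}type> <{OWL}Class> ."
        if parent is not None:
            yield f"{cls} <{RDFS}subClassOf> <{BASE}{parent}> ."
        yield from taxonomy_to_owl(children, parent=name)

triples = list(taxonomy_to_owl({"Door": {"FireDoor": {}, "SlidingDoor": {}}}))
```

The recursion mirrors the tree shape of the taxonomy: each subtree produces one `owl:Class` declaration per concept plus one `rdfs:subClassOf` link to its parent.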
**Non-tree based graphs (RDF):** Graphs represent knowledge by linking concepts with complex relations. RDF is a graph-based language based on statements (Subject, Predicate, Object). The conversion of a graph into OWL is achieved as follows: (1) the subject is represented in OWL by defining it as a class. (2) An object is represented either as a class or as a literal element, depending on its type. (3) A predicate is represented as an object property (\texttt{owl:ObjectProperty}) if the related object is a resource, or as a datatype property (\texttt{owl:DatatypeProperty}) when the related object is a literal. \texttt{rdfs:domain} and \texttt{rdfs:range} are used to specify a property: the former specifies the subject, and the latter defines the object of the statement.

---

8 Ontology Web Language: \url{http://www.w3.org/2004/OWL/}

9 The OWL meta-model is represented in two UML class schemas: a view of the classes and one of the properties. It is presented in \cite{1}.

10 Schemas of these meta-models are also available in \cite{1}.
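The predicate rule above can be sketched as follows; this is an illustrative assumption (function name, the naive `http` test for resources), not the SRILS code.

```python
# A predicate becomes an owl:ObjectProperty when its object is a resource and
# an owl:DatatypeProperty when it is a literal; rdfs:domain takes the subject
# and rdfs:range takes the object (or rdfs:Literal for a literal object).
def predicate_declaration(subject, predicate, obj):
    is_resource = isinstance(obj, str) and obj.startswith("http")
    kind = "owl:ObjectProperty" if is_resource else "owl:DatatypeProperty"
    rng = f"<{obj}>" if is_resource else "rdfs:Literal"
    return [f"<{predicate}> rdf:type {kind} .",
            f"<{predicate}> rdfs:domain <{subject}> .",
            f"<{predicate}> rdfs:range {rng} ."]

# A literal object (a number) yields a datatype property declaration.
decl = predicate_declaration("http://ex/Door", "http://ex/height", 2.1)
```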
\textbf{Ontologies (RDFS and DAML + OIL):} Ontologies extend graph-based formalisms by providing high-level constructs that make the semantics of the graphs explicit (for instance, \texttt{owl:disjointWith} specifies that two classes are disjoint). As OWL is an extension of RDFS, it inherits characteristics from RDFS, and any document which is valid according to an RDFS schema is also valid as an OWL Full ontology. However, it cannot be assumed to be an OWL DL document. Consequently, converting an RDFS document to OWL DL requires distinguishing the differences between OWL Full and OWL DL \cite{24}. Like OWL, DAML+OIL is an ontology language built on RDF and RDFS and based on Description Logics; among the three sublanguages of OWL, OWL DL has the semantics closest to that of DAML+OIL.
3.2 Alignment
This section explains how we apply a semantic method based on DL techniques to discover mappings between concepts of the different homogeneous SR. For this, we use the ONDIL system that can process several ontologies at a time, as well as axioms\footnote{Axioms are previously defined relationships between entities of the two ontologies, that the inference engine can use during the satisfiability test step.} between their respective concepts.
Firstly, in order to ensure that the ontologies are consistent, they are separately unfolded. This is done using the ONDIL standard inference services, based on a \textit{tableau} algorithm and on the \texttt{ALC} language. The mapping search process takes expanded concept definitions as inputs.
Let \( o \) and \( o' \) be a pair of ontologies and \( A(x) \) a set of axioms given as inputs. The ontology reasoning services of ONDIL use a tableau algorithm to identify subsumption relationships between concepts of \( o \) and \( o' \), as shown in the following generic example. Let \( C_1 := \forall r. A \sqcap \exists r. B \) be a concept of \( o \), and \( C_2 := \exists r. B \) be a concept of \( o' \). We are now going to test if \( C_1 \sqsubseteq C_2 \iff C_1 \sqcap \neg C_2 \sqsubseteq \bot \).
\[
C_1 \sqcap \neg C_2 \equiv \forall r. A \sqcap \exists r. B \sqcap \neg \exists r. B
\]
applying De Morgan’s law \( \neg \exists r. B \iff \forall r. \neg B \), we obtain
\[
C_1 \sqcap \neg C_2 \equiv \forall r. A \sqcap \exists r. B \sqcap \forall r. \neg B
\]
\[
C_1 \sqcap \neg C_2 \equiv \forall r. (A \sqcap \neg B) \sqcap \exists r. B
\]
ONDIL was modified to accept as inputs two ontologies and the set of axioms. These inputs constitute the knowledge processed by the inference engine module of ONDIL to construct the graph of the above definition. This is done by applying transformation rules as follows.
Let the graph be a directed graph in which each node $x$ is labelled with a set of concepts ($\mathcal{L}(x) = \{C_1, \ldots, C_n\}$) and each edge $(x, y)$ is labelled with a role ($\mathcal{L}(x, y) = r$). When a concept $C$ is in the label of a node $x$ ($C \in \mathcal{L}(x)$), it represents a model in which the individual corresponding to $x$ is in the interpretation of $C$. When an edge $(x, y)$ is labelled $r$, it represents a model in which the tuple corresponding to $(x, y)$ is in the interpretation of $r$.
From the $\sqcap$-rule, we add to $\mathcal{L}(x)$ the two conjuncts of the definition, i.e. $\forall r . (A \sqcap \neg B)$ and $\exists r . B$. From the $\exists$-rule, we create a successor node $y$ with $r(x, y)$ and $B(y)$; the $\forall$-rule then propagates $A \sqcap \neg B$ to $y$, and the $\sqcap$-rule adds $A(y)$ and $\neg B(y)$. We do not need to apply further rules because a clash is found: $\{B(y), \neg B(y)\} \subseteq \mathcal{L}(y)$ results in a contradiction. Thus $C_1 \sqcap \neg C_2 \sqsubseteq \bot$ (i.e. $C_1 \sqcap \neg C_2$ is not satisfiable) and $C_1 \sqsubseteq C_2$: a subsumption relation is detected between the concepts $C_1$ and $C_2$.
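The whole refutation test can be reproduced in a few dozen lines. The following is a hedged sketch of a TBox-free $\mathcal{ALC}$ tableau (the concept encoding and all names are our assumptions, not ONDIL's implementation); it omits the blocking needed for cyclic Tboxes, which unfolded inputs do not require.

```python
# Hypothetical minimal ALC tableau (TBox-free, so no blocking is needed).
# Concepts: an atom is a plain string; ('not', A) wraps atoms only (NNF);
# ('and', c1, c2), ('or', c1, c2), ('some', role, c), ('all', role, c).
def neg(c):
    """Negate a concept, pushing negation inward to stay in NNF."""
    if isinstance(c, str):
        return ('not', c)
    op = c[0]
    if op == 'not':
        return c[1]
    if op == 'and':
        return ('or', neg(c[1]), neg(c[2]))
    if op == 'or':
        return ('and', neg(c[1]), neg(c[2]))
    if op == 'some':
        return ('all', c[1], neg(c[2]))
    return ('some', c[1], neg(c[2]))              # op == 'all'

def satisfiable(concepts):
    """Saturate one node label with the conjunction/disjunction rules, check
    for a clash, then build one successor per existential constraint, carrying
    over the universal constraints on the same role."""
    concepts = set(concepts)
    changed = True
    while changed:
        changed = False
        for c in list(concepts):
            if isinstance(c, tuple) and c[0] == 'and':        # conjunction rule
                for part in (c[1], c[2]):
                    if part not in concepts:
                        concepts.add(part)
                        changed = True
            elif isinstance(c, tuple) and c[0] == 'or':       # disjunction: branch
                if c[1] not in concepts and c[2] not in concepts:
                    return any(satisfiable((concepts - {c}) | {d})
                               for d in (c[1], c[2]))
    for c in concepts:                                        # clash: {B, not B}
        if isinstance(c, str) and ('not', c) in concepts:
            return False
    for c in concepts:                                        # existential + universal
        if isinstance(c, tuple) and c[0] == 'some':
            succ = {c[2]} | {d[2] for d in concepts
                             if isinstance(d, tuple) and d[0] == 'all' and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True

def subsumed_by(c1, c2):
    """C1 is subsumed by C2 iff C1 and not-C2 is unsatisfiable (refutation test)."""
    return not satisfiable({c1, neg(c2)})

C1 = ('and', ('all', 'r', 'A'), ('some', 'r', 'B'))   # forall r.A and exists r.B
C2 = ('some', 'r', 'B')                               # exists r.B
```

Here `subsumed_by(C1, C2)` finds the same clash as the derivation above, while `subsumed_by(C2, C1)` fails because the successor labels stay consistent, matching the asymmetry of subsumption.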
Retrieved correspondences can be equivalences or non-equivalences. If it happens that $C_1 \sqsubseteq C_2$ and $C_2 \sqsubseteq C_1$, then both concept definitions are equivalent. Equivalences state that the interpretations of two concepts from two different SR are exactly equal. We call non-equivalences “semantic proximities”. These refer to mappings in which only part of the concepts of the SR is common, which is the case for subsumption and conjunction. Conjunction mappings are consequences of subsumption ones. Therefore, from input ontologies $o$ and $o'$, the axioms $C \sqsubseteq C'$ and $C \sqsubseteq C_1$ with $C, C_1 \in o$ and $C' \in o'$ allow us to state: $(C \sqcap C') \sqsubseteq (C_1 \sqcap C')$.
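The conjunction step follows from the monotonicity of intersection: for any interpretation \( I \),

\[
C \sqsubseteq C_1 \;\Longrightarrow\; C^I \subseteq C_1^I \;\Longrightarrow\; C^I \cap C'^I \subseteq C_1^I \cap C'^I \;\Longrightarrow\; (C \sqcap C') \sqsubseteq (C_1 \sqcap C').
\]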
3.3 Ranking
Identified mappings are delivered to users through client applications. Users are specialised in specific activities, and their requests may concern only a limited field of knowledge and particular tasks: their different contexts of work involve different intentions and needs [27]. A measure that consistently interprets the mappings according to the context of work should improve the relevance of the mapping ranking. Though this stage is not yet implemented, we present it here as part of our approach.
In order to take context into account when comparing mappings, we need reliable information about the users’ work and environments, as well as information about why the SR have been developed and which fields they cover. We intend to take advantage of a context model representing domains, tasks, etc. It will serve as a reference for situating all considered SR, as well as users’ profiles.
We define a fragment of a SR as a concept of this resource and all the concepts of this resource it subsumes. Each fragment is associated with zero, one or more domains, and is allotted as many “contextual vectors” (CV): these are sets of normalized weights depending on the relevance of the SR fragment content for the domain-specific criteria and tasks.
Similarly, we associate with a user as many “user domain vectors” (UDV) as there are domains she/he is interested in. The user instantiates these vectors once, depending on the significance she/he assigns to each criterion and on the importance of each task in her/his activity.
Let a user submit a query on a concept $C$ to retrieve all the semantically related concepts. Each concept belongs to a SR fragment and is thus attached its CV. We compare the CV of $C$ with every other CV by applying a specific measure to each criterion or task that appears in both CV, and we store the result in a “fragment comparison vector” (FCV).
Then we interpret these FCV according to the user’s domains: we weight them by computing, for every concept, a “User Domain Interpretation” (UDI) constituted of the UDV-weighted FCV and of a computed relevance of the concept for the corresponding domain. Each concept’s UDI is then ranked according to these valuations, and the concepts are sorted for each relevant interpretation of $C$. The outputs are the sorted list of these relevant interpretations with the corresponding concept rankings, and the remaining concepts for which no accepted interpretation held.
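Since this stage was not implemented at the time of writing, the following is only a plausible numeric sketch of the CV/UDV/FCV pipeline described above; the similarity measure, the aggregation, and all names are our assumptions.

```python
# FCV: per-criterion similarity between the query concept's contextual vector
# (CV) and a candidate's CV, on the criteria they share (here 1 - |difference|).
def fragment_comparison(cv_query, cv_candidate):
    shared = cv_query.keys() & cv_candidate.keys()
    return {k: 1.0 - abs(cv_query[k] - cv_candidate[k]) for k in shared}

# UDI-style score: the FCV weighted by the user domain vector (UDV),
# reduced to one relevance value per candidate concept.
def user_domain_interpretation(fcv, udv):
    return sum(udv.get(k, 0.0) * v for k, v in fcv.items())

cv_query = {"thermal": 0.9, "cost": 0.2}
udv = {"thermal": 1.0, "cost": 0.1}          # this user mostly cares about thermal criteria
candidates = {"Insulation": {"thermal": 0.8}, "Invoice": {"cost": 0.9}}
scores = {name: user_domain_interpretation(fragment_comparison(cv_query, cv), udv)
          for name, cv in candidates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

The point of the sketch is the shape of the computation: a candidate that matches the user's weighted criteria outranks one that matches only criteria the user does not value.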
4 The SRILS system
The development of SRILS has been motivated by an industrial need expressed by the Centre Scientifique et Technique du Bâtiment (CSTB), located in Sophia-Antipolis (France). We have used three different SR from the building and construction (B&C) domain. bcXML is a multilingual taxonomy of concepts, products and services developed in the eConstruct project [20]; this resource holds 3000 terms in six different European languages. The Edibatec dictionary covers several B&C domains, such as electrical or ventilation equipment. The e-Cognos ontology [19], developed at CSTB, contains 17 000 concepts and relations and covers several parts of the B&C domain. CSTB has also developed an ontology server, named e-COSer [19], that processes queries regarding concepts and relations of different SR and supplies high-level services to different users. In the context of the SRILS system,
\[12\] EDIBATEC dictionary, see http://www.edibatec.org/Accueil/Default4.htm
\[13\] eCOSer – eCognos Ontology Server, http://195.83.41.67/eCOSer/Login.jsp
we consider e-COSer as the client application.
SRILS relies on four modules and several types of resources (see Fig. 1). The external interface of the system is provided by the queries processing module and targets integration within a service-oriented architecture. The conversion module is in charge of translating heterogeneous SR into the pivot format. The alignment module performs the mapping search. The contextual ranking module, still in development, ranks the mappings by relevance. The modular architecture of SRILS makes it possible to emulate not-yet-developed modules in order to supply the expected services to the upper layer. The different kinds of SR used in SRILS are those containing the data to be aligned (original and converted SR), an “articulation ontology” where the mappings are stored, and the specific resources needed for the contextual ranking stage. Detailed descriptions of the different modules are beyond the scope of this paper. The next section presents an example of mapping search in the B&C SR presented above.
5 Alignment tests using the mapping search module
This section presents the first tests of the mapping search module, using the inference services of ONDIL (Sect. 3.2). The search is performed between two ontologies at a time. Retrieved correspondences can be equivalence relations or semantic proximities (mainly subsumption and conjunction, but also transitivity, which is implied by the subsumption relation). As inference is a time-consuming task that can take several minutes on large ontologies, the mapping search is carried out a priori in order to optimise processing time. After validation by domain experts, the mappings constitute the articulation ontology, which is queried by the queries handling module and will not change unless the SR are modified and the mapping search is reprocessed.
In order to first prove the correctness of the mapping method, we mapped each SR with itself. Obviously, mapping a SR with itself produces equivalences between identical concepts, and only that kind of equivalence. Subsumption and conjunction results are also obtained, but they give only redundant information, since if $A$ is equivalent to $B$, then $A \sqsubseteq B$ and $B \sqsubseteq A$.
A typical usage scenario is the following: a user submits a product-centred query about a concept of the bcXML taxonomy, and wants to retrieve documents related to that product. The articulation ontology is queried by the system, since it stores retrieved and validated mappings between the bcBuildingDefinition taxonomy (where the products are really defined) and the e-Cognos ontology (where the concepts that represent the products and that are used to index the documents are defined). We present hereafter a fragment of
the articulation ontology showing three examples of subsumption mappings retrieved by ONDIL between eCognos and bcXML SR.
<owl:SRILS-ArticulationOntology rdf:about="">
  <rdfs:label>mappings between eCognos and bcXML</rdfs:label>
  <owl:imports rdf:resource="http://195.83.41.67/eCognos"/>
  <owl:imports rdf:resource="http://195.83.41.68:8080/bcBuildingDefinition"/>
</owl:SRILS-ArticulationOntology>
Subsumption mappings are more numerous than equivalences. It is worth noticing that subsumption mappings can depend on the order in which the ontology pair is processed: the subsumption mappings of \((O_1, O_2)\) may differ from the subsumption mappings of \((O_2, O_1)\). This difference comes from the asymmetry of the subsumption relationship between two concepts. More precisely, a subsumption mapping \(C_1 \sqsubseteq C_2\) (where \(C_1 \in O_1\) and \(C_2 \in O_2\)) belongs to the set of mappings of \((O_1, O_2)\), while the set of mappings of \((O_2, O_1)\) may not contain the subsumption mapping \(C_2 \sqsubseteq C_1\). Equivalence mappings, however, are preserved.
6 Conclusion
This paper presents an approach to facilitate interoperability between heterogeneous SR, considering three heterogeneity levels: syntactic, structural and semantic. We apply different “interoperabilisation” techniques to tackle these heterogeneity levels. This approach is partially implemented in the SRILS middleware system: the two former levels are automatically processed. The system can convert taxonomies, graphs and ontologies into the OWL DL format, preserving the semantics and expressive power of the original encoding languages. Semantic “interoperabilisation” of the SR is achieved by retrieving mappings between entities of the produced ontologies, using an inference engine and Description Logics-based techniques.
We briefly present an approach for contextualising the retrieved mappings in order to rank them by relevance to users; this last stage is being implemented. The use of SRILS is illustrated by an application in the building and construction domain. We also consider testing other methods for discovering semantic alignments, using linguistic corpora to help find new mappings. Regarding the measurement of semantic proximity, we consider using fuzzy logic and probabilistic methods.
Acknowledgement
The authors wish to acknowledge that part of the work presented here is largely due to an intense teamwork, carried out in the eContent European FUNSIEC project, that is a Feasibility study for a UNified Semantic Infrastructure in the European Construction sector, see www.funsiec.org. Special thanks to Celson Lima and Chan Le Duc at CSTB.
ICASE REPORT NO. 86-67
PS: A NONPROCEDURAL LANGUAGE WITH
DATA TYPES AND MODULES
Maya B. Gokhale
Contract No. NAS1-18107
October 1986
INSTITUTE FOR COMPUTER APPLICATIONS IN SCIENCE AND ENGINEERING
NASA Langley Research Center, Hampton, Virginia 23665
Operated by the Universities Space Research Association
PS: A NONPROCEDURAL LANGUAGE WITH DATA TYPES AND MODULES
Maya B. Gokhale
Department of Computer and Information Sciences
University of Delaware
Newark, DE 19716
ABSTRACT
The Problem Specification (PS) nonprocedural language is a very high level language for algorithm specification. PS is suitable for nonprogrammers, who can specify a problem using mathematically-oriented equations; for expert programmers, who can prototype different versions of a software system for evaluation; and for those who wish to use specifications for portions (if not all) of a program. PS has data types and modules similar to Modula-2. The compiler generates C code.
In this paper, we first show PS by example, and then discuss efficiency issues in scheduling and code generation.
This research was supported in part by the National Aeronautics and Space Administration under NASA Contract Number NAS1-18107 while the author was in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA 23665, and in part by UDRF Grant LTR860114.
Keywords and Concepts: very high level language, equational specification, automatic program generation
1 Introduction
In this paper we introduce a very high level language for algorithm specification. In the Problem Specification (PS) language, the user expresses the algorithm as a set of equations. The PS compiler analyses the specification to determine an execution ordering, and generates a procedural program in the C language (under Berkeley Unix).
PS is intended for a wide user community, ranging from domain expert to expert programmer. For the domain expert, PS offers a mathematically oriented language in which to express the problem. For example, notation used to describe algorithms in numerical analysis can be transcribed with very slight syntactic modification directly into PS. In this paper, we show a PS module to do Gaussian Elimination and compare the PS equations with the algorithm description in a standard numerical analysis text.
At the other extreme, PS is a useful tool for the expert programmer in that it facilitates rapid prototyping. “Exploratory programming” is a technique recognized as essential to gaining an understanding of a new application area. In hardware this is referred to as breadboarding. PS is useful in breadboarding alternate approaches to a problem. Experience may be gained from using different versions, each generated from a different specification in PS. Use of procedural languages to actually code alternate approaches would be prohibitive in time. Use of AI languages would necessitate a complete rethinking of the problem for the production version.
Since data structures in PS are almost identical to Pascal or Modula data structures, it is easier to shift from the PS specification to an equivalent production program than from Lisp or Prolog.
In the continuum between nonprogrammer and expert, PS can be used to generate component modules of a system. We recognize that one language cannot solve all problems for all users. Certain classes of problems can be solved concisely and easily in PS. Others, which perhaps depend on idiosyncrasies of the operating environment, are better expressed in other languages. For example, a stream of characters can be converted to a stream of tokens by the *Lex* tool (or even *scanf*). Each token can then be used by a PS module in a computation.
The novel features of PS in comparison to similar nonprocedural languages [2, 4, 7-10, 12] are as follows:
- The language is strongly typed. Declarations follow most syntactic conventions of Pascal or Modula. The compiler enforces type checking and reports inconsistencies or incompleteness in type usage.
- The language is modular. We feel this feature is essential to the step-wise refinement of a problem solution. The basic program unit is the module, which is semantically equivalent to a side-effect-free function.
- Our system is compatible with the external environment. It is easy to compose a program of PS modules intermixed with modules in a procedural language. In our current implementation, the PS compiler generates a C function from each module description. A module may invoke other modules written in PS or C. Conversely, a C function may invoke a PS module.
In this paper we hope to impart a feel for PS through example. The next section gives examples ranging from simple program fragments to complete PS modules. Finally we discuss the problem of efficiency, especially as related to storage.
2 PS by Example
A PS program consists of a sequence of module descriptions. Within the module, a programmer can describe the structure of the data items and the relationships among data items. The PS compiler then analyzes the data dependencies; synthesizes a schedule (an ordering for the generation of data items); and produces a C program, complete with type and variable declarations and control structure. Since the PS language is nonprocedural, it contains no control constructs such as "goto" or "for" or "while". There is not even the implicit control from one statement to the next in sequence. This implies that a PS module is unordered: the lexical sequence of equations is unrelated to the computation sequence of the generated program. Since the programmer does not control the sequence of program execution, the language is, of necessity, single assignment. The value of a data item may be defined exactly once. For this reason, we refer to PS as a dataflow language. In a PS equation of the form left hand side = expression, the left hand side can be considered the name of an arc (value) on a dataflow graph and the expression is a node (computation) producing the value.

type I = 1 .. n;
var A: array [I] of int; { declare a variable }
define A[I] = I; { define a value for each element of A }

Figure 1: A Simple Example
The examples below show how aggregate data items can be given values without control structures such as loops or recursion; how recurrences are defined; and how to create structured data types.
2.1 Index Sets
As in Modula-2 or Pascal, the type statement describes a data structure which is then used in variable declarations and in equations. Figure 1 illustrates a simple use of a subrange type. Each element of A is given a value by the "define" statement because the type identifier I is used to index the array. Use of a subrange type identifier as a subscript denotes universal quantification over the subrange. Thus, the subrange type declaration is used to establish an index set. Use of the index set to subscript an array indicates that the equation is true for each element of A indexed by I. This can be contrasted to the use of iteration control constructs to define multiple occurrences of an action in procedural languages. The equivalent procedural code reads

for $I = 1$ to $n$ do
    $A[I] = I$

type $I, J = 1 \ldots n$;
var $A, B : \text{array} \ [I, J] \ \text{of real}$;
define $A[I,J] = B[J,I]$;

Figure 2: Transpose of a Matrix

type $I = 1 \ldots N$;
var $\text{Fib} : \text{array} \ [I] \ \text{of int}$;
define
$\text{Fib}[I] = \text{if } I = 1 \text{ then } 1$
\hspace{1cm} else if $I = 2 \text{ then } 1$
\hspace{1cm} else $\text{Fib}[I-1] + \text{Fib}[I-2]$;

Figure 3: Fibonacci Sequence
Another example is shown in Figure 2, where $A$ is defined as the transpose of $B$. ($B$ must be defined elsewhere, either by another equation or as an input parameter). The equation is equivalent to the first order logic equation
$$\forall I,J (A[I,J] = B[J,I])$$
Use of the same subscript in $A$ and $B$ denotes the same instance of that subscript, so that the first dimension of $A$ corresponds to the second dimension of $B$ and vice versa.
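A hand-written C loop nest corresponding to this equation (illustrative only, with n fixed at 3; the PS compiler's actual output may differ) makes the universal quantification explicit:

```c
enum { N = 3 };

/* A[I,J] = B[J,I] for all I, J: every element of A is defined by one
   instance of the equation, realized here as a doubly nested loop. */
static void transpose(double A[N][N], double B[N][N])
{
    for (int i = 0; i < N; i++)       /* forall I */
        for (int j = 0; j < N; j++)   /* forall J */
            A[i][j] = B[j][i];
}
```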
2.2 Recurrences
Recurrence relations may be expressed in a PS equation. For example, computation of the first $N$ Fibonacci numbers is shown in Figure 3.
Here again we use the subrange type $I$ to establish an index set. The conditional equation defines each element of an array $\text{Fib}$. Notice that we use an array because of the single assignment rule: a variable can receive a value exactly once. Rather than reassign to the same variable, we give values to successive elements of the vector Fib. The vector records the history of the computation of the N'th Fibonacci number as in Lucid [2]. Adherence to the single assignment rule allows us to define Fibonacci as a specification. We shall see in the next section that the compiler converts the specification into an iteration, and reduces the vector in size. In this case only three elements are needed regardless of N.

type I = 0 .. N;
var X, Y: array [I] of int;
define
X[I] = if I=0 then 2 else X[I-1]**2;
Y[I] = if I=N then X[0] else Y[I+1] + X[N-1];

Figure 4
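The observation that only three elements of Fib are ever needed can be sketched as the kind of iteration the compiler derives (an illustrative C version, not the compiler's literal output):

```c
/* Fib[I] computed with a three-element window in place of the full
   array: w[0] and w[1] hold the two previous values, w[2] the new one. */
static long fib(int n)    /* returns Fib[n] for n >= 1 */
{
    long w[3] = {0, 0, 0};
    for (int i = 1; i <= n; i++) {
        w[2] = (i <= 2) ? 1 : w[0] + w[1];
        w[0] = w[1];      /* slide the window */
        w[1] = w[2];
    }
    return w[2];
}
```

The storage used is constant regardless of N, anticipating the window analysis of Section 3.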
This example illustrates a common pattern of recurrences: definition of one or more base cases followed by the recurrence relation for the general case. The use of −1 and −2 seems to indicate "previous" elements of the array Fib. However, a "previous" element need not be lexicographically lower in index than its successor. Figure 4 illustrates this point.
In this example, the array X is defined through a recurrence as in Figure 3. However, in the equation for Y, the previous element of the sequence has a higher index than its successor. The scheduling component of the PS compiler can generate iterations with either increasing or decreasing index values.
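The two schedules can be sketched in C (illustrative, with N fixed at 4): an increasing loop for X, and a decreasing loop for Y, since Y[I] depends on Y[I+1].

```c
enum { N = 4 };
static long X[N + 1], Y[N + 1];

/* X is filled with increasing index; Y must be filled with
   decreasing index because each Y[i] needs Y[i+1]. */
static void compute(void)
{
    for (int i = 0; i <= N; i++)
        X[i] = (i == 0) ? 2 : X[i - 1] * X[i - 1];
    for (int i = N; i >= 0; i--)
        Y[i] = (i == N) ? X[0] : Y[i + 1] + X[N - 1];
}
```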
2.3 Record Structures
We have shown vectors and matrices being declared and then used in equations. PS also has a rich facility for user-defined types. Figure 5 shows the declaration of a record xint which is used to store an arbitrary precision integer. The field len holds the size of the integer. val holds each segment of the integer. As the example demonstrates, PS supports dynamically sized arrays. In this record, the size of the array val depends on the value of the field len.
\footnote{It cannot, however, generate arbitrary orderings.}
type
xint = record
len: int;
val: array[len-1] of int;
end;
Figure 5: A Record Structure
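One plausible C realization of such a dynamically sized record (an assumption about code generation, not necessarily what the PS compiler emits) is a C99 flexible array member sized by the len field:

```c
#include <stdlib.h>

/* xint: an arbitrary precision integer whose digit array is
   allocated together with the record, sized by len. */
struct xint {
    int len;     /* number of digit segments */
    int val[];   /* val[0 .. len-1], C99 flexible array member */
};

static struct xint *xint_new(int len)
{
    struct xint *x = malloc(sizeof *x + (size_t)len * sizeof x->val[0]);
    if (x)
        x->len = len;
    return x;
}
```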
2.4 The xadd Module
With this introduction, let us now compose a module in PS. This module uses the xint data type shown above. The module xadd, shown in Figure 6 adds two arbitrary precision positive integers. Each item of interest has been numbered on the left. These parenthesized numbers are not part of the input. On line (1) is shown the module header. The module name xadd appears first, followed by the keyword module, the module's input parameters (in parentheses), and the output results (in square brackets). This module has just one output, the item c. The parameters a, b, and c are of type xint, which is defined in the module. It is not necessary in PS to declare a type before it can be used. The parameter BETA is related to the wordsize. It is $2^{\text{wordsize}-2}$, so that $a\.val\[i\] + b\.val\[i\]$ does not overflow a word.
Line (2) declares three subrange types, i, t and subi, which are used in the equations. Line (3) shows an alternate way of declaring a two-dimensional array from that shown in Figure 2. The two methods are interchangeable (as in some Pascals).
Line (4) begins the equations. First N, the upper bound for i, t, and the two arrays sum and carry, is defined in terms of the input parameters. The function max must be defined by the user as another module. Line (5) shows the definition of carry, which is a recurrence. The base case is for carry\[0\], which gives the initial carry-in a value of 0. The recursive case is given in the second arm of the conditional. It is defined to be the previous sum plus the previous carry, divided by the wordsize parameter BETA. Thus if an overflow is going to occur, the carry will be set to 1, and otherwise to 0. Next is the definition of sum. The initial value sum\[0\] is the result of adding the two input values a and b. This sum cannot be expressed simply as $a\.val\[i\] + b\.val\[i\]$ because the lengths of the two arrays might be different and also because the range of i is one greater than the larger of a and b. Use of a function add aids in modularity. Rather than expressing the sum inline as a large conditional expression, we can postpone definition of the add function to a later stage. In fact, we do not show add here for the sake of brevity. Each digit of the sum may exceed BETA. If a digit is too large, we must do modulo arithmetic and generate a carry to the next digit.

Figure 6: The xadd Module
Subsequent versions (sum[t] for t > 0) ripple the carry across the integer. New values of sum are defined only along the diagonal (for the arm of the conditional i = t). The other values are copied to the t'th row of sum from the (t−1)st row. For the t'th row and t'th column, the result of the t'th carry is factored into the result.
Lines (7) and (8) define a value for the output parameter c. The val array is simply the last row of sum. The index subi is used rather than i because the c vector may be one digit smaller than sum. The length is either N, if there was a carry, or N − 1 otherwise.
Let us trace the behavior of the xadd module when called with the parameters a.val = (1,7,3,7), b.val = (2,6,5,5) and BETA = 8, so that each val[i] can hold a number in the range -8 .. 7 if two's complement representation is used for integers. The numbers are stored with least significant digit first, so that with a = 7371, a.val[0] = 1. Figure 7 shows a tabular representation of carry and sum for each value of t and i in the range.
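The digit-by-digit behavior traced above can also be written directly in C (a procedural illustration of what xadd computes, least significant digit first; not the compiler's output):

```c
enum { BETA = 8 };

/* Ripple-carry addition of base-BETA digit vectors a[0..na-1] and
   b[0..nb-1] into c; returns the number of result digits. */
static int xadd(const int *a, int na, const int *b, int nb, int *c)
{
    int n = (na > nb) ? na : nb;
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int s = (i < na ? a[i] : 0) + (i < nb ? b[i] : 0) + carry;
        c[i] = s % BETA;     /* digit, modulo the base */
        carry = s / BETA;    /* 0 or 1 */
    }
    if (carry)
        c[n++] = carry;
    return n;
}
```

For a.val = (1,7,3,7) and b.val = (2,6,5,5) this yields (3,5,1,5,1), matching the trace.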
2.5 The Gauss Module
From arbitrary precision addition, let us turn to a problem in linear algebra. Gauss elimination is a popular technique for solving a set of n equations in n unknowns. The non-pivoting version of the algorithm is adapted from [3] as follows:
Given the \( n \times (n + 1) \) matrix \( A \) containing a square matrix of order \( n \) in its first \( n \) columns and the \( n \)-vector of righthand sides in its last column, we perform the elimination in \( (n - 1) \) steps, \( k = 1, 2, \ldots, n - 1 \). In step \( k \), the elements \( a_{ij}^{(k)} \) with \( i, j > k \) are transformed according to
\[
m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}, \quad a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}a_{kj}^{(k)},
\]
\( i = k + 1, k + 2, \ldots, n, \quad j = k + 1, \ldots, n, n + 1. \)
We transcribe these formulas into PS. Figure 8 shows the Gauss module. Notice that in this module, we have put the definitions first and the declarations after. PS allows statements to be given in arbitrary order. The \texttt{define} section shows the PS form of the two equations. \texttt{m} is the vector of successive multipliers. \texttt{aOut} holds intermediate forms of the matrix \( a \).
In the module, there are three equations, one for each local variable and one for the output matrix \( G \). The definition of \( m \) uses a conditional expression. For each \( i \) and \( k \) such that \( i > k \), a multiplier element is defined in terms of the current iteration of \texttt{aOut}. All other multiplier elements are 0.
The first version of the \texttt{aOut} matrix takes its value from the input matrix \( a \). Subsequent versions are defined in terms of previous \texttt{aOuts} and previous \texttt{ms}. Thus the two arrays \texttt{m} and \texttt{aOut} are defined by a mutual recurrence. The output from the module is the last iteration of \texttt{aOut}. The module has inputs \( a \) and \( n \) which are respectively the original matrix of simultaneous equations and the number of rows or columns. \( a \) contains an additional column for the right hand sides of the equations. Output is the array \( G \) in upper triangular form. Values for the unknowns may be derived from \( G \) by back substitution.
The declarations consist of type and local variable declarations. The \texttt{type} section shows that three index sets are used. \( k \) is an iteration index. \( i \) and \( j \) are used to index the matrix \( a \). \( i \) is also used to index the array of multipliers.
We have tried to show with the \texttt{Gauss} and \texttt{xadd} modules representative problems and their solutions in PS. In the next section we address a question which readily comes to mind when examining the PS modules: what is the relationship between data structures declared in PS and those generated in the C program. If the relationship is one-to-one, the generated program is so storage inefficient as to be unusable.
Gauss: module(a: array [i,j] of real; N: int):
[G: array [i,j] of real];
define
m[k,i] = if (i>k) then aOut[k,i,k] / aOut[k,k,k]
else 0;
aOut[k,i,j] = if (k=1) then a[i,j]
else
if ((i>k-1) and (j>k-1)) then
aOut[k-1,i,j] - m[k-1,i]*aOut[k-1,k-1,j]
else if (i<=k-1) (* and all j *) then
aOut[k-1,i,j]
else 0;
G = aOut[N];
type
k, i = 1 .. N;
j = 1 .. N+1;
var
m: array [k, i] of real; (* multipliers *)
aOut: array [k,i,j] of real; (* each successive generation
of a[i,j] is represented
by k'th dimension *)
end Gauss;
Figure 8: Gauss Module
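For comparison, the iterative, in-place program that the step-k formulas describe can be sketched in C (0-based indices, no pivoting, n fixed at 2; illustrative rather than actual PS compiler output):

```c
enum { N = 2 };   /* number of unknowns; a is N x (N+1) */

/* Forward elimination: in step k, rows below k are reduced using the
   multiplier m = a[i][k] / a[k][k], overwriting a in place. */
static void gauss(double a[N][N + 1])
{
    for (int k = 0; k < N - 1; k++)
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j <= N; j++)
                a[i][j] -= m * a[k][j];
        }
}
```

Overwriting a in place anticipates the storage discussion of Section 3, where the k dimension of aOut is virtual and the input and output matrices can share locations.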
3 Storage Reuse in the Generated Code
The single assignment property of variables in PS makes it possible for the compiler to schedule the equations using only dataflow analysis. However, it also results in a plethora of variables in the PS program, and therefore, if a simpleminded storage allocation is implemented, in the generated program. Indeed, excessive use of storage has long been a criticism of applicative languages in general. Our goal is to have significantly fewer storage locations allocated in the generated program than are declared in the PS program. In this chapter we discuss some techniques to minimize the storage requirements of the generated program. Some of the issues discussed below occur in the reuse of temporaries and in register allocation in compiler optimization [1, 11].
We would like to have the structure of the storage allocated in the generated program resemble the PS data structures as much as possible. Thus we reject a Lisp-like heap in which to store data as linked lists. If the PS structure is an array, we would like the C structure to bear some resemblance to an array; if the PS structure is a record, we would like to generate a C structure declaration. This constraint is imposed to make the interface between PS modules and C modules as simple as possible.
In PS, every variable is local to exactly one module (since there are no global variables and since modules may not be nested). A variable can be an input parameter, an output result, or a local variable. We will first consider storage reuse of local variables and then the problem of efficient parameter passing.
3.1 Virtual Dimensions
Because we are concerned with the large-scale reuse of storage, we will not attempt to reuse scalars within a module. Instead we will concentrate on arrays, in particular, on locating virtual array dimensions. (Structures containing arrays are also amenable, with some additional analysis, to these techniques). If an array dimension is physical, that dimension will have the same number of elements in the generated program as in the PS module. If a dimension is virtual, there will be fewer elements in the generated program than in the PS program. The number of representative elements required in the generated program is called the window of that dimension. Analysis of the expressions used to subscript an array on the right hand side of an assertion helps us locate virtual dimensions and determine the size of the window.
3.2 Virtual Dimensions in Recursive Equations
Example:
Consider the specification of factorial:
```plaintext
factorial: module(n: int):[facout: int];
type i = 0 .. n;
var fac: array [i] of int;
define
fac[i] = if i = 0 then 1 else
fac[i-1]*i;
facout = fac[n];
end factorial;
```
`fac` must be declared as a one dimensional array. However, only one element of `fac` is needed, `fac[n]`, and to compute any `fac[i]`, at most one other element of `fac` is needed, `fac[i-1]`. This suggests that we need only reserve two storage locations for `fac`, one for the current element and one for the new element being computed. Thus the i dimension of `fac` is virtual with window size two. The generated program need only have a vector of two elements, regardless of n. Notice that we have taken a formal specification of factorial, and constructed an iterative program which reuses storage. Although this is a relatively simple transformation from tail recursion to iteration, the same optimization can be applied to non-tail recursive equations (such as the specification of `xadd` in Figure 6). [5] gives a full discussion of locating virtual dimensions in a set of recursive equations. In his technique, the scheduling component of the compiler looks in recursive array definitions for the pattern `i - k` on the i'th dimension of the array reference on the right hand side of the definition. Here, `k` stands for a manifest constant positive integer.
In the PS compiler, a different technique is used. To locate a virtual dimension, we form the dependency vector for each recursive occurrence in an array definition. For each dimension j, the j’th component of the dependency vector is defined as the difference
\[ \mathit{Lhs}_j - \mathit{Rhs}_j \]
where \( \mathit{Lhs}_j \) and \( \mathit{Rhs}_j \) are the subscript expressions used to index the j'th dimension on the left and right hand sides of the equation. For factorial, the dependency vector is
(i - (i - 1)) = (1)
from which we can derive the window size. Notice that by using the dependency vector, we are freed from looking for a specific pattern such as \( i - k \). Exactly the same dependency vector is obtained if the recursive equation reads
type \( i = 0 \ldots n-1; \)
define
\[
fac[i+1] = \text{if } i = 0 \text{ then } 1 \text{ else } fac[i]*(i+1);
\]
[6] discusses the use of dependency vectors to locate virtual dimensions in the context of scheduling for parallel execution.
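As a rough sketch of this analysis (a hypothetical helper, not the PS compiler's code), the window size can be taken from the dependency-vector components: the largest magnitude plus one bounds how many representative elements must be kept.

```c
/* deps[k] holds one dependency-vector component (Lhs_j - Rhs_j)
   per recursive reference on the dimension under analysis. */
static int window(const int *deps, int ndeps)
{
    int w = 1;   /* at least the element being computed */
    for (int k = 0; k < ndeps; k++) {
        int d = (deps[k] < 0) ? -deps[k] : deps[k];
        if (d + 1 > w)
            w = d + 1;
    }
    return w;
}
```

For factorial the single component is 1, giving the window of two used above; for Fib the components are 1 and 2, giving three.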
The C program generated from the original formulation of the factorial module reads as follows:
```c
int factorial(n)
int n;
{
int fac[2];
int facout;
int i;
for (i=0; i <= n; i++)
{
fac[1] = (i==0) ? 1 :
fac[0]*i;
fac[0] = fac[1];
}
facout = fac[1];
return facout;
}
```
Additional optimizations are possible (for example, eliminating facout). Since these can be done by fairly standard optimizing compilers, we will not consider them further here.
3.3 Virtual Dimensions in Nonrecursive Equations
The dependency vector technique is used in the presence of recursive equations. However, it is also possible for an array used only in nonrecursive equations to have a virtual dimension.
Example:
x[i] = a[i] + b[i];
c[i] = x[i] * x[i];
The two equations are not directly or mutually recursive. Depending on the loop structure of the generated code, however, the local variable \( x \) can either be a scalar or a vector. If a separate loop is generated for each equation, \( x \) must be represented by a vector:
for (i=start_i; i<=stop_i; i++)
    x[i] = a[i] + b[i];

for (i=start_i; i<=stop_i; i++)
    c[i] = x[i] * x[i];
This schedule can be improved by putting the two equations into a single loop. Not only is there reduced loop set-up and evaluation overhead, but the dimensionality of \( x \) can be reduced:
for (i=start_i; i<=stop_i; i++)
{
    x = a[i] + b[i];
    c[i] = x * x;
}
Thus a second form of memory optimization in local variables involves maximizing the scope of loops, so that interim variables can have one or more dimension become virtual.
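A runnable version of the fused schedule (illustrative C; the start and stop bounds are fixed here for the example):

```c
enum { N = 4 };

/* Both equations in one loop: x needs no i dimension at all. */
static void fused(const double *a, const double *b, double *c)
{
    for (int i = 0; i < N; i++) {
        double x = a[i] + b[i];   /* x reduced to a scalar */
        c[i] = x * x;
    }
}
```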
3.4 Parameter Passing
As a single assignment language, PS has copy-in copy-out semantics on parameters to modules. If this mode of parameter transmission is used in the
generated program, serious inefficiency is incurred in the amount of storage used. In fact, other nonprocedural languages [9] have not allowed modules because of these inefficiencies. We deem the module to be indispensable, and therefore make special effort to reduce the overhead of parameter passing.
We again limit our discussion to arrays. Although it also applies to arrays within structures, the latter case is more complicated in details, and adds little to the basic criteria for parameter space reuse.
We would like to replace whenever possible a strict pass-by-value of an array by a pass by reference. However, the parameter passing mode must be consistent. We cannot with one call to module M pass an array A by reference and with another call, pass by value. Therefore, we opt always to pass by reference. For each array A passed to a module in a reference such as
\[ (* \text{Assertion } q * ) \quad X = M(A); \]
where M is a module which returns one result whose type is compatible with X, we ask: is A used in an assertion which is scheduled after Assertion q?
- If so, generate code to copy A to a new temporary variable T, and pass to module M a reference to T.
- If not, pass a reference to A itself. In this case, we have avoided the overhead of copying the array.
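The caller-side decision can be sketched as follows (hypothetical helper names; the real compiler would generate this logic inline):

```c
#include <stdlib.h>
#include <string.h>

/* Always pass by reference, but copy A into a fresh temporary T
   first when A is still used after the call, so the callee cannot
   clobber a live value. */
static double *arg_for_call(double *A, size_t n, int a_used_later)
{
    if (!a_used_later)
        return A;                       /* no copy needed */
    double *T = malloc(n * sizeof *T);  /* copy-in only when required */
    if (T)
        memcpy(T, A, n * sizeof *T);
    return T;
}
```

The temporary would be freed after the call; allocation-failure handling is elided in this sketch.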
Now, consider the role of A in M. Let us call the formal parameter A' to distinguish it from the actual parameter A. We are guaranteed that A is passed "by value" in the sense that any change to A' in M does not affect the value(s) of the output parameters of the caller of M. This was guaranteed by the parameter passing analysis above. At the PS level, A' cannot appear on the left hand side of an assertion. This restriction does not of course apply in the generated C code. If there are local variables or output parameters of the same type as A', it might be desirable to reuse A' storage for another variable.
Example:

(* C and B are local variables or output parameters
   of the same type as APrime *)
C[i] = APrime[i] * 2;
B[i] = APrime[i] + C[i];

Here, after virtual dimension analysis, \( C \) is reduced to a scalar. Then, since \( B \) has the same type as \( A_{\text{Prime}} \), we can alias the former to the latter, giving generated code of the form:

c = APrime[i] * 2;
APrime[i] = APrime[i] + c;
In this case, we have avoided having the variable \( B \) appear in the generated program. Before such a transformation can be done, we must of course ensure that all uses of \( A_{\text{Prime}} \) have been completed prior to the reassignment. If such a schedule cannot be effected, (for example, if \( A_{\text{Prime}} \) and \( B \) are needed in the same equation), then \( B \) must be allocated its own storage.
The analysis just outlined can be used in the Gauss module to give the input and output matrices the same locations.
4 Conclusion
In this paper, we have introduced a new nonprocedural language PS in which equations define the relationships between data. The language provides user-defined data types in a Pascal or Modula-2 framework, and accepts a specification which is a sequence of module descriptions. The current implementation of the PS compiler is written in Berkeley Pascal using the llama parser generator and generates C code.
We have discussed several optimizations to minimize storage requirements in the generated program. We attempt to locate virtual array dimensions, so that the virtual dimension may be replaced in the generated program by a window of elements. We have seen how loop merging can uncover virtual dimensions, so that local variables can be reduced in dimensionality. Parameter passing is another area in which storage reuse is important. We have discussed conditions under which it is possible to pass by reference rather than by value, thus saving space and avoiding unnecessary copying.
The Problem Specification (PS) nonprocedural language is a very high level language for algorithm specification. PS is suitable for nonprogrammers, who can specify a problem using mathematically-oriented equations; for expert programmers, who can prototype different versions of a software system for evaluation; and for those who wish to use specifications for portions (if not all) of a program. PS has data types and modules similar to Modula-2. The compiler generates C code.
In this paper, we first show PS by example, and then discuss efficiency issues in scheduling and code generation.
The Return on Investment on Commercial off-the-shelf (COTS) Software Components
Preliminary Study Results
Date: August 27, 2002
Author: Chris Brooke, ComponentSource
www.componentsource.com
Email: saveit@componentsource.com
US Headquarters
ComponentSource
3391 Town Point Drive NW
Suite 350
Kennesaw, GA 30144-7079
USA
Tel: (770) 250-6100
Fax: (770) 250-6199
International: +1 (770) 250-6100
European Headquarters
ComponentSource
30 Greyfriars Road,
Reading,
Berkshire RG1 1PE
United Kingdom
Tel: 0118 958 1111
Fax: 0118 958 9999
International: +44 118 958 1111
Copyright © 1996-2003 ComponentSource
## Contents
**Introduction**
- Why did we conduct this study?
- What benefits does this bring?
- How did we obtain and compile this data?
**Key Assumptions**
- Measuring ROI on COTS components
- Source Lines of Code
- Function Points
- Interface Points
**Methods Used to Determine ROI**
- Development Time
- Development Cost
- Reuse Utilization
**Analysis**
- Table of sample data from study
**Conclusion**
Introduction
Corporate IT departments are under growing pressure to cut costs and speed development of their software applications while consistently improving quality. Increasingly, they are looking to reuse initiatives to help make this happen. The internal reuse of software assets brings many advantages to the table, including reduced redundancy by not "reinventing the wheel". However, the reuse of internally developed components does not go far enough to solve one of the most significant software development challenges: leveraging your developers’ core competency – or, more accurately, requiring developers to venture outside of their core competency by learning how to code functionality that is already available via tried, tested and debugged third-party components.
For years now, the supply of commercial off-the-shelf software (COTS) components has grown significantly, with thousands of COTS components available today on the open market. These components allow developers to integrate complex functionality into their applications without requiring a steep learning curve. They are built on the same industry standards that developers use to create components internally, and as such, can be easily integrated into an organization’s applications. In the past the return on investment (ROI) value of COTS components has been elusive to purchasers. This study for the first time shows how investing in an expert-built COTS component or Web service – which range in price from several hundred to several thousand dollars on the open market – may offset the development costs of applications to the tune of millions of dollars.
Why did we conduct this study?
While it seems obvious that the price paid for a COTS component ought to be less than the cost of equivalent in-house development, the actual cost of COTS components in terms of development and encapsulation effort has never been clear: until now, the metrics for COTS components have normally been hidden from purchasers. Making them visible provides tremendous value to end users and to the industry, and offers a measurable business rationale for "buy before you build" from a COTS component perspective. This is the first time a study of this kind has been undertaken; ComponentSource's relationships with component vendors internationally place it in a unique position to conduct it.
We are publishing the raw data on our public marketplace for our half a million strong developer user base so that they are able to justify the buy versus build decision to management.
This data, moreover, is of importance to our SAVE-IT™ customers who need to be able to justify the acquisition and reuse of COTS components. The same metrics may be used to assign value to their internally-built components and they are able to forecast cost savings/cost avoidance from reusing software assets from both internal and external sources.
What benefits does this bring?
Component usage is fundamentally a supply-driven process. However, until a sufficient supply of internally sourced components becomes available within an organization, COTS component content can be critical to reuse adoption, hitting your project’s early financial targets, and sustaining that effort over time.
The timeframe to stockpile the necessary critical mass of component content from internal efforts can stall reuse behavior, and prevent the time-to-market and cost avoidance gains needed to achieve short-term ROI. Historically, reuse initiatives fall short of breakeven for the first 18 to 24 months, and because of economic pressure, shifts in priorities, and changes in policy, it is difficult to sustain reuse efforts beyond these early timeframes. By supplementing internal components with COTS components, reuse efforts can reach breakeven or positive ROI within a shortened financial horizon, owing to the relative cost/price advantage of bought versus built components.
In the long term, the marketplace at large will always possess an exponentially greater capacity to grow a categorically broad and functionally rich component supply more rapidly than any single organization could with its own resources. Structuring COTS component usage as an ongoing supplement to internal reuse efforts leverages this supply of marketplace intellectual property, thus expanding the organization’s ability to deliver solutions without increasing headcount.
Accurate measurement of ROI is a critical success factor for any reuse initiative. It is, therefore, vital that organizations be able to measure the impact that COTS components will have on their overall project costs and time to market.
**How did we obtain and compile this data?**
As the market leader for reusable components, ComponentSource has partnerships with over 700 component vendors. We have leveraged these partnerships to collect the source lines of code (SLOC), by development environment, behind each of their component products. Furthermore, a prerequisite for new component vendors that we bring to market is that they provide a line-of-code count for their products. Using SLOC, we then apply industry metrics to estimate development time displaced in person-months and cost avoidance as a dollar amount.
This study is two-thirds complete, with data still being collected.
With over 9,000 reusable COTS components and Web services currently available on the open market, this extensive study evolved as part of the work that ComponentSource was undertaking with corporate customers of its large-scale reuse solution SAVE-IT, many of which apply the same metrics used to assign value to COTS components to assign value to their internally-built components. This in turn enables them to forecast cost savings and cost avoidance from reusing software assets from both internal and external sources.
**Key Assumptions**
**Measuring ROI on COTS Components (Metrics)**
There are several methods used throughout the software development industry for measuring software productivity and reuse effectiveness. Of these, the three most commonly used by organizations are: source lines of code (SLOC), function points, and interface points. In this section, we look at these three methods and discuss how they are typically used.
**a) Source Lines of Code (SLOC)**
It is commonly accepted within the development community that for each development language, there is a predictable cost for each line of source code completed, debugged, and integrated into an application. When applied to COTS components, the data needed to determine value are development language and SLOC. The development language is readily known. As a matter of simple compatibility, vendors of COTS components include this information with their components when they are placed onto the open market.
Until now, vendors have not made information on SLOC available. As discussed in the introduction of this white paper, ComponentSource is leveraging its relationships with over 700 component vendors - whose products comprise over 9,000 "best-of-breed" components on the open market - to fill in this gap, thus enabling effective measurement of reuse ROI by consumers of COTS components.
The following guidelines were used by the vendors for counting SLOC:
- Count functioning lines of code.
- Don’t count remarks/comments.
- Don’t count resource files/interface definition files.
- Don’t count conditional compilation lines that would not get compiled in the release executable.
- Don’t count standard header files provided by operating systems or the compiler.
- Do include header files if they include macros for lines of code.
These guidelines provide a solid level of consistency, allowing credible comparisons between multiple products from a variety of COTS component vendors.
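As a rough illustration only, the counting rules above can be applied mechanically. The following Python sketch is a hypothetical helper (not ComponentSource's or any vendor's actual tool) that counts SLOC for C/C++-style source, skipping blank lines, `//` comments, and `/* */` blocks:

```python
def count_sloc(lines):
    """Count functioning source lines per the guidelines above.

    Handles C/C++-style comments only; remarks and blank lines are
    excluded, and a code line with a trailing comment still counts.
    """
    in_block = False
    sloc = 0
    for raw in lines:
        line = raw.strip()
        if in_block:
            if "*/" not in line:
                continue                          # still inside /* ... */
            in_block = False
            line = line.split("*/", 1)[1].strip() # keep code after the close
        if line.startswith("/*"):
            if "*/" not in line:
                in_block = True                   # block comment opens here
                continue
            line = line.split("*/", 1)[1].strip() # comment closed on same line
        if not line or line.startswith("//"):
            continue                              # blank or pure line comment
        sloc += 1
    return sloc

sample = [
    "int main() {",
    "  // remark",
    "  /* block",
    "     comment */",
    "  int x = 1;  /* trailing */",
    "",
    "}",
]
print(count_sloc(sample))  # 3
```

Resource files, interface definitions, and conditionally excluded code would need extra filtering in a real tool; the sketch covers only the comment and blank-line rules.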
b) Function Points
Function points were first utilized by IBM as a means to measure program volume\(^2\). Each function point represents a unit of functionality. This can be anything from calculating simple interest in a financial application, to performing complex parsing of data for formatting into a report. While this can be useful when measuring internal reuse - since the level of complexity for each function point is known - it is difficult to determine the work displacement for a COTS component, where the complexity of each individual function point is not known. When trying to determine the suitability of a component for use in an enterprise application, relying on function points is less than optimal.
c) Interface Points
Interface points represent specific interfaces that are exposed by the component and can be executed from the parent application. These include properties, methods and events. Each interface point may contain several function points, each containing possibly hundreds of lines of code. As with function points, interface points can be very useful when applied to internally developed components. However, since COTS components are generally designed for maximum flexibility, interface points are not able to provide the appropriate level of granularity for effective measurement.
Consider, for example, an enterprise business component that performs financial calculations. It could very well have only a few interface points, with each one encapsulating complex operations involving hundreds of lines of code. Conversely, it could expose individual methods to break this functionality into smaller, more specific operations. The size of the component remains the same, but its value in terms of work displaced would be considerably higher if packaged in this way.
Methods Used to Determine ROI
Of the methods outlined above, ComponentSource’s study is based upon Source Lines of Code, as well as industry averages for metrics such as person months to develop and deploy 1,000 lines of code by development environment, the cost per month of a salaried developer inclusive of office overheads and — given the fact that most commercial off-the-shelf components are feature-rich — the estimated percentage of a COTS component that may be used as part of an application. This study takes into account published work from the Software Engineering Institute (SEI) and Cost Xpert. It also includes input from expert organizations such as Software Productivity Research, Inc (SPR) which has specialized in Software Metrics, Software Estimation and Industry Benchmarking over the past 18 years. SPR has an accumulated knowledgebase of industry data and has studied over 10,000 projects of varying types.
The most important measurable statistic needed to perform an ROI analysis is the SLOC. From this we are able to use industry metrics to decipher the cost savings encapsulated in each component. The calculations were based on the following metrics:
\[
\text{SLOC} \times \%\text{Used} = \text{SLOC Displaced}
\]
\[
\text{SLOC Displaced} \times 0.0035 = \text{Time Avoided (Person Months)}
\]
\[
\text{Time Avoided (Person Months)} \times \text{Developer Cost Per Month} = \text{Cost Avoided}
\]
\[
\text{Cost Avoided} - \text{Single Developer License Price of COTS} = \text{ROI}
\]
- %Used - The percentage of the component that will actually be used (i.e. displace actual work).
- Time Avoided (Person Months) - Based upon the industry average of 3.5 person months per 1,000 lines of code.
This formula is explained in further detail in the following sub-sections.
a) Development Time
First we estimate the person-months required to develop 1,000 lines of code. This varies according to the development environment that you are in. The industry average is 3.5 person months to develop 1,000 lines of code in VC++/C++. Most of the data collected so far is for components developed in the VC++/C++ environment. This assumption may be changed according to the development environment.
b) Development Cost
The cost per month to employ one developer is around $10,000 – this is the overall cost to business to keep a salaried developer employed with office overheads. This assumption may be changed, for example a smaller organization may measure this cost at $5,000 per month.
c) Reuse Utilization
As these are feature-rich components, we assume 10 percent usage of a COTS component. For example, a grid or reporting component may have a variety of optional features that wouldn’t necessarily be used. On the other hand, certain granular components – such as e-mail address validation or address de-duplication – are likely to have most or indeed all of their features utilized in an application. The metrics allow for this to be changed to any percentage value. Even if component usage is changed to 1 percent, savings are still very high.
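Taken together, the assumptions above amount to a four-step calculation. The following Python sketch (hypothetical illustration code, not part of the study) applies those metrics and reproduces the figures for the first row of the sample table: roughly 10.85 person-months, $108,500 of cost avoided, and an ROI of about 272:1 for a 31,000-SLOC component with a $399 license.

```python
def cots_roi(sloc, pct_used=0.10, cost_per_month=10_000, license_price=1):
    """Apply the study's SLOC-based metrics under the stated assumptions."""
    sloc_displaced = sloc * pct_used                # SLOC x %Used
    months_avoided = sloc_displaced * 0.0035        # 3.5 person-months per 1,000 SLOC
    cost_avoided = months_avoided * cost_per_month  # development cost avoided ($)
    roi = cost_avoided / license_price              # the "ROI (x:1)" column
    return months_avoided, cost_avoided, roi

# Sax ActiveX Zip Objects: 31,000 SLOC in VC++, $399 single-developer license
months, cost, roi = cots_roi(31_000, license_price=399)
print(f"{months:.2f} person-months, ${cost:,.0f} avoided, ROI {roi:.0f}:1")
```

Changing `pct_used` or `cost_per_month` (for example, 1% usage, or $5,000 per month for a smaller organization) adjusts the result in the way the sub-sections above describe.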
Analysis
Utilizing the methods described in our assumptions, reuse metrics may be applied to COTS components in our public marketplace. The table below provides a subset of the components analyzed within the scope of the study, and demonstrates the consistency of the metric analysis across a range of products from different vendors. Of particular interest is the diversity of functionality represented in the preliminary results. As illustrated by the sample data below, a large percentage of these COTS components encapsulate specific business processes that are in particular demand in enterprise applications today.
Table 1 - Sample of study results
<table>
<thead>
<tr>
<th>Component</th>
<th>SLOC</th>
<th>Language</th>
<th>% Component Used</th>
<th>SLOC Avoided</th>
<th>Time Avoided (Person Months)</th>
<th>Cost Avoided</th>
<th>1 Developer License</th>
<th>ROI (x:1)</th>
<th>Component Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sax ActiveX Zip Objects</td>
<td>31,000</td>
<td>VC++</td>
<td>10%</td>
<td>3,100</td>
<td>11</td>
<td>$108,500</td>
<td>$399</td>
<td>272</td>
<td>Data Compression Components</td>
</tr>
<tr>
<td>Xceed Zip Compression Library</td>
<td>72,773</td>
<td>VC++</td>
<td>10%</td>
<td>7,277</td>
<td>25</td>
<td>$254,706</td>
<td>$300</td>
<td>849</td>
<td>Data Compression Components</td>
</tr>
<tr>
<td>Dart PowerTCP Zip Compression Tool</td>
<td>29,994</td>
<td>VC++</td>
<td>10%</td>
<td>2,999</td>
<td>10</td>
<td>$104,979</td>
<td>$249</td>
<td>422</td>
<td>Data Compression Components</td>
</tr>
<tr>
<td>Desaware StorageTools</td>
<td>131,000</td>
<td>C++</td>
<td>10%</td>
<td>13,100</td>
<td>46</td>
<td>$458,500</td>
<td>$199</td>
<td>2,304</td>
<td>Data Storage Components</td>
</tr>
<tr>
<td>Dart PowerTCP Mail Tool</td>
<td>44,991</td>
<td>VC++</td>
<td>10%</td>
<td>4,499</td>
<td>16</td>
<td>$157,469</td>
<td>$499</td>
<td>316</td>
<td>Email Components</td>
</tr>
<tr>
<td>Xceed Encryption Library</td>
<td>42,338</td>
<td>VC++</td>
<td>10%</td>
<td>4,234</td>
<td>15</td>
<td>$148,183</td>
<td>$300</td>
<td>494</td>
<td>Encryption Components</td>
</tr>
<tr>
<td>Dart PowerTCP SSL Tool</td>
<td>123,843</td>
<td>VC++</td>
<td>10%</td>
<td>12,384</td>
<td>43</td>
<td>$433,451</td>
<td>$999</td>
<td>434</td>
<td>Encryption Components</td>
</tr>
<tr>
<td>Desaware File Property Component</td>
<td>11,000</td>
<td>C++</td>
<td>10%</td>
<td>1,100</td>
<td>4</td>
<td>$38,500</td>
<td>$79</td>
<td>487</td>
<td>File Handling Components</td>
</tr>
<tr>
<td>Data Dynamics #Grid</td>
<td>750,000</td>
<td>VC++</td>
<td>10%</td>
<td>75,000</td>
<td>263</td>
<td>$2,625,000</td>
<td>$249</td>
<td>10,542</td>
<td>Grid Components</td>
</tr>
<tr>
<td>LEADTOOLS Document Imaging</td>
<td>2,466,899</td>
<td>C, C++</td>
<td>10%</td>
<td>246,690</td>
<td>863</td>
<td>$8,634,147</td>
<td>$1,995</td>
<td>4,328</td>
<td>Imaging Components</td>
</tr>
<tr>
<td>Xceed Absolute Packager</td>
<td>13,817</td>
<td>Delphi</td>
<td>10%</td>
<td>1,382</td>
<td>5</td>
<td>$48,360</td>
<td>$50</td>
<td>967</td>
<td>Installation Tools</td>
</tr>
<tr>
<td>Dart PowerTCP Emulation Tool</td>
<td>38,702</td>
<td>VC++</td>
<td>10%</td>
<td>3,870</td>
<td>14</td>
<td>$135,457</td>
<td>$499</td>
<td>271</td>
<td>Internet Communication Components</td>
</tr>
<tr>
<td>Xceed Winsock Library</td>
<td>79,998</td>
<td>VC++</td>
<td>10%</td>
<td>8,000</td>
<td>28</td>
<td>$279,993</td>
<td>$500</td>
<td>560</td>
<td>Internet Communication Components</td>
</tr>
<tr>
<td>Data Dynamics ActiveCube</td>
<td>360,000</td>
<td>VC++</td>
<td>10%</td>
<td>36,000</td>
<td>126</td>
<td>$1,260,000</td>
<td>$599</td>
<td>2,104</td>
<td>On-Line Analytical Processing Components</td>
</tr>
<tr>
<td>Data Dynamics ActiveReports</td>
<td>690,000</td>
<td>VC++</td>
<td>10%</td>
<td>69,000</td>
<td>242</td>
<td>$2,415,000</td>
<td>$499</td>
<td>4,840</td>
<td>Reporting Components</td>
</tr>
<tr>
<td>Data Dynamics ActiveSizer</td>
<td>31,000</td>
<td>VC++</td>
<td>10%</td>
<td>3,100</td>
<td>11</td>
<td>$108,500</td>
<td>$99</td>
<td>1,096</td>
<td>Resizing Components</td>
</tr>
<tr>
<td>Desaware NT Service Toolkit</td>
<td>58,000</td>
<td>C++</td>
<td>10%</td>
<td>5,800</td>
<td>20</td>
<td>$203,000</td>
<td>$499</td>
<td>407</td>
<td>Security & Administration Components</td>
</tr>
</tbody>
</table>
### Conclusion
When assessing the metrics for COTS components, one should also factor in the cost to evaluate and reuse this functionality. Based on his analysis of a number of published studies, Jeffrey Poulin in his book "Measuring Software Reuse" concludes that reusing software "takes 20% of the effort of new development". Even given Poulin’s assumptions, the study shows that using COTS components represents significant ROI compared to the greater cost of creating the equivalent functionality from scratch.
The metrics supplied on COTS components indicate the potential cost and time avoidance relative to the application development effort, represented through the purchasing and deployment of mature, market proven, expert-built COTS components. Many of our SAVE-IT customers use the data supplied on COTS components to justify their build vs. buy decisions, and cost forecast reuse savings and time to market benefits.
By applying industry averages to assign value to COTS components, companies can gain valuable information on the benefits of reusing these components in addition to leveraging their internally created assets. This information can be deployed at anytime during the development cycle of an application - from project planning to final revisions - to determine if the use of COTS components for specific functionality will be a cost-effective solution to delivering better, faster, and cheaper solutions.
We will be publishing the raw value data for COTS components collected on our public marketplace, this will include the SLOC behind component products and person months displaced per development language.
### Component Vendor Data Courtesy of:
**Dart Communications**
Dart Communications was founded in 1994 to create quality components designed to support Internet communication development. Dart's development teams carefully design each component for ease-of-use and maximum range of effectiveness for both beginners and advanced developers alike. The results are tools that function in many development environments, such as .NET, Visual Basic, Visual C++, PowerBuilder, ASP, Delphi, C++ Builder and Office 97/XP.
**Data Dynamics**
Created in January 1996, Data Dynamics, Ltd., provides software tools and controls for application developers using Microsoft design environments. The Company’s primary product focus is on Data Analysis and Information Reporting. However, they also offer User Interface development products.
**Desaware**
Since its inception in 1991 as one of the first third party Visual Basic component vendors, Desaware has developed innovative software products to assist developers in their programming efforts. Based on experience going back to the days of Windows 1.0, the company understands the critical features needed by developers, sometimes presenting a solution before developers are even aware of the problem.
**FarPoint**
FarPoint Technologies, Inc. was founded in 1991 and is located next to Research Triangle Park (RTP) in Morrisville, North Carolina. The company develops and publishes professional components for Windows development. Their award-winning tools benefit corporations, software companies, and independent consultants around the world as a cost-effective solution for building distributed enterprise-wide applications for commercial or in-house use.
**LEAD Technologies**
LEAD Technologies has been producing imaging developer toolkits for the past 12 years. LEADTOOLS is designed to handle all imaging needs — from common loading, displaying and image processing, to complex and high performance imaging demands of the document, medical and Internet industries. LEADTOOLS can support projects requiring raster imaging, vector imaging or multimedia support, or even a combination of all three.
**Sax.NET**
Sax.NET, formerly Sax Software, specializes in developing components for communications, data compression, scripting, and user interfaces. Sax.NET was founded six years ago and is located in Eugene, Oregon.
**Xceed Software Inc.**
Xceed Software Inc. creates, markets and distributes software components for Microsoft Windows developers. Since its launch in 1994, Xceed has been devoted exclusively to the Microsoft platform. The company’s very first product, Xceed Zip Compression Library, was built for Microsoft Visual Basic 3.0 and has been migrated to every Microsoft platform since, including ActiveX and the .NET Framework.
**Footnotes:**
1 SAVE-IT consists of an enhanced and proven three-pronged commercial approach to establish the business drivers for reuse of software assets inside of an organization, and a scalable asset rich infrastructure to institutionalize reuse. The customizable solution may be packaged according to a customer’s needs. It differentiates itself in the market on three proven levels comprising: SAVE-IT™ Process™, SAVE-IT™ Catalog™, and SAVE-IT™ Content™.
3 ComponentSource has used industry averages to assign value to COTS components. For more information on these please contact press@componentsource.com.
**Revision History:** First Published: August 27, 2002. May contain copyrighted data previously published and owned by ComponentSource.
**About ComponentSource**
ComponentSource is the world’s largest marketplace and community for reusable software components for all platforms, and is first to market as a Software Asset Value Provider with the launch of SAVE-IT, Software Asset Value Engineering in Information Technology. With seven years’ experience at the helm of the component industry, ComponentSource is able to transfer its experience in running the world’s largest reuse center on its public marketplace to the corporate environment. SAVE-IT is a mature three-pronged approach that establishes effective enterprise-scale software reuse and is the backbone technology for the National Software Component Exchange and private sector customers worldwide. The respected barometer for the component industry, ComponentSource pioneered the open market for reusable software components in 1995, and continues to drive the market through its award-winning e-business model and groundbreaking work to establish the first widely accepted reusable component standard. A global e-business with customers in over 110 countries, ComponentSource is headquartered in Atlanta and has offices in Reading, England. For more information, please visit www.componentsource.com.