Active Resource Management for Declarative Data-Flow Processing
Grelck, C.U.; Gijsbers, E.J.
Published in:
Preproceedings of the 15th Symposium on Trends in Functional Programming (TFP2014)
Jurriaan Hage (editor)
Active Resource Management
for Declarative Data-Flow Processing
(Extended Abstract)
Clemens Grelck\textsuperscript{1} and Bert Gijsbers\textsuperscript{2,3}
\textsuperscript{1} University of Amsterdam
Amsterdam, Netherlands
c.grelck@uva.nl
\textsuperscript{2} Ghent University
Ghent, Belgium
bert.gijsbers@ugent.be
Abstract. S-Net is a declarative asynchronous data-flow coordination language. Like many other high-level multi-core programming approaches, the S-Net runtime system makes use of light-weight task abstractions that are automatically mapped to a set of heavy-weight kernel threads for execution. The number of kernel threads is typically motivated by the number of cores in the hardware. We argue that such a fixed choice of kernel threads is suboptimal in two scenarios. Firstly, an application may temporarily expose less concurrency than the underlying hardware offers. In this case the cores waste energy. Secondly, the number of hardware cores effectively available to an application may dynamically change in multi-application and/or multi-user environments. This leads to an over-approximation of the available hardware by individual applications, costly time slicing by the operating system and, as a consequence, both wasted energy and lost performance. We propose an active resource management layer for S-Net that effectively overcomes these issues.
1 Introduction
S-Net \cite{Grelck2008, Grelck2010} is a declarative coordination language whose design thoroughly avoids the intertwining of computational and organizational aspects. S-Net achieves a near-complete separation of the concern of writing sequential application building blocks (i.e. application engineering) from the concern of composing these building blocks to form a parallel application (i.e. concurrency engineering). S-Net defines the coordination behaviour of networks of asynchronous, stateless components and their orderly interconnection via typed streams. We deliberately restrict S-Net to coordination aspects and leave the specification of the concrete operational behaviour of basic components, named boxes, to conventional programming languages, primarily C and the purely functional, data-parallel array language SAC \cite{SAC2002}.
\textsuperscript{3} This work was performed while Bert Gijsbers was still with the University of Amsterdam.
An S-Net box is connected to the outside world by two typed streams, a single input stream and a single output stream. Boxes execute fully asynchronously: as soon as data is available on the input stream, a box may start computing. The operational behaviour is determined by a stream transformer function that maps a single data item from the input stream to a (possibly empty) sequence of data items on the output stream. Boxes are free of internal state, which facilitates dynamic reconfiguration and resource mapping, including elastic box replication. S-Net effectively implements a macro data flow model: macro because boxes do not normally represent basic operations but rather individually non-trivial computations.
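To make the stream transformer view concrete, the following sketch models a box as a C function mapping one input record to a sequence of outputs via an emit callback. All names (`record_t`, `emit_fn`, `example_box`) are illustrative assumptions of ours; the actual S-Net box interface differs.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of a box as a stream transformer: one input
 * record maps to a (possibly empty) sequence of output records,
 * delivered through an emit callback. Illustrative only; this is
 * not the actual S-Net box interface. */
typedef struct { int value; } record_t;

typedef void (*emit_fn)(record_t out, void *sink);

/* A stateless example box: splits one input record into one output
 * record per unit of its value. */
static void example_box(record_t in, emit_fn emit, void *sink) {
    for (int i = 0; i < in.value; ++i) {
        record_t out = { i };
        emit(out, sink);
    }
}

/* A sink that merely counts the records a box produces. */
static void count_sink(record_t out, void *sink) {
    (void)out;
    ++*(int *)sink;
}
```

Note that the box keeps no state between invocations; everything it produces is a function of the single input record, which is what makes replication and re-mapping safe.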
While the original S-Net runtime system [4] merely served as a proof of concept for the macro data flow approach as such, we recently developed the novel Front runtime system [5] that achieves very competitive runtime performance [6]. Whereas the original proof-of-concept design more or less directly implements the operational semantics of S-Net [7] (i.e. a system of asynchronous components, each implemented by a kernel thread, that communicate via bounded buffers), the Front runtime system employs a fixed number of kernel threads as a software abstraction of the underlying hardware. As a rule of thumb, the number of kernel threads equals the number of cores in the system, or a deliberately chosen subset thereof.
We argue that any fixed number of kernel threads used throughout a program run is suboptimal for two reasons. Firstly, we waste energy by operating all computing resources whenever the application effectively exposes less concurrency than the execution architecture provides. Secondly, in typical multi-application or even multi-user environments we cannot expect any single application to have exclusive access to the hardware resources. Consequently, applications compete for resources in an uncontrolled and non-cooperative way. This leads to time slicing in the operating system and thus to suboptimal performance of each application (assuming non-interactive, compute-oriented applications).
The contribution of this paper is the extension of the Front runtime system by active resource management. A resource management server dynamically allocates execution resources to a running S-Net program. The (fine-grained) tasks managed by the Front runtime system are automatically mapped to the dynamically varying number of effectively available kernel threads. Their number is continuously adapted to the effective level of concurrency exposed by the running S-Net streaming network.
In this way, we actively control the energy consumption of a system and reduce the energy footprint of an S-Net application compared to greedy resource utilisation, assuming that the underlying operating system automatically reduces the clock frequency and potentially the voltage of underutilised processors and cores or switches them off entirely. Furthermore, we create the means to simultaneously run multiple independent and mutually unaware S-Net applications on the same set of resources by continuously negotiating resource distribution proportional to demands.
2 The FRONT runtime system
The FRONT runtime system [5] is characterised by a fixed number of worker kernel threads that run S-Net components (boxes), which are activated by the presence of input data. Instead of costly roaming of the dynamic streaming network of asynchronous components in search of such activated boxes, a FRONT worker thread maintains a private queue of activations that it continuously processes. Execution of a box yields a stream of output data that the worker thread inserts into its own work queue. When a worker is confronted with an empty work queue, it starts to actively look for work elsewhere. One place to do so is the global input stream of the S-Net streaming network. Another option is to steal work from other worker threads, similar to other work stealing runtimes as pioneered by Cilk [8]. We employ a hierarchical organisation of worker threads in order to accelerate the detection of absence of work across all or many workers.
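The scheduling priorities just described (private queue first, then global input, then stealing) can be sketched roughly as follows. The ring-buffer queue and single-victim stealing are simplifications of our own, not the actual FRONT data structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Schematic sketch of one scheduling step of a FRONT-style worker.
 * Queues are plain ring buffers; names are illustrative. */
#define QCAP 64

typedef struct {
    int items[QCAP];
    size_t head, tail;          /* head == tail means empty */
} queue_t;

static bool q_pop(queue_t *q, int *out) {
    if (q->head == q->tail) return false;
    *out = q->items[q->head++ % QCAP];
    return true;
}

static void q_push(queue_t *q, int item) {
    q->items[q->tail++ % QCAP] = item;
}

/* One step: prefer private work, then the global input stream,
 * then stealing from a sibling worker's queue. */
static bool worker_step(queue_t *self, queue_t *global, queue_t *victim,
                        int *task) {
    if (q_pop(self, task)) return true;    /* private queue      */
    if (q_pop(global, task)) return true;  /* global input       */
    return q_pop(victim, task);            /* steal from sibling */
}
```

A real implementation would of course synchronise the steal path and choose victims via the hierarchical worker organisation mentioned above.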
3 Resource management server
The resource management server is a system service that dynamically allocates execution resources to running programs, more precisely to S-Net streaming networks of asynchronous components on demand. Whereas the FRONT runtime system was originally based on a configurable but dynamically constant number of kernel threads, we now relax this restriction and make the number of worker threads variable over the entire program runtime. A dedicated resource server (thread) is responsible for dynamically spawning and terminating worker threads as well as for binding worker threads to execution resources like processor cores, hyperthreads or hardware thread contexts, depending on the architecture being used. We illustrate our system architecture in Fig. 1.
Fig. 1: Resource server architecture for FRONT runtime systems
Upon program startup only the resource server thread is active; this is the master thread of the process. The resource server thread identifies the hardware architecture the process is running on by means of the hwloc utility. Optionally, the number of cores or hardware threads to be effectively used can be restricted by the user; this is primarily meant as a means for experimentation, not for production use. Next, the resource server sets up the static property graph, which is shared by all worker threads. Once setup is complete, the resource server launches the first worker thread.
The worker thread executes its standard work stealing procedure. Since its work queue is initially empty, it reads the global input stream and thus creates the first data item in the system. This item is then processed as usual in FRONT, which triggers the creation of further data items to be put into the local work queue.
Creation (and termination) of worker threads is controlled by the resource server making use of two counters, or better, resource level indicators. The first is simply the number of currently active worker threads, which is initially zero. The second is a measure of demand for compute power: it reflects the number of work queues in the system. This is not the same as the number of threads because there is one further work queue not associated with any of the workers: the global input of the S-Net streaming network. Thus, the demand indicator is initially set to one. Both resource level indicators are restricted to the range between zero and the total number of hardware execution resources found in the system.
If the demand for computing resources is greater than the number of workers (i.e. the number of currently employed computing resources), the resource server spawns an additional worker thread. Initially, this condition holds trivially. The creation of an additional worker thread temporarily brings the (numerical) demand for resources into an equilibrium with the number of actively used resources. Before increasing the demand the new worker thread must actually find some work to do. In particular during the startup phase of an S-Net streaming network, this usually happens by reading another item from the global input stream. In general, the new thread could alternatively steal existing work from other threads. In any case, once doing productive work, the worker signals this to the resource server, and the resource server increments the demand level indicator, unless demand (and hence resource use) has already reached the maximum for the given architecture.
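The interplay of the two indicators can be sketched as follows; the struct and function names are our illustrative choices, not the actual FRONT code.

```c
#include <assert.h>

/* Sketch of the resource server's spawn/acknowledge logic: spawn a
 * worker while demand exceeds the worker count, raise demand (capped
 * at the machine size) when a worker reports productive work, and
 * lower it when a worker finds nothing to do. Illustrative names. */
typedef struct {
    int workers;   /* currently active worker threads              */
    int demand;    /* demand indicator, starts at 1 (global input)  */
    int nprocs;    /* hardware execution resources in the system    */
} server_t;

/* Returns 1 if a new worker thread is spawned. */
static int server_maybe_spawn(server_t *s) {
    if (s->demand > s->workers && s->workers < s->nprocs) {
        ++s->workers;  /* demand and workers back in equilibrium */
        return 1;
    }
    return 0;
}

/* A worker found productive work: increase demand, up to nprocs. */
static void server_on_work_found(server_t *s) {
    if (s->demand < s->nprocs) ++s->demand;
}

/* A worker found nothing to do anywhere: demand drops. */
static void server_on_idle(server_t *s) {
    if (s->demand > 0) --s->demand;
}
```

Starting from `workers = 0, demand = 1`, the spawn condition holds trivially, the first worker is created, and each report of productive work re-establishes the imbalance that triggers the next spawn, exactly the ramp-up described above.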
This procedure guarantees a smooth and efficient organisation of the ramp up phase. As a standard scenario we assume the availability of considerable input data on the global input stream as well as a non-trivial amount of initial computation on each data item. In this case, we effectively overlap (the overhead of) worker thread creation with reading data from global input and its processing. Moreover, only one worker thread at a time attempts to read the global input stream, which avoids costly synchronisation upon accessing this device.
Executing the FRONT work stealing model potentially leads worker threads to a state of unemployment. With the local work queue empty, no new data items on the global input, and nothing to steal from other workers, there is nothing left for a worker thread to do. The worker signals this state to the resource server, which in turn reduces the demand level indicator by one. The worker thread does not immediately terminate, because we would like to avoid costly repeated termination and re-creation of worker threads in the not uncommon scenario of oscillating resource demand. The worker thread does, however, effectively terminate with a configurable delay following an extended period of inactivity.
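The delayed-termination rule amounts to a small piece of bookkeeping per worker; the names and the tick-based formulation here are our illustrative simplification.

```c
#include <assert.h>

/* Sketch of delayed worker termination: an idle worker only exits
 * after 'idle_grace' consecutive idle checks, so oscillating demand
 * does not cause costly thread churn. Illustrative names. */
typedef struct {
    int idle_ticks;  /* consecutive checks with nothing to do      */
    int idle_grace;  /* configurable delay before termination      */
} worker_state_t;

/* Called once per check; returns 1 when the worker should exit. */
static int worker_should_terminate(worker_state_t *w, int found_work) {
    if (found_work) {
        w->idle_ticks = 0;   /* any work resets the countdown */
        return 0;
    }
    return ++w->idle_ticks >= w->idle_grace;
}
```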
4 Energy consumption
Effective application-level software control over energy consumption parameters such as clock frequency and voltage is still in its infancy. While such features exist on some architectures, e.g. Intel's Single Chip Cloud Computer (SCC) [9, 10], portable availability and control of these features are still to come.
As a consequence, we opted for indirect control over energy consumption and rely on corresponding support in the operating system for automatic clock frequency and potentially voltage scaling. Most commonly used architectures and operating systems support this today. Originally motivated by the needs of battery-powered devices like laptops and notebooks, these features are nowadays likewise used in server installations to avoid wasting energy when compute power is temporarily not required.
We make use of these facilities by creating worker threads step-wise in a demand-driven manner and bind these threads to run on hardware resources as concentrated as possible. For example, on a dual-processor, quad-core, twice hyperthreaded system we would start at most 16 worker threads. While ramping up the number of active worker threads we first fill the hyperthreads of one core, then the cores of one processor, and only when the number of workers exceeds eight, we make use of the second processor. This policy allows the operating system to keep the second processor at the lowest possible clock frequency or even to keep it off completely until we can indeed make efficient use of it.
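For the regular hierarchy of the example (processors, cores, hyperthreads), this "as concentrated as possible" placement is simple index arithmetic; the sketch below is our illustration of the policy, not the actual hwloc-based binding code.

```c
#include <assert.h>

/* Sketch of the compact binding policy: for a machine with
 * procs x cores x hyperthreads, worker k is placed so that the
 * hyperthreads of one core fill up first, then the cores of one
 * processor, then the next processor. Illustrative only. */
typedef struct { int proc, core, ht; } placement_t;

static placement_t bind_worker(int k, int cores_per_proc, int ht_per_core) {
    placement_t p;
    p.ht   = k % ht_per_core;                         /* innermost level  */
    p.core = (k / ht_per_core) % cores_per_proc;      /* then cores       */
    p.proc = k / (ht_per_core * cores_per_proc);      /* processors last  */
    return p;
}
```

On the dual-processor, quad-core, twice hyperthreaded example, worker 8 is the first to land on the second processor, matching the policy described above.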
While we only ramp up the number of worker threads on-demand as computational needs grow within the S-Net streaming network, we also reduce the number of workers when computational needs decrease. This fits well with our work stealing based runtime system organisation. If a worker runs out of private work, i.e. its work queue becomes empty, it first tries to get hold of the input device and import new data items from the global input stream. If that fails, the worker turns into a thief and tries to obtain work from other workers’ work queues. If that also fails, it must be concluded that there is at least currently no useful work to do and the worker terminates. By doing so the worker releases the corresponding hardware resource and, thus, gives the operating system the opportunity to reduce its energy consumption by reducing clock frequency and/or voltage or by shutting it down entirely.
While it is fairly straightforward during worker thread creation to incrementally invade the available hierarchical execution resources, worker thread termination as described above is bound to result in a patchwork distribution of active workers over hardware resources over time. This would render the energy-saving capacities of the operating system largely ineffective. To overcome this shortcoming, the resource server continuously monitors the allocation of worker threads to hardware resources and rebinds the workers as needed.
5 Multiple independent applications
The next step in advancing the concept of resource management servers is to address multiple independent and mutually unaware applications (or instances thereof) running at overlapping intervals of time on the same set of execution resources. Fig. 2 illustrates our approach with two applications. The role of the resource management server as introduced in the previous section is split into two disjoint parts: a local resource server per application (process) manages the worker threads of the S-Net runtime system and adapts the number and core-binding of the workers as described before.
The second part of multi-application resource management lies with a separate process that we coined the meta resource server. This meta resource server is started prior to any S-Net-related application process. It is in exclusive control of all hardware execution resources of the given system. We deliberately ignore the underlying operating system here as well as potentially running further applications unaware of our resource management model. Whenever a local resource server has reason to spawn another worker thread, in the multi-application scenario it first must contact the meta resource server to obtain another execution resource. The meta server either replies with a concrete core identifier or it does not reply at all. In the former case the local resource server of the corresponding application spawns another worker thread and binds it to the given core. In the latter case the local resource server simply does nothing, which means that the number of execution resources currently occupied by this application remains unmodified.
As said before, the meta resource server is in control of all execution resources and decides which application can make use of which cores. With a single application (instance) the system behaves almost exactly as described in the previous section. The local resource server, assuming that the application exposes ample concurrency, incrementally obtains all available resources on the compute node. Only the additional inter-process communication marginally slows down this process.
Let us look at the more interesting scenario of two applications that both expose sufficient concurrency to make use of the entire compute server by themselves. One is started first and obtains one core after the other until it occupies the entire system.
Now, we start the other application. At this point we must admit that the meta resource server as well as the local resource servers are scheduled preemptively by the operating system. In other words, they are not in possession of an exclusive core, and neither are the worker threads. While we guarantee that no two worker threads are bound to the same core at the same time, resource management servers may well interfere with worker execution. With large numbers of cores it may prove more suitable in the future to reserve particular cores for resource management, but with the still fairly low core counts representative today we choose the above solution in order to avoid wasting considerable computing resources. Our general underlying assumption here is that time spent on any form of resource management is negligible compared with the actual computing.
Coming back to our example, all cores are in “exclusive” use by the first application when we start the second application. Hence, we effectively only start the second application’s local resource server, which in turn contacts the
/* Excerpt (cf. Fig. 3): distribute host->nprocs cores over num_clients
   applications in proportion to their demand. Each client with any
   demand first receives one guaranteed core; the rest is handed out by
   integer proportion, and leftover cores go to the clients with the
   largest fractional remainders. */
for (i = 0; i < num_clients; ++i) {
    client = all[i];
    if (client->local_workload >= 1) {   /* client demands at least one core */
        ++num_positives;
        total_load += client->local_workload;
        portions[i] = 1;                 /* guaranteed first core */
    } else portions[i] = 0;
    remains[i] = 0.0;
}
assert(host->nprocs < total_load);       /* only reached when demand exceeds supply */
for (i = 0; i < num_clients; ++i) {
    client = all[i];
    if (client->local_workload >= 2) {
        /* integer share of the remaining cores, proportional to the
           demand beyond the guaranteed first core */
        portions[i] += (client->local_workload - 1) *
                       (host->nprocs - num_positives) /
                       (total_load - num_positives);
        /* fractional remainder lost to the integer division above */
        remains[i] = ((double)((client->local_workload - 1) *
                               (host->nprocs - num_positives)))
                     / ((double)(total_load - num_positives))
                     - (double)(portions[i] - 1);
    }
    num_assigned += portions[i];
}
/* hand out any cores still unassigned to the largest remainders */
while (num_assigned < host->nprocs) {
    p = 0;
    for (i = 1; i < num_clients; ++i) {
        if (remains[i] > remains[p]) p = i;
    }
    if (remains[p] > 0) {
        portions[p] += 1;
        num_assigned += 1;
        remains[p] = 0.0;
    } else break;
}
Fig. 3: Algorithm to divide resources between independent applications proportionally to their resource demand
meta resource server via inter-process communication to ask for a computing core. Since the meta resource server has no such core at hand, it first needs to get one back from another application. To determine the relative need for computing resources the meta resource server compares two numbers for each application:
a) the number of currently allocated cores;
b) the demand for cores, i.e. how many cores the application has asked for.
The quotient between the latter and the former determines the relative need for cores.
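This quotient rule can be sketched as follows; the function name and the infinity convention for zero allocation are our illustrative choices.

```c
#include <assert.h>
#include <math.h>

/* Sketch of the relative-need comparison: demand divided by currently
 * allocated cores, with zero allocation treated as infinite need so
 * that a freshly started application is served first. Illustrative. */
static double demand_quotient(int demand, int allocated) {
    if (allocated == 0) return INFINITY;  /* new application */
    return (double)demand / (double)allocated;
}
```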
In our running example and assuming an 8-core system, the first application has a demand quotient of $\frac{9}{8}$ because it currently occupies all eight cores but asked for one more core (we assume ample internal concurrency). The second application has a demand quotient of $\frac{1}{0}$, which we interpret as infinitely high. Thus, a new application that has been started but does not yet have any execution resources has a very high relative demand. The meta resource server goes back to the first application and withdraws one of the cores previously allocated to it. The local resource server decides which worker thread to terminate and empties that thread's work queue, which is simply appended to another work queue. The worker thread is not preemptively terminated; we wait until it finishes its current box computation. After that the worker thread tries to retrieve the next read license from its work queue, but finds its work queue removed. The thread thus signals its end to the local resource server and terminates. The local resource server immediately communicates the availability of the corresponding core back to the meta resource server. The meta resource server allocates that core to the second application, which now starts to ramp up its execution.
Assuming that the second application likewise exposes ample concurrency, it will soon ask the meta resource server for further execution resources. The meta resource server, by means of the demand quotients, step by step takes execution resources away from the first application and gives them to the second until an equilibrium is reached. In order to avoid moving resources back and forth uselessly, the meta resource server makes sure that moving one execution resource from one application to another does not invert the relative demands.
If the first application terminates at some point in time while the second is still running, all vacated resources will be moved over to the second application. Fig. 3 shows an excerpt of the relevant algorithm to ensure proportional resource distribution among applications.
6 Related work
still missing
7 Conclusion and future work
We presented the extension of the FRONT runtime system for asynchronous stream processing by explicit resource servers. They ensure an active management of execution resources, i.e. processors, cores, hyperthreads of a given system. Instead of running an S-Net streaming network on all available resources (or some explicitly defined subset thereof), we dynamically adjust the actually employed resources to the continuously varying demand of the S-Net streaming application.
Our motivation for this extension is essentially twofold. Firstly, we aim at reducing the energy footprint of streaming applications by shutting down system resources that we cannot, at times, make effective use of due to limitations in the exposed concurrency. Secondly, we aim at efficiently mediating the available resources among several independent and mutually unaware S-Net streaming applications.
In the future we plan to run extensive experiments demonstrating the positive effect on both combined performance of multiple applications and energy footprint.
While the concrete motivation of our work is fueled by S-Net component coordination and asynchronous streaming networks, the general ideas, if not even certain parts of the implementation, can relatively straightforwardly be carried over to a range of runtime systems for other high-level parallel languages that share with FRONT the common idea of using a fixed set of kernel threads as an abstraction of the available multi-core hardware, and thus suffer from exactly the same potential shortcomings with respect to energy consumption and multi-application coordination.
References
Advanced OpenACC
John Urbanic
Parallel Computing Scientist
Pittsburgh Supercomputing Center
Outline
- Loop Directives
- Data Declaration Directives
- Data Regions Directives
- Cache Directives
- Wait / Update Directives
- Runtime Library Routines
- Environment Variables
Outline
- How OpenACC work is organized
- Gangs/Workers/Threads
- kernels
- parallel regions
- Things you didn’t know were missing (OpenACC 2.0)
- Procedure calls
- Nested Parallelism
- More complex hardware configurations
- Device Specific Tuning
- Multi-threading and multiple devices
- Alternative threading approaches
- Using asynchronous features
- Manual Data Management
- Profiling
Part of the awesomeness of OpenACC has been that you have been able to ignore the hardware specifics. But, now that you know a little bit more about CUDA/GPU architecture, you might suspect that you can give the compiler still more help in optimizing. In particular, you might know the hardware specifics of a particular model. The compiler might only know which “family” it is compiling for (Fermi, Kepler, etc.).
Indeed, the OpenACC spec has some clauses to target architecture hierarchies, and not just GPUs (think Intel MIC). Let’s see how they map to what we know about GPUs.
The OpenACC execution model has three levels: **gang**, **worker** and **vector**.
This is supposed to map to any architecture that is a collection of Processing Elements (PEs) where each PE is multithreaded and each thread can execute vector instructions.
Targeting the Architecture
As we said, OpenACC assumes a device will contain multiple processing elements (PE) that run in parallel. Each PE also has the ability to efficiently perform vector-like operations. For NVIDIA GPUs, it is reasonable to think of a PE as a streaming multiprocessor (SM). Then an OpenACC gang is a threadblock, a worker is effectively a warp, and an OpenACC vector is a CUDA thread. Phi, or similar Intel SMP architectures also map in a logical, but different, fashion.
<table>
<thead>
<tr>
<th></th>
<th>GPU</th>
<th>SMP (Phi)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Vector</td>
<td>Thread</td>
<td>SSE Vector</td>
</tr>
<tr>
<td>Worker</td>
<td>Warp</td>
<td>Core</td>
</tr>
<tr>
<td>Gang</td>
<td>SM</td>
<td>CPU</td>
</tr>
</tbody>
</table>
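To connect the table to actual directives, here is a minimal sketch naming all three levels explicitly on a loop nest. The clause placement is illustrative rather than tuned, and with a non-OpenACC compiler the pragmas are simply ignored, so the loop runs serially and the result can still be checked.

```c
#include <assert.h>

#define N 64

/* Gangs over the outer loop (thread blocks on a GPU), workers and
 * vector lanes over the inner loop (warps and threads). Without an
 * OpenACC compiler the pragmas are ignored and this is a plain
 * serial loop nest. */
static void scale(float a[N][N], float s) {
    #pragma acc parallel loop gang
    for (int i = 0; i < N; ++i) {
        #pragma acc loop worker vector
        for (int j = 0; j < N; ++j)
            a[i][j] *= s;
    }
}
```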
NVIDIA GPU Task Granularity (Take Notes!)
- Each kernel is executed on one device
- Multiple kernels can execute on a device at one time
- Each thread is executed by a core
- Each block is executed by one SM and does not migrate
- Several concurrent blocks can reside on one SM depending on the blocks’ memory requirements and the SM’s memory resources
Warps - on Kepler (Still taking notes?)
- Blocks are divided into 32 thread wide units called warps
- Size of warps is implementation specific and can change in the future
- The SM creates, manages, schedules and executes threads at warp granularity
- Each warp consists of 32 threads of contiguous threadIds
- All threads in a warp execute the same instruction
- If threads of a warp diverge the warp serially executes each branch path taken
- When a warp executes an instruction that accesses global memory it coalesces the memory accesses of the threads within the warp into as few transactions as possible
Determining block size - on Kepler (You can stop now)
- 32 thread wide blocks are good for Kepler, since warps are allocated by row first.
- 32 thread wide blocks will mean all threads in a warp are reading and writing contiguous pieces of memory
- Coalescing
- Try to keep total threads in a block to be a multiple of 32 if possible
- Non-multiples of 32 waste some resources & cycles
- Total number of threads in a block: between 256 and 512 is usually a good number.
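The two pieces of integer arithmetic behind this advice — padding a block to a whole number of 32-thread warps, and covering n elements with enough blocks — look like this (the helper names are ours, not part of OpenACC or CUDA):

```c
#include <assert.h>

/* Round a requested thread count up to a multiple of the 32-wide warp,
 * so no partial warp wastes resources and cycles. */
static int round_up_to_warp(int threads) {
    return (threads + 31) / 32 * 32;
}

/* Ceiling division: how many blocks of 'block_size' threads are
 * needed to cover 'n_elements' with one element per thread. */
static int grid_blocks(int n_elements, int block_size) {
    return (n_elements + block_size - 1) / block_size;
}
```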
Determining grid size - on Kepler
Most people start by having each thread do one unit of work. It is usually better to have fewer threads, so that each thread does multiple pieces of work.
What is the limit to how much smaller we can make the number of total blocks?
- We still want to have at least as many threads as can fill the GPU many times over (for example 4 times). That means we need at least $2880 \times 15 \times 4 = \sim 173,000$ threads.
- Experiment by decreasing the number of threads.
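The grid-size arithmetic above can be sketched as a helper that picks a block count when each thread handles several elements (a minimal sketch; the function name and numbers are illustrative, not queried from a device):

```c
#include <assert.h>

/* Number of thread blocks needed to cover n elements when each
   block has threads_per_block threads and each thread processes
   work_per_thread elements (rounded up so nothing is missed). */
static long grid_blocks(long n, long threads_per_block, long work_per_thread)
{
    long per_block = threads_per_block * work_per_thread;
    return (n + per_block - 1) / per_block;
}
```

Increasing `work_per_thread` is the "fewer threads, more work each" experiment the slide suggests.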
Mapping OpenACC to CUDA Threads and Blocks
```c
#pragma acc kernels
for( int i = 0; i < n; ++i )
y[i] += a*x[i];
#pragma acc kernels loop gang(100) vector(128)
for( int i = 0; i < n; ++i )
y[i] += a*x[i];
#pragma acc parallel num_gangs(100) vector_length(128)
{
#pragma acc loop gang vector
for( int i = 0; i < n; ++i ) y[i] += a*x[i];
}
```
- The first (default) version lets the compiler pick the launch configuration (e.g., 16 blocks of 256 threads each).
- The second version requests 100 thread blocks, each with 128 threads, each thread executing one iteration of the loop.
- The third version expresses the same 100 × 128 configuration with the parallel construct.
SAXPY Returns For Some Fine Tuning
The default (will work OK):
```c
#pragma acc kernels loop
for( int i = 0; i < n; ++i )
y[i] += a*x[i];
```
Some suggestions to the compiler:
```c
#pragma acc kernels loop gang(100), vector(128)
for( int i = 0; i < n; ++i )
y[i] += a*x[i];
```
Specifies that the kernel will use 100 thread blocks, each with 128 threads, where each thread executes one iteration of the loop. This beat the default by ~20% last time I tried…
### Rapid Evolution
<table>
<thead>
<tr>
<th>Feature</th>
<th>Fermi GF100</th>
<th>Fermi GF104</th>
<th>Kepler GK104</th>
<th>Kepler GK110</th>
<th>Maxwell GM107</th>
<th>Pascal GP100</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compute Capability</td>
<td>2.0</td>
<td>2.1</td>
<td>3.0</td>
<td>3.5</td>
<td>5.0</td>
<td>6.0</td>
</tr>
<tr>
<td>Threads / Warp</td>
<td>32</td>
<td>32</td>
<td>32</td>
<td>32</td>
<td>32</td>
<td>32</td>
</tr>
<tr>
<td>Max Warps / Multiprocessor</td>
<td>48</td>
<td>48</td>
<td>64</td>
<td>64</td>
<td>64</td>
<td>64</td>
</tr>
<tr>
<td>Max Threads / Multiprocessor</td>
<td>1536</td>
<td>1536</td>
<td>2048</td>
<td>2048</td>
<td>2048</td>
<td>2048</td>
</tr>
<tr>
<td>Max Thread Blocks / Multiprocessor</td>
<td>8</td>
<td>8</td>
<td>16</td>
<td>16</td>
<td>32</td>
<td>32</td>
</tr>
<tr>
<td>32-bit Registers / Multiprocessor</td>
<td>32768</td>
<td>32768</td>
<td>65536</td>
<td>131072</td>
<td>65536</td>
<td></td>
</tr>
<tr>
<td>Max Registers / Thread</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>255</td>
<td>255</td>
<td></td>
</tr>
<tr>
<td>Max Threads / Thread Block</td>
<td>1024</td>
<td>1024</td>
<td>1024</td>
<td>1024</td>
<td>1024</td>
<td>1024</td>
</tr>
<tr>
<td>Shared Memory Size Configurations</td>
<td>16k/48k</td>
<td>16k/48k</td>
<td>16k/32k/48k</td>
<td>16k/32k/48k</td>
<td>16k/32k/48k</td>
<td></td>
</tr>
<tr>
<td>Max X Grid Dimension</td>
<td>2^16</td>
<td>2^16</td>
<td>2^32</td>
<td>2^32</td>
<td>2^32</td>
<td>2^32</td>
</tr>
<tr>
<td>Hyper-Q</td>
<td>No</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Dynamic Parallelism</td>
<td>No</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>
- Do you want to have to keep up with this?
- Maybe the compiler knows more about this than you? Is that possible?
- CUDA programmers do have to worry about all of this, and much more.
- But doesn’t hurt much to try.
Parallel Regions vs. Kernels
We have been using *kernels* thus far, to great effect. However OpenACC allows us to more explicitly control the creation of tasks via the *gang*, *worker* and *vector* clauses. We can do this inside of *parallel* regions.
These approaches come from different backgrounds.
```
PGI Accelerator region --> OpenACC kernels
OpenMP parallel --> OpenACC parallel
```
Parallel Construct
Fortran
```fortran
!$acc parallel [clause …]
structured block
!$acc end parallel
```
C
```c
#pragma acc parallel [clause …]
{ structured block }
```
Clauses
- `if( condition )`
- `async( expression )`
- `num_gangs( expression )`
- `num_workers( expression )`
- `vector_length( expression )`
- `private( list )`
- `firstprivate( list )`
- `reduction( operator: list )`
Also any data clause
Parallel Clauses
- `num_gangs( expression )`: controls how many parallel gangs are created.
- `num_workers( expression )`: controls how many workers are created in each gang.
- `vector_length( expression )`: controls the vector length of each worker.
- `private( list )`: a copy of each variable in list is allocated to each gang.
- `firstprivate( list )`: private variables, initialized from the host.
- `reduction( operator: list )`: private variables, combined across gangs.
Parallel Regions
As in OpenMP, the OpenACC parallel construct creates a number of parallel gangs that immediately begin executing the body of the construct redundantly. When a gang reaches a work-sharing loop, that gang will execute a subset of the loop iterations. One difference between the OpenACC parallel construct and OpenMP is that there is no barrier at the end of a work-sharing loop in a parallel construct.
SAXPY as a parallel region
```c
#pragma acc parallel num_gangs(100), vector_length(128)
{
#pragma acc loop gang, vector
for( int i = 0; i < n; ++i )
y[i] += a*x[i];
}
```
Compare and Contrast
Let’s look at how this plays out in actual code.
This
```c
#pragma acc kernels
{
for( i = 0; i < n; ++i )
a[i] = b[i] + c[i];
}
```
Is the same as
```c
#pragma acc parallel
{
#pragma acc loop
for( i = 0; i < n; ++i )
a[i] = b[i] + c[i];
}
```
Compare and Contrast
But not
```c
#pragma acc parallel
{
for( i = 0; i < n; ++i )
a[i] = b[i] + c[i];
}
```
By leaving out the loop directive we get totally redundant execution of the loop by each gang. This is not desirable.
Parallel Regions vs. Kernels
From these simple examples you could get the impression that simply putting in loop directives everywhere would make parallel regions equivalent to kernels. That is not the case.
The sequence of loops here
```c
#pragma acc kernels
{
for (i = 0; i < n; i++)
    a[i] = b[i] * c[i];
for (i = 1; i < n-1; i++)
    d[i] = a[i-1] + a[i+1];
}
```
Does what you might think. Two kernels are generated and the first completes before the second starts.
A parallel region will work differently
```c
#pragma acc parallel
{
#pragma acc loop
for (i = 0; i < n; i++)
    a[i] = b[i] * c[i];
#pragma acc loop
for (i = 1; i < n-1; i++)
    d[i] = a[i-1] + a[i+1];
}
```
The compiler will start some number of gangs, work-share the iterations of the first loop across those gangs, and work-share the iterations of the second loop across the same gangs. There is no guarantee that the same value of `i` will be executed by the same gang for both loops. In fact, that's likely to be untrue, and some values of `i` will be executed by different gangs between the first and second loop. There is also no synchronization between the first and second loop, so there's no guarantee that the assignment to `a[i]` from the first loop will be complete before its value is fetched by some other gang for the assignment in the second loop.
Parallel Regions vs. Kernels
(Which is best?)
To put it simply, kernels leave more decision making up to the compiler. There is nothing wrong with trusting the compiler (“trust but verify”) and that is probably a reasonable place to start.
If you are an OpenMP programmer, you will notice a strong similarity between the tradeoffs of kernels and regions and that of OpenMP parallel for/do versus parallel regions. We will discuss this later when we talk about OpenMP 4.0.
As you gain experience, you may find that the parallel construct allows you to apply your understanding more explicitly. On the other hand, as the compilers mature, they will also be smarter about just doing the right thing. *History tends to favor this second path heavily.*
OpenACC 2.0 & 2.5
Things you didn’t know were missing.
The latest version of the specification has a lot of improvements. The most anticipated ones remove limitations that you, as new users, might not have known about. However, they may still linger until all of the compilers get up to spec.
- Procedure Calls
- Nested Parallelism
As well as some other things that you might not have thought about
- Device specific tuning
- Multiple host thread support
Don’t be afraid to review the full spec.
Procedure Calls
In OpenACC 1.0, all procedures had to be inlined. This limitation has been removed, but you do need to follow some rules.
```c
#pragma acc routine worker
extern void solver(float* x, int n);
.
.
#pragma acc parallel loop num_gangs(200)
for( int index = 0; index < N; index++ ){
solver( x, n);
.
.
}
```
```c
#pragma acc routine worker
void solver(float* x, int n){
.
.
#pragma acc loop
for( int index = 0; index < n; index++ ){
x[index] = x[index+2] * alpha;
.
.
}
.
}
```
In this case, the directive tells the compiler that “solver” will be a device executable and that it may contain a loop at the worker level. Consequently, no caller may itself use worker-level parallelism around the call.
Nested Parallelism
The previous example had gangs invoking workers. But, it is now possible to have kernels actually launch new kernels.
```c
#pragma acc routine
extern void solver(float* x, int n);
.
#pragma acc parallel loop
for (int index = 0; index < N; index++) {
solver( X, index);
}
```
```c
#pragma acc routine
void solver(float* x, int n){
#pragma acc parallel loop
for (int index = 0; index < n; index++) {
x[index] = x[index+2] * alpha;
}
}
```
Having thousands of lightweight threads launching lightweight threads is probably not the most likely scenario.
Nested Parallelism
This is a more useful case. We have a single thread on the device launching parallelism from its thread.
```c
#pragma acc routine
extern void solver(float* x, int n);
.
.
#pragma acc parallel num_gangs(1)
{
solver( X, n1 );
solver( Y, n2 );
solver( Z, n3 );
}
#pragma acc routine
void solver(float* x, int n) {
#pragma acc parallel loop
for ( int index = 0; index < n; index++ ){
x[index] = x[index+2] * alpha;
.
}
}
```
The objective is to move as much of the application to the accelerator and minimize communication between it and the host.
Device Specific Tuning
I hope from our brief detour into GPU hardware specifics that you have some understanding of how hardware specific these optimizations can be. Maybe one more reason to let the kernels directive do its thing. However, OpenACC does have ways to allow you to account for various hardware details. The most direct is `device_type()`.
```c
#pragma acc parallel loop
for( index = 0; index < n; index++ ){
x[index] += y[index];
solver( x, y, n );
}
```
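A sketch of how `device_type()` can attach target-specific tuning to such a loop (the vector lengths here are guesses one would benchmark, not recommendations, and a plain C compiler will simply ignore the pragmas):

```c
#include <assert.h>

/* Tune vector length per target: clauses after device_type(nvidia)
   apply only to NVIDIA devices; device_type(*) covers everything else. */
void axpy_tuned(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop \
            device_type(nvidia) vector_length(256) \
            device_type(*)      vector_length(128)
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}
```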
Multiple Devices and Multiple Threads
Multiple threads and one device: fine. You are responsible for making sure that the data is on the multi-core host when it needs to be, and on the accelerator when it needs to be there. But, you have those data clauses in hand already (present_or_copy will be crucial), and OpenMP has its necessary synchronization ability.
Multiple threads and multiple devices. One might hope that the compilers will eventually make this transparent, but at the moment you need to:
- Assign threads to devices:
- `omp_get_thread_num`
- `call acc_set_device_num`
- Manually break up the data structure into several pieces:
- `!$acc kernels loop copyin(x(offs(i)+1:offs(i)+nsec),y(offs(i)+1:offs(i)+nsec))`
From an excellent example on page 15 of the PGI 12.6 OpenACC Getting Started Guide.
Asynchronous Behavior
There are synchronization rules associated with each type of loop construct, and some of the data constructs (involving updates and independent data management). You may want to keep them in mind if you drift very far from a *kernels* model. In those cases you have `wait()`, `async()` and `atomic` clauses, directives or APIs to manage your flow. There are several variations of each to accommodate multiple types of conditions to continue (one or multiple waits, test or block).
As data movement can take so much time, overlapping computation by using these commands can be very effective.
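The overlap idea can be sketched as a chunked loop that alternates two async queues (a minimal sketch: the queue numbering and chunk math are assumptions, and the pragmas are no-ops when compiled without OpenACC):

```c
#include <assert.h>

/* Process n elements in nchunks pieces, alternating two async queues
   so the transfer of one chunk can overlap the compute of another. */
void scale_pipelined(int n, int nchunks, float a, float *x)
{
    int chunk = (n + nchunks - 1) / nchunks;
    for (int c = 0; c < nchunks; ++c) {
        int lo = c * chunk;
        int hi = (lo + chunk < n) ? lo + chunk : n;
        #pragma acc update device(x[lo:hi-lo]) async(c % 2)
        #pragma acc parallel loop async(c % 2)
        for (int i = lo; i < hi; ++i)
            x[i] *= a;
        #pragma acc update self(x[lo:hi-lo]) async(c % 2)
    }
    #pragma acc wait   /* block until every queue has drained */
}
```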
Data Management
Again, as you get farther from a simple *kernels* model, you may find yourself needing to manage data transfers in a more explicit manner. You can manually:
- **Create global data:**
- *declare create* (create on host and device, you will probably use *update* to manage)
- *declare device_resident* (create on device only, only accessible in compute regions)
- *declare link* and *declare create pointer* (pointers are created for data to be copied)
- **Create data transfer regions:** *enter data* (in effect until *exit data*).
- Like *copyin*, etc. except that they do not need to apply to a structured block. Can just stick one in some initialization routine.
- **Update data directly:** *update*
You should never find yourself frustrated for lack of control. You can move data at will with the available options. And you can be fearless with the new “OK to copy even if data is already there” default (the old *present_* commands are obsolete).
There are clause, directive and even API versions of these, depending on appropriateness.
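A sketch of the unstructured *enter data* / *exit data* pattern the bullets describe, with the device copy's lifetime tied to calls rather than a structured block (the function names are invented for illustration):

```c
#include <stdlib.h>

/* Allocate a host array and create a matching device copy; the device
   copy persists until field_free, e.g. from init to shutdown. */
float *field_alloc(int n)
{
    float *f = (float *)calloc((size_t)n, sizeof(float));
    #pragma acc enter data copyin(f[0:n])
    return f;
}

void field_free(float *f, int n)
{
    #pragma acc exit data delete(f[0:n])
    free(f);
}
```

Between the two calls, an `update` directive would refresh either copy on demand.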
Profiling
So, how do you recognize these problems (opportunities!) besides the relatively simple timing output we have used in this class?
One benefit of the NVIDIA ecosystem is the large number of tools from the CUDA community that we get to piggyback upon.
The following uses the NVIDIA Visual Profiler which is part of the CUDA Toolkit.
Mandelbrot Code
This is for an OpenACC Mandelbrot set image generation code from NVIDIA. You can grab it at
https://github.com/NVIDIA-OpenACC-Course/nvidia-openacc-course-sources
Step 1: Profile
The first profile shows lots of data transfer time: half of our time is copying, and none of it is overlapped. We are still much faster than the CPU because there is a lot of work.
Broken into blocks with asynchronous transfers, the profile shows pipelining with 32 blocks. (The accompanying chart compares a baseline running in parallel on 16 cores against four successive versions: 1. parallelized, 2. blocked, 3. update added, 4. asynchronous.)
OpenACC Things Not Covered
The OpenACC specification has grown quite accommodating as of Version 2.5. You have already seen some redundancy between directives, clauses and APIs, so I have made no attempt to do “laundry lists” of every option along the way. It would be quite repetitive. I think you are well prepared to glance at the OpenACC Specification and grasp just about all of it.
We have omitted various and sundry peripheral items. Without attempting to be comprehensive, here are a few topics of potential interest to some of you.
- **Environment variables:** Useful for different hardware configurations
- **if clauses, macros and conditional compilation:** allow both runtime and compile time control over host or device control flow.
- **API versions of nearly all directives and clauses**
- **Hybrid programming.** *Works great!* Don’t know how meaningful this is to you...
Some of these examples are derived from excellent explanations by these gentlemen, and more than a little benefit was derived from their expertise.
Michael Wolfe, PGI
Jeff Larkin, NVIDIA
Mark Harris, NVIDIA
Cliff Woolley, NVIDIA
Operator Overloading in Modelica 3.1
Hans Olsson\(^1\), Martin Otter\(^2\), Hilding Elmqvist\(^1\), Dag Brück\(^1\),
\(^1\)Dassault Systèmes, Lund, Sweden (Dynasim)
\(^2\)German Aerospace Centre (DLR), Institute for Robotics and Mechatronics, Germany
Hans.Olsson@3ds.com, Martin.Otter@DLR.de,
Hilding.Elmqvist@3ds.com, Dag.Bruck@3ds.com
Abstract
The constructor and operator overloading introduced in Modelica 3.1 is discussed. The goal is that elementary operators like ‘+’ or ‘*’ can be overloaded for records. This makes it possible to define and use, in a convenient way, complex numbers, polynomials, transfer functions, state space systems, etc. The chosen approach is different to other languages: (a) Only scalar operations need to be overloaded. Array operations are then automatically available, so the growth of the number of overloaded functions is avoided. (b) Automatic type casts between different data types is performed using overloaded constructor functions. Again this reduces the number of overloaded functions. (c) The approach is conservative and only allows overloading if no ambiguity is present, in order to not introduce pitfalls into the language. This is reached by basing the overloading on disjoint sets of matching functions and not on a priority match.
Keywords: overloading, automatic overloading of arrays, overloading without ambiguities.
1 Introduction
Operator overloading is a well known concept in computer science and is available in languages such as Ada (ANSI 1983), C++ (ISO 1998), C#, Mathematica, Matlab and Python. In 2002-2005 the Modelica Association has worked on operator overloading for the Modelica language and several different versions have been designed by different people, especially to avoid some of the known problems of overloading from other languages. The work was then suspended for some years to concentrate on the improved safety in Modelica 3.0. Work has restarted in 2008: Based on a prototype implementation in Dymola and by applying this prototype to the Beta version of the Modelica LinearSystems2 library (Baur et. al. 2009), the 7th design version from 2005 was revised considerably and finally resulted in a version that has been included in Modelica 3.1 (Modelica 2009).
The overloading introduced in Modelica 3.1 is seen as a first step and more features might be introduced later, based on the gained experience. The design is conservative and restrictive in order to reduce the probability to introduce pitfalls in the language. For example, ambiguities are not allowed. This is opposed to other languages where ambiguities are often resolved by priorities in function matches. An important, new feature is that it usually suffices to overload scalar operations and that array operations are automatically mapped to the overloaded scalar operations. The benefit is that explosive growth of the number of overloaded functions to define all possible combinations of data types and number of array dimensions is avoided.
2 Example with Complex numbers
The basic properties of operator overloading in Modelica 3.1 shall first be demonstrated by an example to introduce a user-defined data type Complex. In section 3, the formal rules are defined and design considerations are explained.
Assume a record “Complex” with overloaded scalar operators is available (see below). When using this definition in an interactive environment, e.g., in a Modelica script file that is executed by Dymola (Dymola 2009), then in the command window of Dymola the output as shown in the right part of Figure 1 appears.
From this example it can be seen that the user defined Complex type can hardly be distinguished from a built-in type like Real. In particular, standard array operations can be applied to Complex, although only the scalar operations are overloaded. Type casts from Real or Integer to Complex are also performed automatically (for example, in “a = 2 + 3*j”, 2 is added to the Complex expression “3*j”).
The “essential” difference to a built-in type is the name look-up: If a variable is declared as “Real a”, then it is first determined whether “Real” is a built-in type before performing another lookup. If a variable is declared as “Complex c”, then “Complex” is searched hierarchically from the current scope up to the global scope. For example, if a user introduces an own “Complex” type in the local scope, then this type is used and not the one from the global scope.
For the example above, the following definitions are needed:
```modelica
record Complex
Real re "Real part";
Real im "Imaginary part";
end Complex;
function j
output Complex result;
algorithm
result := Complex(0,1);
end j;
operator 'constructor'
function fromReal
input Real re;
input Real im=0;
output Complex result;
algorithm
result := Complex(re=re, im=im);
end fromReal;
end 'constructor';
```
```modelica
operator '+'
function add
input Complex c1;
input Complex c2;
output Complex result;
algorithm
result := Complex(c1.re + c2.re, c1.im + c2.im);
end add;
end '+';
operator '-'
function negate
input Complex c;
output Complex result;
algorithm
result := Complex(- c.re, - c.im);
end negate;
end '-';
```
Figure 1: Using the overloaded Complex data type in a script file (left) and the output in the command window of Dymola 7.3 (right).
```modelica
// also: '*', '/', '^', '==', '<>'
operator 'String'
function toString
input Complex c;
input String name="j";
output String s;
algorithm
s := String(c.re);
if c.im <> 0 then
s := if c.im > 0 then s + " + " else s + " - ";
s := s + String(abs(c.im)) + name;
end if;
end toString;
end 'String';
```
```modelica
// Scalar operations
j = Complex.j();
a = 2 + 3*j
b = a + 4
c = -b*(a + 2*b)/(a+4)
// Complex arrays
A = [2, -3; 4, 5]
Complex.eigenValues(A)
B = [1+2*j, 3+4*j;
-2*j, 2-4*j]
x = [2+3*j, 1+2*j]
B*x
```
```modelica
function eigenValues
input Real A[:, :];
output Complex ev[size(A, 1)];
import Modelica.Math.Matrices;
protected
Integer nx=size(A, 1);
Real evr[nx, 2];
algorithm
evr := Matrices.eigenValues(A);
for i in 1:nx loop
ev[i] := Complex(evr[i, 1], evr[i, 2]);
end for;
end eigenValues;
```
As can be seen, operator overloading is defined for functions that are defined in a record. The record definition holds a data structure in the usual way (here: two Real variables). Operators are defined in a record with the new construct
```modelica
operator '<name>'
  <function definitions>
end '<name>';
```
where \(<\text{name}>\) is the operator to be overloaded enclosed in apostrophes. This has the advantage that a valid, unique Modelica name is used which is very close to the operator that shall be overloaded.
Inside an “operator”, one or more Modelica functions are defined. There are no particular requirements for these functions, with the exception that every function must have exactly one output argument and that the number of arguments without a default value must be identical to the number of arguments required by the respective operator (e.g., function “add” inside operator ‘+’ must have exactly two arguments without a default value; if there are more arguments, all must have a default value).
The special operator ‘constructor’ serves two purposes: First it gives different record constructors to provide various ways to generate an instance of the record. Second it is used to define automatic type casts. Examples:
```modelica
// Default record constructor:
c1 = Complex(1,2);   // c1 = 1+2j
// Overloaded constructor "fromReal":
c2 = Complex(3);     // c2 = 3+0j
// Automatic type cast due to "fromReal":
c3 = c1 + 5;         // c3 = 6+2j
```
No overloaded operator is defined to add a Complex to a Real. However, a constructor is defined to generate a Complex number from the literal “5” and then there is an overloaded operator to add two Complex numbers.
3 Rules for Overloading
In this section the rules for the operator overloading are stated and design decisions are discussed.
3.1 Overloaded operators
A Modelica record can define the behavior for operations such as constructing, adding, multiplying etc. This is done using the specialized class \texttt{operator} (a restricted class similar to \texttt{package}) comprised of functions implementing different variants of the operation for the record class in which the respective \texttt{operator} definition resides. The overloading is defined in such a way that ambiguities are not allowed and give an error. Furthermore, it is sufficient to define overloading for scalars. Overloaded array operations are automatically deduced from the overloaded scalar operations, if an appropriately overloaded function for arrays is not present. The \texttt{operator} keyword is followed by the name of the operation which can be one of:
```plaintext
'constructor', '+', '-' (both subtraction and negation),
'*', '/', '^', '==', '<', '<=', '>', '>=', '<>',
'and', 'or', 'not', 'String'
```
The functions defined in the operator-class in the record must take at least one argument of this record type as input, except for the constructor-functions which instead must return one component of the record type. All of the functions shall return exactly one output.
The record may also contain additional functions, packages of functions, and declarations of components of the record. To avoid problems with slicing, it is not legal to extend from a record with operators.
The precedence and associativity of the overloaded operators is identical to built-in operators (e.g. ‘*’, has always higher precedence than ‘+’). Definition of new operator symbols is not allowed. These restrictions simplify specification and implementation, and improve translation speed.
Only overloading of the most important operators is defined. In the future, this list might be extended, but the goal is to first get experience with a minimum set of overloaded operators.
3.2 Matching Functions
All functions defined inside the \texttt{operator} class must return one output and may have optional arguments, i.e. they are functions of the form
```plaintext
function f
  input A1 u1;
  ...
  input An un = an;
  ...
  output B y;
algorithm
  ...
end f;
```
Let the vector \(P\) indicate whether argument \(m\) of \(f\) has a default value (\(P_m\) is \texttt{true} if it has one, \texttt{false} otherwise). A call \(f(a_1, a_2, \ldots, a_k, b_1 = w_1, \ldots, b_p = w_p)\) with distinct names \(b_j\) is a valid match for the function \(f\), provided (treating Integer and Real as the same type)

• \( A_i = \text{typeOf}(a_i) \) for \( 1 \leq i \leq k \),

• the names \( b_j = u_{Q_j} \) with \( Q_j > k \) and \( A_{Q_j} = \text{typeOf}(w_j) \) for \( 1 \leq j \leq p \), and

• the union of \( \{ i : 1 \leq i \leq k \} \), \( \{ Q_j : 1 \leq j \leq p \} \), and \( \{ m : P_m \text{ true and } 1 \leq m \leq n \} \) is the set \( \{ i : 1 \leq i \leq n \} \).
This corresponds to the normal treatment of a function call with named arguments, requiring that all inputs have some value given by a positional argument, named argument, or a default value (and that positional and named arguments do not overlap). Note that this only defines a valid call, but does not explicitly define the set of domains.
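The matching rule above can be illustrated with a small Python analogue (this is a hypothetical sketch, not Modelica tooling; types are plain strings, and the "Integer and Real are the same type" simplification is omitted). `params` is an ordered list of `(name, type, has_default)` tuples for the function's inputs.

```python
# Hypothetical sketch of the valid-match rule: every input is covered by a
# positional argument, a named argument, or a default value, with no overlap.

def is_valid_match(params, pos_types, named):
    """params: [(name, type, has_default)]; named: {arg name -> value type}."""
    n, k = len(params), len(pos_types)
    if k > n:
        return False
    # Positional arguments must match the first k parameter types.
    for (name, ptype, _), atype in zip(params, pos_types):
        if ptype != atype:
            return False
    covered = set(range(k))
    index = {name: i for i, (name, _, _) in enumerate(params)}
    # Named arguments must refer to parameters after the positional ones.
    for name, atype in named.items():
        i = index.get(name)
        if i is None or i < k or params[i][1] != atype:
            return False
        covered.add(i)
    # Every remaining parameter must have a default value.
    return all(has_default
               for i, (_, _, has_default) in enumerate(params)
               if i not in covered)

# Two mandatory Complex inputs plus an optional Real, as a toy signature:
params = [("a", "Complex", False), ("b", "Complex", False), ("eps", "Real", True)]
print(is_valid_match(params, ["Complex", "Complex"], {}))     # True
print(is_valid_match(params, ["Complex"], {"b": "Complex"}))  # True
print(is_valid_match(params, ["Real"], {}))                   # False
```

The third call fails because the first positional argument does not have the declared type; supplying `eps` is optional since it has a default.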
3.3 Overloaded constructors and operators
As defined in detail in the Modelica language specification (Modelica 2009), using an operator (such as `+`) goes through a number of steps where a set of functions is found, and if one of them is a matching function it is used; multiple matches are seen as an error.
Array operations are defined in terms of the scalar operation, for multiplication assuming that the scalar elements form a non-commutative ring that does not necessarily have a multiplicative identity (the definition in the specification implicitly assumes that addition is associative and commutative). The operations vector*vector and vector*matrix are explicitly excluded, since there are cases where this does not give the “natural” interpretation, e.g., for complex vectors. In the future it will be possible to extend operations with a complex conjugate (allowing a clean definition of vector*vector) and a zero (allowing e.g. matrix multiplication with zero inner dimensions) without invalidating existing models.
The precise rules for binary operations will now be presented to show the flavor of the definition:
Let \( op \) denote a binary operator like `+` and consider an expression \( a op b \) where \( a \) is of type \( A \) and \( b \) is of type \( B \). An example is \( “2.0 + j” \), where \( “2.0” \) is of type \( \text{Real} \) and \( “j” \) is of type \( \text{Complex} \).
1. If \( A \) and \( B \) are basic types or arrays of such, then the corresponding built-in operation is performed (e.g., for \( “2 + 3” \), the built-in operation for two integer numbers is performed).
2. Otherwise, if there exists exactly one function \( f \) in the union of \( A.op \) and \( B.op \) such that \( f(a, b) \) is a valid match, then \( a\;op\;b \) is evaluated using this function; it is an error if multiple functions match. If \( A \) is not a record type, \( A.op \) is seen as the empty set, and similarly for \( B \). Note that taking the union of the operators ensures that if \( A \) and \( B \) are the same, each function only appears once. In our example, \( “2.0 + j” \) finds no valid match in this step, since Complex.‘+’ expects two Complex arguments; a match is only found after the automatic type cast in step 3.
3. Otherwise, consider the set given by \( f \) in \( A.op \) and a record type \( C \) (different from \( B \)) with a constructor, \( g \), such that \( C.'constructor'.g(b) \) is a valid match, and \( f(a, C.'constructor'.g(b)) \) is a valid match; and another set given by \( f \) in \( B.op \) and a record type \( D \) (different from \( A \)) with a constructor, \( h \), such that \( D.'constructor'.h(a) \) is a valid match and \( f(D.'constructor'.h(a), b) \) is a valid match. If the sum of the sizes of these sets is one this gives the unique match. If the sum of the sizes is larger than one there is an ambiguity which is an error.
Informally, this means: if there is no direct match of “\( a \) op \( b \)”, then an attempt is made to find a match through an automatic type cast, by converting either “\( a \)” or “\( b \)” to the needed type using an appropriate constructor function from one of the record types used as arguments of the overloaded “\( op \)” functions. Example using the Complex definition from above:
```plaintext
Real a;
Complex b;
Complex c = a+b;
// interpreted as:
Complex.'+'(Complex.'constructor'.fromReal(a),b);
```
4. If \( A \) or \( B \) is an array type, then the expression is conceptually evaluated according to the rules for arrays (Modelica 2009, section 10.6). The resulting scalar operations are then treated with 1-3.
Example:
```plaintext
Complex A[2,2], x[2];
Complex b[2] = A*x;
// interpreted as:
//   b[i] = A[i,1]*x[1] + A[i,2]*x[2]  for i in 1:2
// The scalar operations can now be
// treated with the rules for scalar
// operations
```
5. Otherwise the expression is erroneous.
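The resolution order in steps 1-3 can be sketched in Python (a hypothetical illustration with strings instead of real Modelica types, not an implementation from the specification). Here `ops[T]` lists the `(left, right)` signatures of `T`'s overloads of the operator, and `constructors[T]` lists the input types of `T`'s one-argument constructors.

```python
# Hypothetical sketch of resolving `a op b` per steps 1-3 above.

BUILTIN = {"Real", "Integer"}

def resolve(type_a, type_b, ops, constructors):
    # Step 1: both operands of basic type -> built-in operation.
    if type_a in BUILTIN and type_b in BUILTIN:
        return "builtin"
    records = {type_a, type_b} - BUILTIN
    # Step 2: exactly one direct match in the union of A.op and B.op.
    direct = [(rec, f) for rec in records for f in ops.get(rec, [])
              if f == (type_a, type_b)]
    if len(direct) > 1:
        raise TypeError("ambiguous")
    if direct:
        return direct[0]
    # Step 3: a unique match after casting one operand via a constructor
    # of a record type different from that operand's type.
    casts = []
    for rec in records:
        for left, right in ops.get(rec, []):
            if left == type_a and right != type_b \
                    and type_b in constructors.get(right, []):
                casts.append((rec, (left, right), "cast right"))
            if right == type_b and left != type_a \
                    and type_a in constructors.get(left, []):
                casts.append((rec, (left, right), "cast left"))
    if len(casts) > 1:
        raise TypeError("ambiguous")
    if casts:
        return casts[0]
    raise TypeError("no match")

ops = {"Complex": [("Complex", "Complex")]}   # Complex.'+'
ctors = {"Complex": ["Real"]}                 # 'constructor'.fromReal
print(resolve("Real", "Complex", ops, ctors)) # casts 'a' to Complex
```

With these tables, `Real op Complex` fails step 2 (the only overload takes two Complex values) and succeeds in step 3 by casting the Real operand, mirroring the `fromReal` example above.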
3.4 Syntactical simplification
In many cases there is only one function in the operator, either because only one makes sense or because others have not yet been added. This is handled by stating that
```plaintext
operator function '*'
  ...
end '*';
```
is treated in the same way as
```plaintext
operator '*'
  function multiply
    ...
  end multiply;
end '*';
```
The advantage of the shorter form is that it reads more nicely and avoids introducing an arbitrary function name.
However, by stating that they are equivalent, no loss of functionality is introduced; and one can always later add additional overloaded variants in a safe way.
4 Design and future considerations
The overall design is intended as a first step, and intended to allow future extensions in a backward compatible way.
4.1 Operator as a “semi-package” in record
In the current design an operator defines a hierarchical level, grouping together the variants.
The alternative of having multiple overloaded functions with identical names and different signatures (as in C++) was considered; but rejected for several reasons including the fact that it would no longer be possible to uniquely reference a function by name. However, the syntactic simplification introduced avoids redundant levels.
Another alternative would be to have the operators defined in the enclosing scope of the record - similarly to Ada. This would have required a modification of the function call lookup to include some form of argument-dependent name lookup (“Koenig lookup”) as in C++ (ISO 1998, Section 3.4.2). This would be complex to implement, and possibly influence existing function calls (note that in Modelica function calls normally use hierarchical names in contrast to many other languages). Furthermore it was found that it often leads to a two-step hierarchy where a record ‘complex’ was defined in a package ‘complexPackage’ merely containing the record and its operations (cf. “header files”); and this was not deemed attractive.
One of the drawbacks of this design is that new operations on existing types cannot be added without modification of classes, which may not be possible for protection or licensing reasons.
Stroustrup (1994, Chapter 11) describes several related design issues and tradeoffs for C++.
4.2 Symmetric
Binary operators are defined so that operations can either be found in left or right operands. This is needed in order to handle combinations with built-in types in a clean way.
4.3 Few priority levels
For function matching there are only a few levels defined; whereas, e.g. C++ has a much more detailed set of priorities between functions in order to handle type conversions and many arguments for general functions.
A number of such detailed rules were considered in the Modelica design group, but due to limited resources they could not be investigated. Thus such cases currently lead to ambiguities; they could in the future be disambiguated with more detailed rules, but the intent is that everything that is currently unambiguous will stay that way.
4.4 Fewer operators
It is common to define only a few operators and define the others in terms of these. This is done here for array operations, but not for e.g. relational operators (where usually everything is defined in terms of ‘<’ and/or ‘==’). It was not clear how common overloaded relational operators will be in Modelica and for what purpose, and thus this was deemed an issue to be handled in the future.
An important consideration is whether relational operators will be used for general routines such as sorting, as in the Standard Template Library of C++ (where ‘<’ is used more as a sorting order than a mathematical total order), or for more general mathematical routines, e.g. computations with IEEE floating-point numbers including NaN, where such rules do not hold.
4.5 Zero values and complex numbers
As indicated above, matrix multiplication is currently undefined if the inner dimension is zero. A simple solution would be to introduce an operator ‘zero’ having no inputs and returning the additive identity of the class. An important consideration will be whether this operator should be required for matrix multiplication in general, and whether it should be used for other purposes.
Similarly vector*vector could be defined if there existed an operator ‘conjugate’ in the class.
4.6 Hierarchy of conversions
In the future it might be necessary to add another ‘constructor’-operator containing only explicit constructors – i.e. constructors that, as in C++, only will be called if the constructor is explicitly invoked and not for implicit conversions.
Without this, care must be taken when designing multiple records such that conversions form an ordered hierarchy.
At one point in the design it was considered to have conversions in both directions and instead introduce additional operators to disambiguate calls; e.g. have Complex and ComplexPolar that both can be converted automatically from the other one and instead define operations such as addition to disambiguate the results:
```plaintext
record Complex
  operator '+'
    function addComplex
      input Complex a;
      input Complex b;
      output Complex c;
      ...
    end addComplex;
    function addPolar "Example only"
      input Complex a;
      input ComplexPolar b;
      output Complex c;
      ...
    end addPolar;
  end '+';
end Complex;
```
The problem with this approach is that \( c + 2 \) is ambiguous, since it is not clear whether 2 should be converted to polar or Cartesian form before being added. It would be possible to handle this by having an additional operation for addition with Real, but it was deemed that the resulting number of functions grew too much, and a cleaner design was to remove addPolar.
5 Conclusion
Modelica 3.1 was released in May 2009. The operator overloading introduced in this new version was discussed, and examples were given to demonstrate its usage. The introduced operator overloading is seen as a first step, to gain experience with it in Modelica. In particular, it is clear that general function overloading is still missing and has yet to be introduced.
With respect to other languages, the design is restrictive, but has the advantage that it usually suffices to define overloaded scalar operations between the same types. Array operations and operations between different types can then be automatically deduced by a Modelica tool.
6 Acknowledgements
Partial financial support of DLR by BMBF (BMBF Förderkennzeichen: 01IS07022F) for this work within the ITEA project EUROSYSLIB (http://www.itea2.org/public/project_leaflets/EUROSYSLIB_profile_oct-07.pdf) is highly appreciated.
Furthermore, we would like to thank Marcus Baur (DLR) for fruitful discussions.
References
Using Deep Q-Learning to Compare Strategy Ladders of Yahtzee
Philip Vasseur
Advised by James Glenn
December 12, 2019
CONTENTS

1 Introduction
2 Yahtzee Gameplay
  2.1 Categories
    2.1.1 Upper
    2.1.2 Lower
    2.1.3 Bonuses
  2.2 Optimal Play
  2.3 Single vs Two Player Yahtzee
3 Reinforcement Learning
  3.1 Deep Q-Network
  3.2 Double and Dueling DQN
4 Implementation Details
  4.1 Yahtzee Architecture
  4.2 Training Architecture
  4.3 Self-Play Implementation
5 Results
  5.1 Strategy Ladder Comparison
    5.1.1 Quantifiers
    5.1.2 Analysis
  5.2 Self-Play
6 Future Work
  6.1 Solitaire Yahtzee
  6.2 Two Player Yahtzee
7 Conclusion
8 Acknowledgements
References
Abstract—“Bots” playing games is not a new concept, likely going back to the first video games. However, there has been a new wave recently using machine learning to learn to play games at a near optimal level - essentially using neural networks to “solve” games. Depending on the game, this can be relatively straightforward using supervised learning. However, this requires having data for optimal play, which is often not possible due to the sheer complexity of many games. For example, solitaire Yahtzee has this data available, but two player Yahtzee does not due to the massive state space. A recent trend in response to this started with Google Deep Mind in 2013, who used Deep Reinforcement Learning to play various Atari games [4].
This project will apply Deep Reinforcement Learning (specifically Deep Q-Learning) and measure how an agent learns to play Yahtzee in the form of a strategy ladder. A strategy ladder is a way of looking at how the performance of an AI varies with the computational resources it uses. Different sets of rules change how the AI learns, which varies the strategy ladder itself. This project will vary the upper bonus threshold and then attempt to measure how “good” the various strategy ladders are, in essence attempting to find the set of rules which creates the “best” version of Yahtzee. We assume/expect that there is some correlation between strategy ladders for AIs and strategy ladders for humans, meaning that a game with a “good” strategy ladder for an AI indicates that the game is interesting and challenging for humans.
1 INTRODUCTION
This project aims to use Deep Q-learning (DQL) to measure the strategy ladders of various versions of single-player Yahtzee, a dice game invented in the 1940s, as well as implementing DQL self-play for two player Yahtzee. A strategy ladder is a way of looking at how the performance of an AI varies with the computational resources it uses; in this project computational resources will simply be defined as training time for the reinforcement learning. This project will compare the various strategy ladders to determine which set of rules gives the “best” strategy ladder, where a “good” strategy ladder would be one that is not too vertical or too horizontal, but rather gives a good combination of learning over time.
This project specifically compares the strategy ladders of solitaire Yahtzee when varying the upper bonus threshold from 53 to 75.
2 YAHTZEE GAMEPLAY
Yahtzee is a dice game originating in the 1940s which has an incredibly large random element, though it is still largely dependent on skill. The game consists of 13 rounds where the player is awarded some number of points based on the combination of dice they have and the category they choose. In each of the 13 rounds, the player starts off by rolling 5 dice. The first two “moves” in each round are to pick some subroll of the dice to keep and then reroll the rest. The third and final “move” in each round is for the player to pick one of the 13 categories in Yahtzee to score their roll in. At the end of the 13 rounds, the scores from each category are summed up to give the player’s final score. In two player Yahtzee, the player with the highest score is the winner. There is clearly an incredibly large chance element in Yahtzee, namely rolling the dice: even if the player chooses optimally at each state, they can still get a subpar score.
2.1 Categories
The number of points awarded depends on the category chosen and the current dice rolled. Each of the 13 categories can only be used once in the game (though the Yahtzee category has special rules, which will be explained later on). The categories in Yahtzee are split between the upper and lower sections.
2.1.1 Upper: The first six categories are in the upper section, which are simply aces, twos, threes, fours, fives, and sixes. The number of points the player earns from choosing an upper section category is just the sum of the die faces with that number. If, for example, one chooses sixes without any six faces in their current lineup of dice, they would be given zero points and no longer be able to use the sixes category.
2.1.2 Lower: The latter seven categories are in the lower section. These are three of a kind, four of a kind, full house, small straight, large straight, chance, and Yahtzee. The important thing to note is that the points are only awarded if the category is satisfied. So for three/four of a kind the player gets points equivalent to the sum of all the dice if they have three or four of a kind, respectively. This is the same for all of the rest: full house gives 25 points, small/large straights give 30 and 40 points flat respectively, and Yahtzee gives 50 points flat (again assuming the five dice satisfy the category conditions). The unique category in this respect is chance, which gives points equivalent to the sum of all the dice regardless of the combination.
2.1.3 Bonuses: There are two main bonuses in Yahtzee. The first is the upper section bonus, which awards 35 points to the player if the player scores a threshold amount of points in the upper section (63 in the official rules, which is achievable with three dice in each of the categories). The second is the Yahtzee bonus. If the Yahtzee category has already been successfully filled, then a subsequent Yahtzee used to fill any other category also gives an additional 100 points. The final rule is the joker rule. This project used the free choice joker rule, which allows a subsequent Yahtzee to satisfy full-house and small/large straight conditions if the Yahtzee bonus is satisfied and the corresponding upper section category is filled.
The upper section bonus is a heavily weighted bonus in optimal Yahtzee play (see optimal statistics image below) due to the high amount of points it can give and the relative ease to achieve it. While the Yahtzee bonus gives many more points, it is much more difficult to obtain. In this project, we will be adjusting the upper bonus threshold to values between 53 and 75 and comparing the measured strategy ladders.
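The scoring rules above can be sketched compactly. The following Python function is an illustrative scorer for a few of the categories, not code from the project; the category names and signature are made up for the example.

```python
# Hypothetical scorer for a handful of the Yahtzee categories described above.
from collections import Counter

def score(category, dice):
    counts = Counter(dice)
    upper = ("aces", "twos", "threes", "fours", "fives", "sixes")
    if category in upper:
        face = upper.index(category) + 1       # sum of the matching faces
        return face * counts[face]
    if category == "chance":
        return sum(dice)                       # sum of all dice, no condition
    if category == "three_of_a_kind":
        return sum(dice) if max(counts.values()) >= 3 else 0
    if category == "yahtzee":
        return 50 if max(counts.values()) == 5 else 0
    raise ValueError(category)

print(score("sixes", [6, 6, 2, 3, 6]))            # 18
print(score("three_of_a_kind", [4, 4, 4, 2, 1]))  # 15
print(score("yahtzee", [5, 5, 5, 5, 5]))          # 50
```

Choosing a category whose condition is not met scores zero and still consumes the category, which is exactly the trade-off a Yahtzee agent has to learn.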
2.2 Optimal Play
Solitaire (single-player) Yahtzee has a small enough state space that it can be effectively solved and played optimally.
The optimal average score in solitaire Yahtzee including the bonuses and jokers mentioned above is approximately 254.59 with a standard deviation of 59.61 [2]. The variance is quite high as mentioned earlier due to the inherent randomness in rolling the dice and simply the mechanics of the game. The worst possible score that can be earned is 5, while the best is 1575. While both of these are extremely unlikely (and the former you would seemingly have to try to do terribly), it wouldn’t be unusual for a skilled player to get scores of less than 150 when down on their luck.
A simple example shows this: the trivial random Yahtzee player (who chooses a valid category at the start of each turn) averages around 191 points with a standard deviation of 40.37 points. Assuming normal distributions, let \( R \sim N(191.16, 40.37) \) and \( O \sim N(254.59, 59.61) \). We then have \( P(R > O) = P(R - O > 0) = P(D > 0) \approx 0.189 \), where \( D \sim N(-63.43, 71.99) \). This straightforward calculation shows that this extremely naive bot gets a higher score than the optimal solitaire Yahtzee player slightly under 20% of the time.
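The calculation above can be reproduced with nothing but the standard library; the means and standard deviations are the ones quoted in the text.

```python
# Verify P(R > O) for R ~ N(191.16, 40.37), O ~ N(254.59, 59.61).
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of a normal distribution via the error function.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu_r, sd_r = 191.16, 40.37   # random player
mu_o, sd_o = 254.59, 59.61   # optimal player

# D = R - O is normal (assuming R and O independent):
mu_d = mu_r - mu_o
sd_d = sqrt(sd_r**2 + sd_o**2)

p = 1 - normal_cdf(0, mu_d, sd_d)    # P(D > 0)
print(round(sd_d, 2), round(p, 3))   # ≈ 71.99, ≈ 0.189
```

This matches both the quoted standard deviation of \(D\) and the 0.189 probability.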
2.3 Single vs Two Player Yahtzee
While this project mainly focuses on single player, Deep Q-Learning with self-play was implemented for two player Yahtzee. While the difference is subtle, the optimal strategy for single player is different than the optimal strategy for two player Yahtzee. This difference stems from the fact that the goal of two player Yahtzee is not to get the most points possible, but to simply get more points than your opponent. These are obviously correlated, but not perfectly. Moments of distinction can easily be thought of, such as when a player is down by slightly under 100 points on their final turn. The Yahtzee bonus might be their only chance at winning and thus is obviously the best move to go for. However, going for a Yahtzee bonus probably has a lower expected total score (depending on the current roll) because of how rare it is to get a Yahtzee. In this case, the optimal single player strategy and the optimal two player strategy would differ.
The other reason why two player is interesting to look at is because of how large the state space becomes. While single player Yahtzee is solvable, two player Yahtzee nearly squares the state space, making it much too large to solve with other typical methods. There is no labelled data possible to use supervised learning on, but an option like Deep Reinforcement Learning could work quite well theoretically.
3 Reinforcement Learning
3.1 Deep Q-Network
This project uses Deep Reinforcement Learning (a Deep Q-Network specifically) to allow the agent to learn as it plays. Deep reinforcement learning works similarly to normal reinforcement learning, but rather than using a state-action table or another approximator for the Q-value, it uses a neural network. The neural network takes an input of the dimensions of the state and produces an output of the dimensions of the action space, with the output vector representing the Q-values for each action. From regular reinforcement learning, the Bellman update is
\[ Q(s_t, a_t) := Q(s_t, a_t) + \alpha \left( r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right) \]
where \( \alpha \) is the learning rate and \( \gamma \) is the discount factor. The \( Q \) function is iteratively updated in an attempt to have it converge to a point where the target
\[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') \]
equals \( Q(s_t, a_t) \), the current value of the network. To achieve this in the neural network world, the loss function is simply a masked mean-squared error of the Q-value outputs and the target: all output nodes except \( a_t \) are masked to 0, and gradient descent is then performed to minimize
\[ \left( r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right)^2. \]
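A one-step numeric illustration of this masked update (with made-up Q-values, not the project's network) shows how only the taken action's output enters the loss:

```python
# Toy illustration of the masked DQN loss for a single transition.
gamma = 0.9
q_s  = [1.0, 2.0, 0.5]   # Q(s_t, .) from the network
q_s1 = [0.2, 1.5, 0.3]   # Q(s_{t+1}, .)
a_t, r_t = 1, 1.0        # action taken and observed reward

target = r_t + gamma * max(q_s1)   # 1.0 + 0.9 * 1.5 = 2.35
# Masking every output node except a_t reduces the loss to a single term:
loss = (target - q_s[a_t]) ** 2    # (2.35 - 2.0)^2 = 0.1225
print(target, loss)
```

In an actual network the same effect is achieved by multiplying the output vector by a one-hot mask before the mean-squared error, so the gradient only flows through the \(a_t\) output.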
3.2 Double and Dueling DQN
A problem with vanilla DQNs is that they often overestimate action values due to an inherent maximization bias. Q-Learning at its core learns from its own estimates, and therefore it is somewhat clear how this, along with the fact that the target Q-value is maximized, can lead to substantially overestimated Q-values and thus substantially worse performance. The solution to this is the Double DQN [6]. This modification creates an entirely separate target network \( Q' \). \( Q \) is still used to evaluate the Q-values of the actions, but rather than using a target of \( r_t + \gamma \max_{a'} Q(s_{t+1}, a') \) we use
\[ r_t + \gamma Q(s_{t+1}, \text{argmax}_{a'} Q'(s_{t+1}, a')) \]
using the target network \( Q' \) to select the action and updating \( Q' = Q \) every so often. This then disentangles the overestimate bias.
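The decoupled target can be computed in a few lines (hypothetical Q-values; this follows the formula as written above, with \(Q'\) selecting the action and \(Q\) evaluating it):

```python
# Double-DQN target for one transition, with made-up Q-values.
gamma = 0.9
q       = [0.2, 1.5, 0.3]   # online  Q(s_{t+1}, .)
q_prime = [0.4, 0.1, 0.9]   # target  Q'(s_{t+1}, .)
r_t = 1.0

a_star = max(range(len(q_prime)), key=q_prime.__getitem__)  # argmax over Q'
target = r_t + gamma * q[a_star]                            # Q evaluates it
print(a_star, round(target, 2))   # action 2 selected, target = 1.27
```

Note how the selected action (index 2) is not the one the online network rates highest (index 1), which is exactly what breaks the self-reinforcing overestimation.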
Another improvement is the Dueling DQN [8]. This seemed to have less of an effect in practice, but was still useful. The Dueling DQN deals with the issue that not all states are valuable, and it is often “wasteful” in a sense to learn the value of an action at every state if the state itself has little value. For example, there might be situations in Yahtzee where which action is taken does not truly affect things, and the state itself is the important factor. The Dueling DQN solves this by separating \( Q(s, a) \) into \( A(s, a) + V(s) \): a function to calculate the advantage of selecting a specific action plus the value of the state itself. The output though must be the same, so they are combined back in an identifiable way to still output \( Q(s, a) \) and train in the same manner (though now with the network split earlier on). This may seem unimpactful, but in fact allows the network to learn the value of a state without having to learn which actions are or are not valuable.
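One standard identifiable recombination, used in the dueling-DQN paper [8], subtracts the mean advantage so that \(V\) and \(A\) are uniquely determined by \(Q\). The numbers below are made up for illustration:

```python
# Dueling head recombination: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)).
v = 3.0                        # state value V(s)
adv = [1.0, -1.0, 0.0]         # advantages A(s, a)
mean_adv = sum(adv) / len(adv) # 0.0 here
q = [v + a - mean_adv for a in adv]
print(q)                       # [4.0, 2.0, 3.0]
```

Without the mean subtraction, adding a constant to \(V\) and subtracting it from every \(A(s,a)\) would leave \(Q\) unchanged, so the two streams would not be identifiable during training.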
4 Implementation Details
4.1 Yahtzee Architecture
The implementation of single and two player Yahtzee was largely an extension of previously written work by my advisor, Professor James Glenn (for which I am extremely grateful). The state of a game of Yahtzee simply consists of a scoresheet, a roll, and a number of rerolls left, which were already written. To simplify the code, these things were combined into a class \texttt{YahtzeeGame}, which upon initialization allows for a single player game of Yahtzee to be played by repeatedly calling \texttt{game.make_move}.
In order to simplify debugging, this was then wrapped in an OpenAI Gym environment \texttt{YahtzeeEnv}. This extends \texttt{gym.Env}, which is a common API for reinforcement learning problems. The API consists of \texttt{init} to get to the starting state, \texttt{reset} to get back to the starting state and return it, and most importantly \texttt{step}, which takes in an action and returns the next state, reward, and whether the game is done. This allows Yahtzee to be easily swapped, in the reinforcement learning algorithm, with any other simple OpenAI Gym environment. The *YahtzeeEnv* \texttt{step} function takes in a meta action (essentially representing which category the player is aiming for: one to six, n-kind, full house, straights, or chance) instead of a roll or a category, and uses a conversion utility to change the meta action into an action which can be passed into *YahtzeeGame*. The environment wrapper also took in a number of players, allowing for two player Yahtzee, which initialized multiple Yahtzee games and kept track of turns. As the environment was also built for reinforcement learning problems, it had the functionality to calculate and return the state of the game as an input vector, whose size varied depending on how many players were in the game.
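The gym-shaped API described above can be sketched without depending on `gym` itself. The class below is a toy stand-in (one die, reward equal to the face rolled), not the project's `YahtzeeEnv`; it only mirrors the `reset`/`step` contract.

```python
# Minimal environment following the reset/step shape described above.
import random

class TinyDiceEnv:
    """Toy stand-in for YahtzeeEnv: 13 turns, one die, reward = face rolled."""

    def reset(self):
        self.turns = 0
        return self._obs()

    def step(self, action):
        # A real YahtzeeEnv would translate `action` (a meta action) into
        # a reroll or category choice here; this toy ignores it.
        self.turns += 1
        reward = random.randint(1, 6)
        done = self.turns >= 13
        return self._obs(), reward, done

    def _obs(self):
        return [self.turns]

env = TinyDiceEnv()
env.reset()
done, total = False, 0
while not done:
    _, reward, done = env.step(0)
    total += reward
print(13 <= total <= 78)   # True: 13 rolls of one die
```

Because the trainer only touches `reset` and `step`, any environment with this shape can be dropped in, which is the point of the wrapper.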
4.2 Training Architecture
The main workhorse of the project is a class *Trainer*, which is given an environment (such as Yahtzee), an agent (such as the Huskarl DQN agent discussed below), and a number of players, after which one can simply call `trainer.train(num_steps)`. The trainer iterates `num_steps` times, getting an action from the agent based on the current state, pushing the State-Action-Reward-NextState (SARS) tuple into the agent's memory, and then having the agent train by sampling from its memory. While iterating it also checks when the current game finishes, whereupon it resets the environment and calls the `update` function. This simply keeps track of the progress of the agent as it learns, for example validating the agent's current total score in single player every 1000 games of training to keep track of the current best model.
The lowest-level parts of the reinforcement learning for single and two player use a forked version of the deep reinforcement learning library Huskarl. The vanilla version of Huskarl provides an implementation of a Deep Q-Network (DQN) which relies on only a few basic calls - namely `act`, `push`, and `train` - to calculate the action from the current state, to add to the agent's memory, and to train based off the memory, respectively. While this seemed fantastic, in practice the package was quite problematic, which is why for this project we forked it and made some small adjustments. Other than simply tuning various parameters, the implementation of the memory was simplified (as the Huskarl implementation of prioritized experience replay quickly became the bottleneck of training), the save functionality was rewritten (as it was poorly written, used outdated TensorFlow functionality, and didn't save the full state of the network), and a load function was added (as it somehow did not exist beforehand). The original version of Huskarl worked somewhat well - out of the box the single player agent learned to improve from averaging 70 points to averaging around 130 - but after tuning and massively speeding up the runtime, the single player agent was able to go from averaging 70 to averaging 232 in a matter of two hours or so.
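As an illustration, the simplified memory can be as small as a uniform ring buffer (an assumed sketch - the forked Huskarl code may differ in detail):

```python
import random
from collections import deque

# Assumed sketch of the simplified replay memory; uniform sampling avoids
# the per-sample priority bookkeeping that made training slow.
class ReplayMemory:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest tuples drop off first

    def push(self, sars):
        self.buffer.append(sars)

    def sample(self, batch_size):
        # Uniform random batch, capped at the current buffer size.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```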
### 4.3 Self-Play Implementation
Many aspects of the codebase did not need to be changed in order to accommodate training for two player Yahtzee. As the game environment was built with multiplayer in mind, the main adjustments were with the *Trainer* class.
The first step was to create an opponent copy of the agent. This opponent would be updated periodically as the main agent improved; the threshold used in AlphaGo Zero [5] - replace the opponent once the current agent wins 55% of games - served as inspiration for many aspects of the self-play implementation. The two agents went back and forth every turn acting on the environment, though instead of pushing each SARS tuple into the main agent's memory, it was stored in a temporary buffer. This is because there is no current reward: again inspired by AlphaGo Zero, the reward is simply whether the agent has won the game or not. Finally, after the game finishes, the buffer of SARS tuples is updated with the proper rewards, which are then pushed into the main agent's memory to be trained on later.
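The backfill can be sketched as follows (hypothetical helper; the real buffer entries carry more fields):

```python
# Hypothetical sketch of the end-of-game reward backfill: placeholder
# rewards in the temporary buffer are replaced by the game outcome.
def flush_game_buffer(buffer, agent_won, memory):
    final_reward = 1.0 if agent_won else -1.0
    for state, action, _placeholder, next_state in buffer:
        memory.append((state, action, final_reward, next_state))
    buffer.clear()  # ready for the next game
```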
## 5 Results

### 5.1 Strategy Ladder Comparison
In order to reduce the inherent variance of Yahtzee, the strategy ladder was measured not by taking the average score of the current model, but rather the average score of the best model so far. While training generally improved the model, slight changes to the model could result in dramatically worse scores, which made the raw results much less interpretable. Further, rather than measuring the raw scores themselves, the fraction of the optimal score for the respective set of rules was recorded. This was done to normalize the strategy ladders, as changing the upper bonus threshold clearly affects the optimal score. Using code previously written by my advisor, the optimal values for Yahtzee with an upper bonus threshold from 53 to 75 were calculated.
In the end, strategy ladders were obtained by testing the average score of the model over 1000 games after every 1000 games of training, for 160,000 games. If the model scored better than the current best model, then the best model was replaced with the current one (see Figure 3).
5.1.1 Quantifiers: In order to quantitatively compare the strategy ladders, I’ll be using three different measurements of “goodness”. As mentioned earlier, a “good” strategy ladder is one that does not improve too quickly, but also not too slowly. So we are looking for one which takes many small steps rather than a few big steps, but also one which has a high horizontal asymptote (which represents how much it was able to learn in 160,000 games - the final performance “FP”). The first measure of “goodness” is one described in Depth in Strategic Games [3], which picks a constant step size for \( x \) and \( y \) and counts how many times the target \( y \) (which is the previous \( y \) plus the \( y \) step size) is reached while stepping along \( x \). This effectively measures how many decently sizeable steps are taken. We will call this “P1”. The \( y \) step size was chosen arbitrarily as \( \frac{1}{12} \) of what was left to learn - when calculating the metric with step sizes from \( \frac{1}{10} \) to \( \frac{1}{30} \) and averaging, the top three thresholds were still 55, 57, and 53 (averaging 18.1, 17.7, and 17.65 for P1 respectively).
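One possible reading of the P1 procedure, sketched in Python (the exact formulation in [3] may differ; integer data is used here only for clarity):

```python
# Count how many times the moving target (previous y + y_step) is reached
# while stepping through the ladder x_step entries at a time. This is one
# reading of the P1 metric; the formulation in [3] may differ in detail.
def p1(ladder, x_step, y_step):
    count = 0
    target = ladder[0] + y_step
    for i in range(x_step, len(ladder), x_step):
        if ladder[i] >= target:
            count += 1
            target = ladder[i] + y_step  # raise the bar after each step taken
    return count
```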
The second measure of “goodness” is simply \( \text{np.sum}(\text{np.log}(1 + \text{np.diff(data)})) \).
This is essentially just summing up how much is learned each step, but taking the log makes it so taking multiple small steps is weighted much heavier than taking one large step. We will call this “P2”. The third measure of “goodness” is \( (P1/\text{np.max}(P1) + P2/\text{np.max}(P2))/2 \), which is simply a scaled combination of P1 and P2. We will call this “P3”. Calculating these metrics for each set of rules gives the table in Figure 4.
<table>
<thead>
<tr>
<th>Threshold</th>
<th>P1</th>
<th>P2</th>
<th>P3</th>
<th>FP</th>
</tr>
</thead>
<tbody>
<tr>
<td>57.0</td>
<td>14.0</td>
<td>0.5992</td>
<td>0.9929</td>
<td>0.8956</td>
</tr>
<tr>
<td>55.0</td>
<td>14.0</td>
<td>0.5979</td>
<td>0.9918</td>
<td>0.8925</td>
</tr>
<tr>
<td>53.0</td>
<td>13.0</td>
<td>0.6079</td>
<td>0.9643</td>
<td>0.9002</td>
</tr>
<tr>
<td>63.0</td>
<td>13.0</td>
<td>0.6017</td>
<td>0.9592</td>
<td>0.9121</td>
</tr>
<tr>
<td>65.0</td>
<td>13.0</td>
<td>0.577</td>
<td>0.9389</td>
<td>0.8908</td>
</tr>
<tr>
<td>67.0</td>
<td>13.0</td>
<td>0.5591</td>
<td>0.9242</td>
<td>0.8721</td>
</tr>
<tr>
<td>59.0</td>
<td>12.0</td>
<td>0.5806</td>
<td>0.9062</td>
<td>0.882</td>
</tr>
<tr>
<td>73.0</td>
<td>12.0</td>
<td>0.5741</td>
<td>0.9008</td>
<td>0.9011</td>
</tr>
<tr>
<td>71.0</td>
<td>12.0</td>
<td>0.5685</td>
<td>0.8962</td>
<td>0.8931</td>
</tr>
<tr>
<td>61.0</td>
<td>11.0</td>
<td>0.577</td>
<td>0.8675</td>
<td>0.8829</td>
</tr>
<tr>
<td>75.0</td>
<td>11.0</td>
<td>0.56</td>
<td>0.8535</td>
<td>0.888</td>
</tr>
<tr>
<td>69.0</td>
<td>11.0</td>
<td>0.5497</td>
<td>0.845</td>
<td>0.8681</td>
</tr>
</tbody>
</table>
Fig. 4. Performance metrics for each of the thresholds
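For concreteness, the P2 and P3 formulas transcribe directly into numpy (a sketch; `ladder` is one normalized strategy ladder, and `p1s`, `p2s` are assumed arrays of the per-threshold metric values):

```python
import numpy as np

def p2(ladder):
    # Sum of log(1 + per-step improvement): many small steps are rewarded
    # more than one large step of the same total size.
    return np.sum(np.log(1 + np.diff(ladder)))

def p3(p1s, p2s):
    # Scaled combination of P1 and P2 across all thresholds.
    return (p1s / np.max(p1s) + p2s / np.max(p2s)) / 2
```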
Plotting these three metrics versus the final performance gives the scatter plots in Figure 5.
5.1.2 Analysis: The first thing to note is that the learning parameters were not tuned for each threshold value. Rather, they were tuned for a threshold value of 63 and the same parameters were then reused for the rest of the thresholds. This was largely because tuning the parameters was very time consuming and, as such, tuning them for 12 different models would have become overly tedious. As a result, the final performance of threshold 63 must be taken with a grain of salt when comparing.
P1 and P2 show relatively similar results as to which rules create the “best” strategy ladder - they have a correlation coefficient of 0.6985. Thresholds of 53 to 57 seem to have the highest performance measurements and relatively high final performance (with only 63 easily beating them - potentially due to the reason listed above).
This is also supported by the cumulative metric P3. In the plot of P3 we also interestingly see that the very high thresholds seem to provide poor strategy ladders. When qualitatively analyzing the category statistics from the models, this is supported - after a threshold of 67 the emphasis on the upper bonus threshold becomes dramatically lower. With thresholds of 71, 73, and 75 the average upper bonus points were 0.770, 0.245, and 0.070 respectively, while with the official threshold of 63 our model averaged 15.753 upper bonus points. In the variations of the game with a high threshold, the upper bonus becomes much less emphasized, as one would expect. This seemingly makes the game simpler and less “interesting”.
Fig. 3. Plot of the normalized strategy ladders for each variation on the official set of rules
Fig. 5. Results scatter plots of P1, P2, and P3 versus the final performance of the model
Overall, it seems like decreasing the upper bonus threshold in Yahtzee could lead to a “better” strategy ladder - especially when taking into account that if thresholds 53-57 had their parameters tuned, they would have had a higher final performance.
### 5.2 Self-Play
Learning and comparisons in solitaire Yahtzee worked well, though this was not the case with two player Yahtzee. Rather than using a trained solitaire Yahtzee network as a starting point, a fresh neural network was initially used in order to test the waters and see if any learning would occur. While there was a quick jump in the first 1000 games or so from averaging 70 points to averaging 100 points, afterwards no consistent learning occurred. Despite trying various parameters, adjusting what was included in the two player state, including and excluding the non-main agent’s SARS tuples from the memory buffer, and numerous other variations, there seemed to be no substantial change. After jumping up to 100, the network often went back down to 70 or even lower. For the rest of the 80,000 games trained, the main agent rarely won more than 55% of the test games, showing no progress whatsoever.
Slower learning was somewhat expected, as the rewards are much less nuanced - being 1 or -1 depending on whether the agent has won or lost. The dimensions of the input being twice the size likely had a similar effect. Due to these two things, the difficulty of learning likely increased dramatically, and in order to obtain any good results a much larger and/or deeper neural network would be necessary - which would also require much more powerful machines.
## 6 Future Work

### 6.1 Solitaire Yahtzee
While this project was only a semester long, with more time it would have been interesting to look at more variations on the solitaire Yahtzee rules. Comparing a large number of variations and combinations of those variations to attempt to come up with a truly “best” version of Yahtzee would be a challenging, but incredibly interesting project. To do this truly properly, one would also need to spend much more time on each variation actually tuning the model in order to get a better idea of top performance. The tuning in this project was done “by hand”, though a pipeline to automatically test a large number of parameters for some amount of training would be incredibly useful and help make the above possible.
### 6.2 Two Player Yahtzee
With more powerful machines and more time to train, building out an optimal two player Yahtzee agent would be an obvious follow up to the work attempted in this project. The question of how different/how much better an optimal two player Yahtzee agent is compared to an optimal solitaire agent was left unanswered but would be an interesting question to look further into. Along with the research of solitaire Yahtzee, once the above is completed it would be possible to also compare the strategy ladders of two player Yahtzee - looking at how often they win against some constant agent as they learn.
## 7 Conclusion
This project used Deep Q-Learning to measure and compare the strategy ladder of solitaire Yahtzee when varying the upper bonus threshold from 53 to 75. Quantifying the strategy ladders via two performance metrics and a cumulative metric of the two showed that the official rules of Yahtzee might not be the “best” rules, and that lowering the upper bonus threshold to somewhere in the range of 53 to 57 could result in a more “interesting” variation of Yahtzee.
## 8 Acknowledgements
I would like to thank Professor James Glenn for being my advisor throughout this semester. Professor Glenn helped me countless times through a large variety of problems, giving extremely valuable insight and feedback as I progressed through my work (as well as an implementation of Yahtzee to start this project off with) - thank you! I would also like to thank my friends (especially the non-STEM ones) who listened to me ramble about my project for the past few months. Finally, I would like to thank my brothers and parents.
REFERENCES
Integrating an Intelligent Tutoring System for TAOs with Second Life
Jeremy Ludwig, Erik Sincoff, Emilio Remolina, Richard Stottler
Stottler Henke Associates, Inc.
San Mateo, CA
ludwig, esincoff, remolina, stottler @stottlerhenke.com
Anh Nguyen
Naval Undersea Warfare Center, Division Newport
Newport, RI
anh.b.nguyen1@navy.mil
ABSTRACT
The Tactical Action Officer on board a U.S. Navy Cruiser, Destroyer, or Frigate is responsible for the operation of the entire watch team manning the ship’s command center. Responsibilities include tactical decision-making, console operation, communications, and oversight of a variety of watchstander responsibilities in air, surface, and subsurface warfare areas. In previous work the PORTS TAO ITS, an Intelligent Tutoring System (ITS) for the instruction of Tactical Action Officers (TAOs) was developed to support training at the Surface Warfare Officers School. The system was built on the PC-based Open-architecture Reconfigurable Training System (PORTS). This paper describes a novel extension of the PORTS ITS, where it is integrated with the popular Second Life 3D virtual world. In this integration, the TAO logs on as an avatar in Second Life (SL) and interacts with a number of computer-controlled objects that take on the roles of the TAO’s teammates. TAOs rely on the same mechanism to communicate with the simulated teammates in SL as they would to communicate with other human players. That is, the TAO speaks to these simulated teammates using the chat window built into SL and sees their replies and comments in the chat window as well. We provide both a high-level overview of the integration process as well as the details of integrating a deployed training system with the Second Life virtual world. Additionally, this paper presents “food for thought” on how recent advances in technology and social connectivity can be applied to military training domains and outlines proposed future work based on this integration.
ABOUT THE AUTHORS
Jeremy Ludwig, Ph.D. is a Research Scientist and Project Manager at Stottler Henke Associates. His research areas include intelligent tutoring systems, behavior modeling, and machine learning, focusing on a number of research projects that utilize both game and simulation technology for training. He joined Stottler Henke in the fall of 2000 and holds a Ph.D. in computer science from the University of Oregon.
Erik Sincoff is a Lead Software Engineer at Stottler Henke Associates. At Stottler Henke, he has focused on the design and implementation of intelligent tutoring systems in a variety of domains including emergency medical training, military equipment training, and homeland security training. Previously, he was a senior software engineer at Teknowledge Corporation, where he worked in numerous domains, including implementing tutors in multiuser worlds. He has been at Stottler Henke since 2005 and has a MS in computer science from Stanford University.
Emilio Remolina, Ph.D. is an Artificial Intelligence Research Scientist at Stottler Henke Associates. He received his Ph.D. in Computer Science from the University of Texas at Austin in 2001. Dr. Remolina has been the main designer and developer of different ITS systems: ICT (Intelligent Counter Intelligence in Combating Terrorism Tutor), CITTP (Computerized Individual Trainer for Team Performance), AIS-IFT (helicopter flying trainer) and PORTS TAO-ITS (tactical training of TAOs using the AEGIS system). Dr. Remolina’s research interests include intelligent tutoring systems, planning, simulation and common sense reasoning.
Richard Stottler co-founded Stottler Henke Associates, Inc., an artificial intelligence consulting firm in San Mateo, California, in 1988 and has been the president of the company since then. He has been the principal investigator on a large number of tactical decision-making intelligent tutoring system projects conducted by Stottler Henke including projects for the Navy, Army, Air Force and Marine Corps including the PORTS TAO ITS for the US Navy, and a Littoral Combat Ship project. He has a Masters degree in Computer Science from Stanford University.
Anh Nguyen is a Computer Engineer at the Naval Undersea Warfare Center (NUWC) located in Newport, RI. He holds a B.S. in Computer Systems Engineering from the University of Massachusetts Amherst. He works in the Undersea Warfare Combat Systems Department, Tactical Control and Contact Management Branch. His current assignment entails prototyping, design, and development of future capabilities for the Advanced Processor Build (APB) - Tactical program, focusing on Target Motion Analysis (TMA) enhancements for the fleet. He is also the NUWC Virtual Training Lead for the NUWC Metaverse Exploration Project in which he focuses on leveraging the capabilities of the Virtual World Technologies (VWTs) in order to improve current conventional Undersea Warfare (USW) training.
INTRODUCTION
The Tactical Action Officer (TAO) on board a U.S. Navy Cruiser, Destroyer, or Frigate is responsible for the operation of the entire watch team manning the ship’s command center. Responsibilities include tactical decision-making, console operation, communications, and oversight of a variety of watchstander responsibilities in air, surface, and subsurface warfare areas. The PORTS TAO ITS has been previously implemented and deployed to provide instruction for Tactical Action Officers (TAOs) in training at the Surface Warfare Officers School (Stottler et al., 2007). This Intelligent Tutoring System (ITS) builds on the PC-based Open-architecture Reconfigurable Training System (PORTS).
This paper describes a novel extension of the PORTS ITS, where it is integrated with the popular Second Life 3D virtual world. In this integration, the TAO logs on as an avatar in Second Life (SL; www.secondlife.com) and interacts with a number of computer-controlled objects that take on the roles of the TAO’s teammates. The TAO relies on the same communication mechanism to interact with the simulated teammates in SL as they would with other human avatars in SL. That is, the TAO “speaks” to these simulated teammates using the chat window built into SL and sees their replies and comments in the chat window as well. Communication with others is a vital part of the TAO’s responsibilities, which creates the opportunity to investigate extending this training to social virtual worlds. The communication modality differs in the extension (voice input/audio output in PORTS ITS; chat input/output in SL) but the communication acts remain the same.
The motivation behind this extension is to provide a way to investigate the possible benefits of such an integration. That is, this integration can be used in the future to experiment with how to take advantage of the significant capabilities provided by SL (support for online, multi-player interaction in a 3D environment) to enhance training for the Navy. For example, one possibility lies in providing support for simulated role players to be optionally replaced by human avatars. With this type of functionality, a human could step in as the instructor for more ambiguous scenarios even if the TAO and instructor were at different locations. Taken further, it could also support multiple configurations of team training, with simulated role players used to fill in the gaps when humans are not available.
The remaining introduction provides background information on the training problem, the PORTS system, and the ITS. Following this, the methods section contains both a high-level overview of the integration process as well as the details of integrating a deployed training system with the Second Life virtual world. The outcome of this integration is briefly described in the results section. Finally, the conclusion section includes lessons learned while performing this integration and directions for future work. While this paper does not contain experimental results on the evaluation of the efficacy of SL integration, it does provide the technical details, and lessons learned to assist others with similar integration efforts. Additionally, this paper presents “food for thought” on how recent advances in technology and social connectivity can be applied to military training domains.
Background
The mission of the Surface Warfare Officers School (SWOS) in Newport, Rhode Island is to provide professional education and training to prepare officers of the U.S. Surface Navy to serve at sea. As part of his training at SWOS, each Surface Warfare Officer learns how to “fight” their ship as a Tactical Action Officer. The TAO training consists of three months of classroom and simulator time wherein students are exposed to all elements of surface warfare; air, surface, subsurface, and amphibious operations as well as electronic and other support mechanisms. One major
responsibility a TAO has is to be able to exercise command over the major systems of their ship (weapons, support platforms, radar and sonar, and navigation) during potentially hostile situations. The tactical decisions he makes during such situations can easily affect the outcome of the ship's mission and have life or death consequences. In summary, to paraphrase the SWOS instructors, the TAO gathers information, analyzes it, and ensures, by issuing verbal orders and queries, that the correct decisions are made and actions taken based on the tactical situation. Additionally, TAOs "command by negation" in that much of what the Combat Information Center (CIC) team needs to accomplish in tactical situations is decided upon autonomously by individual watchstanders, who also state their intentions before they execute their tasks. When these decisions are correct, the TAO must merely acknowledge the watchstander's decision. However, if these decisions are incorrect or omitted, the TAO must negate the incorrect decision or proactively initiate the omitted actions.
PORTS
Previously, in order to address the need for watchstanders to practice tactical scenarios without requiring the use of expensive special purpose hardware, the Navy commissioned the development of the Generic Reconfigurable Training System (GRTS) now renamed the PC-based Open-architecture Reconfigurable Training System (PORTS), which replicates watchstation functionality with high fidelity, on low-cost generic PC hardware. One of the PORTS watchstations is the TAO's. It also includes a simulation of the naval tactical environment. The system was already used at SWOS to train TAO students in console operation, though an instructor was needed for every two students, to play the role of other CIC team members and provide tutoring. SWOS had already set up an electronic classroom that included 42 student PCs for viewing electronic materials, networked to a single instructor console.
Figure 1 shows the PORTS simulated TAO console. It includes panels for Variable Action Buttons (VABs), display selection (map control keys), radio control, tactical situation map (a scaled version of the large screen display), and Automatic Status Boards that, among other things, display information on the hooked track. The mouse is used to push buttons and select tracks. A tactical simulation of the ownship's sensors and weapons, external platforms, and the environment drives these displays. PORTS simulations are initialized from a PORTS scenario file created using a graphical scenario editor.
PORTS TAO ITS
PORTS TAO-ITS (Stottler et al., 2007) uses a learn-by-doing strategy whereby the TAO is presented with a computer-simulated tactical situation, in which he should act as if aboard an actual ship. The ITS uses a high-fidelity simulation of the Aegis system consoles based on PORTS. Artificial intelligence techniques are employed to model the behavior of automated crewmembers that autonomously react to the tactical situation and interact among themselves and with the TAO. As described previously, the majority of the TAO’s decisions are manifested by verbal commands. The ITS enables TAO students to interact naturally, using spoken language to command and query simulated entities corresponding to other crewmembers and off-ship personnel through a natural language interface.
Before the use of PORTS TAO ITS, an instructor was needed for every two students. The instructor played the role of other teammates and provided coaching and after action review. The logistics of this training setup provided limited training opportunities to the students. With the advent of the ITS, only 1 instructor is needed in a classroom of 42 students (Remolina et al., 2009). The ITS teaches material that has the least ambiguity and fewest controversies. Ambiguous / controversial situations are discussed with instructors and other students and rehearsed using a traditional fully manned simulation.
METHODS
The objective of the work described in this paper is to support the same training that is currently being carried out in PORTS ITS in the Second Life virtual world environment. That is, the TAO would log on as an avatar in SL and interact with simulated role players (SL objects) that perform the actions of other crewmembers and the instructor. In this section we present an overview of the integration process followed by the technical details of the integration. Note that throughout this section we use the word *avatar* to refer to the human student in SL and *object* to refer to the simulated role players in SL that are controlled by the PORTS ITS.
Integration Overview
An overview of the integration is shown in Figure 2, where the main components are PORTS, PORTS ITS, SL Communication Layer (COMM), and SL. PORTS was not modified at all for this integration effort.
The second component, PORTS ITS, was modified in two minor ways. First, the Text to Speech manager that normally converted a text phrase to be spoken in the voice of a particular simulated role player was modified to instead notify COMM that a particular object in SL should speak the given phrase. Second, the speech recognition manager that usually converts the spoken words of the TAO to text was modified to instead take as input the chat text of the TAO from SL.
Figure 2. Component level integration overview among PORTS, PORTS ITS, the SL Communication Layer, and Second Life
Additionally, when running in Second Life mode, the PORTS ITS is also responsible for launching the COMM component. PORTS ITS and COMM communicate with each other through standard Java method calls. In similar integration efforts, other training systems would need to be similarly modified to support input from, and output to, SL avatars and objects.
The COMM component is a Java program that was developed specifically to link the PORTS ITS with the SL application program interface (API). As part of the incoming sub-component, COMM sets up a TCP/IP port to listen for messages that are broadcast from objects (e.g. Captain) in Second Life via HTTP. These messages include registering an SL object with COMM and, in some cases, any chat messages that a nearby human enters, to be broadcast appropriately. If COMM receives a message from an object indicating that the TAO avatar has spoken (by entering text in the local chat), COMM will pass this information down to PORTS ITS. The outgoing sub-component of COMM sends messages to SL using the SL XML-RPC (remote procedure call) API. When the Text to Speech manager in PORTS ITS notifies COMM that a particular object (e.g. Tutor) should “speak” a phrase, COMM sends an XML message that identifies the object that should receive the message.
Finally, in Second Life, each object that represents a simulated role player has a script connected to it that listens for these XML-RPC messages. In this particular integration, the commands received include the text that the object should appear to say. The script parses the message and outputs the appropriate text to the local chat. These local chat messages look the same as messages typed in by other nearby humans. Additionally, information spoken by PORTS ITS simulated entities is also displayed over the object’s head as shown in Figure 3.
**Technical Details**
In this section, low-level technical details are provided. These include select Java code excerpts for COMM, select script excerpts for Second Life that are used to transfer information from SL to PORTS ITS and vice versa, and how the particular integration demonstration is set up in SL. Readers not interested in the gritty implementation details should skip to the results.
**COMM**
To listen for messages from Second Life, COMM uses the standard Java ServerSocket class to accept HTTP requests. A new Socket is created when the server socket accepts a connection from SL. The socket reads in the message, which is then checked for correct formatting. The required format includes standard HTTP framing (the request line begins with “GET” and ends with “HTTP/1.0”) along with the COMM-specific string “/SLTranslate?” on the HTTP path. The information following this designation is parsed into one of two commands for the given name: the first registers an SL object that mirrors a PORTS ITS simulated entity (e.g. Captain) and has the same name; the second indicates that the human TAO has entered chat text, which is then relayed to PORTS ITS using standard Java method calls.
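As a concrete illustration, the request-line handling described above can be sketched as follows. This is not the actual COMM source; the class name, method name, and the omission of URL-decoding are simplifying assumptions made for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

public class SLTranslateParser {

    // Parses a COMM request line such as
    //   "GET /SLTranslate?name=Captain&action=register HTTP/1.0"
    // into its query parameters; returns null if the line is not a COMM request.
    // (URL-decoding of the chat text is omitted for brevity.)
    public static Map<String, String> parse(String requestLine) {
        if (!requestLine.startsWith("GET ") || !requestLine.endsWith(" HTTP/1.0"))
            return null;
        String path = requestLine.substring(4, requestLine.length() - " HTTP/1.0".length());
        if (!path.startsWith("/SLTranslate?"))
            return null;
        Map<String, String> params = new HashMap<>();
        for (String pair : path.substring("/SLTranslate?".length()).split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0)
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p =
            parse("GET /SLTranslate?name=TAO&action=taoSpoke&text=hello HTTP/1.0");
        System.out.println(p.get("action")); // prints: taoSpoke
    }
}
```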
The COMM program makes use of Apache XML-RPC (http://ws.apache.org/xmlrpc/) to send messages back to SL. The XML-RPC client setup is quite simple:
```
XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
config.setServerURL(new URL("http://xmlrpc.secondlife.com/cgi-bin/xmlrpc.cgi"));
XmlRpcClient client = new XmlRpcClient();
client.setConfig(config);
```
Once the client is established, messages are sent to SL by placing several values into a HashMap: **Channel** - the unique object identifier acquired during registration, **IntValue** - an integer not used in our implementation, and **StringValue** - what the object should say. This map is then sent to the designated Second Life object by executing a remote procedure call as shown in line 7. For more information on how to use this functionality, see http://wiki.secondlife.com/wiki/Category:LSL_XML-RPC:
```
1. HashMap<String, Object> map = new HashMap<String, Object>();
2. map.put("Channel", objectId);
3. map.put("IntValue", intval);
4. map.put("StringValue", stringVal);
5. Vector<HashMap<String, Object>> params = new Vector<HashMap<String, Object>>();
6. params.add(map);
7. client.execute("llRemoteData", params);
```
**SL Scripts**
Objects in Second Life can be configured to run user-defined scripts that control their behavior. The scripts are written in the Linden Scripting Language (LSL). For this integration, scripts were created that send messages to COMM for registration and for forwarding what the human TAO types in, and scripts that listen for remote data calls.
For example, the microphone object worn by the TAO listens on the local chat and forwards whatever the TAO enters to the COMM server, as shown in the following LSL excerpt. The state receiving (1.) and listen (2.) keywords are built-in aspects of LSL, used here to listen for a message on local chat. The message itself is sent as an HTTP request to the predefined IP address and port used by the COMM server (3.). The llKey2Name(kOwner) call converts the unique id of the human speaker wearing the microphone to the human-readable name (e.g. TAO).
```
1. state receiving {...
2. listen(integer channel, string name, key id, string message){...
3. llHTTPRequest("http://" + ip + ":" + port + "/SLTranslate?name=" + llKey2Name(kOwner) +
       "&action=taoSpoke" +
       "&text=" + message, [], ""); ...} ...}
```
The LSL script for the headset also includes code to display the text that would be heard by the TAO through the general communication network. This code is likewise included in the state receiving (1.) block. Here, however, the remote_data (2.) event indicates that the script is listening for llRemoteData RPC calls. Once the data is received, the object makes two calls into the built-in scripting library. The first says the string passed into the remote procedure call (sval) on the local chat stream (3.). The second displays the same text above the object (4.) in a color distinct from the other simulated entities. This script also includes some extensions not described here, such as handling the display of long text messages and ensuring that the “bubble text” above the object fades away at the appropriate time.
```
1. state receiving {...
2. remote_data(integer type, key channel, key message_id, string sender, integer ival, string sval){...
3. llSay(0, sval);
4. llSetText(sval, mycolor, 1.0); ...} ...}
```
For more information on working with Second Life, see the official development website at: http://develop.secondlife.com/ in addition to the cited wiki address.
**Setup**
With these components and scripts in place, the actual setup is fairly simple. First, PORTS ITS is run in an integration mode that also launches the COMM server. Second, the object instances are created and placed in Second Life based on pre-defined models. For this integration we used a headset model (worn by the TAO) and an avatar-like model (replicated for each of the simulated entities in PORTS ITS). Third, once the objects are defined, the associated script for each entity is updated. In this integration we had one script for the headset and one that was re-used for each of the avatar-like models. The re-used script was customized for each associated object to use a unique color when displaying text spoken by it. Fourth, the avatar of the human TAO needs to pick up and wear the headset. Finally, an activation message is sent by the TAO and the scenario begins.
**RESULTS**
The end result of this integration is that the TAO can log on as an avatar in Second Life (SL) and interact with a number of computer-controlled objects that take on the roles of the TAO’s teammates, as shown in Figure 3. Each of the standing figures, and the headset, represent a simulated role player in PORTS ITS (Surface, Bridge, Captain, Tutor, and Communication Channel). The TAO relies on the same communication mechanism to communicate with the simulated teammates in SL as they would to communicate with other human users. That is, the TAO speaks to these simulated teammates using the chat window built into SL and sees their replies and comments in the chat window as well. For clarity, the TAO also sees the text that the simulated entity speaks above the object’s head in a “text bubble”. In the end, this integration allows the TAO to carry out their communication decisions with the other simulated entities in Second Life using any of the existing PORTS ITS scenarios.
**CONCLUSION**
This paper describes a novel extension of the deployed training system PORTS ITS, where it is integrated with the popular Second Life 3D virtual world. The paper also discusses the motivation behind this extension and the technical details that were used to carry it out. While performing this experiment, we learned a number of lessons and developed some specific directions for future work utilizing simulation-based intelligent tutoring systems and the Second Life virtual world.
**Lessons Learned**
- **Modularity** – The fact that the ITS system was already developed in a modular fashion made the integration process relatively straightforward. If the dialog management was implemented in a less modular fashion then significant changes would have been required.
- **Learning Curve** – We encountered two distinct learning curves in this integration. The first was from the developer perspective. Even though, as shown in this paper, the integration was relatively small in size, it could still take a significant amount of time, given that most software engineers have no prior experience working with SL. Second, and more importantly, there is also a learning curve for users of the SL virtual world. That is, the human sitting down for TAO training will need to spend some time getting used to interacting with Second Life before they are ready to start training. The built-in SL tutorial for new users takes about 15 minutes to complete and would give them enough understanding to participate in an exercise. While using the training system the student would be expected to improve their familiarity with certain features (e.g. how to …) beyond this introduction. However, there is a lot of non-required functionality that could keep students busy for quite a while (e.g. customizing avatars) if the students choose to spend their time on it.
- **Limitations** – A number of lessons were also learned about the limitations imposed by working with the existing SL API. For example, at the time of this writing, the TAO console itself (Figure 1) could not be displayed within SL. From a training perspective, this is an important piece of the puzzle that is missing. Another example is that an SL object cannot receive messages faster than once every 3-5 seconds. This had a negligible effect in this integration but could obviously cause problems for some ITS / simulators. A third limitation is the use of chat as the communication method. The integration would be greatly improved by being able to get voice input from SL and feed it into the PORTS ITS voice recognition, and similarly to make use of text-to-speech to provide audio output for the simulated characters in SL. However, SL is an evolving system whose APIs are updated from time to time, and some of these features are slated to be better supported in the future.
- **Changing API** – This paper represents a static snapshot of what can be done to integrate an existing training system with SL. The specific technical details behind the integration will need to be updated as the SL API evolves, but the general premise described in this paper is easily adapted to handle these changes. Design of similar integration efforts will want to take this into account as well.
**Future Work**
Based on the integration of PORTS ITS into SL, an example of a virtual submarine Fire Control Technician tutor was developed at the virtual Naval Undersea Warfare Center campus in SL. This example demonstrates the potential of an intelligent agent capable of augmenting traditional training in a virtual environment. An extension to this integration work is being considered by the U.S. Navy to create a platform that can rapidly elicit expert knowledge and develop intelligent agents based on this knowledge. These intelligent agents could be used as tutors in training scenarios or as automated bots in fleet team experiments, where they can model individual roles to reduce manning and cost.
While intelligent agents provide many benefits in the form of increased training availability and lower manpower cost, these agents are unable to perfectly emulate humans in teaching, learning, or behaving as a human teammate. Along with the development of the platform, work could also be extended to include an investigation to understand how best to integrate intelligent agents into submarine school curriculums in the virtual world.
Continuous Distance-Dependent Level of Detail for Rendering Heightmaps (CDLOD)
Filip Strugar, 11 July 2010
**Abstract.** This paper presents a technique for GPU-based rendering of heightmap terrains, which is a refinement of several existing methods with some new ideas. It is similar to the terrain clipmap approaches [Tanner et al. 98, Losasso 04], as it draws the terrain directly from the source heightmap data. However, instead of using a set of regular nested grids, it is structured around a quadtree of regular grids, more similar to [Ulrich 02], which provides it with better level-of-detail distribution. The algorithm's main improvement over previous techniques is that the LOD function is the same across the whole rendered mesh and is based on the precise three-dimensional distance between the observer and the terrain. To accomplish this, a novel technique for handling transitions between LOD levels is used, which gives smooth and accurate results. For these reasons the system is more predictable and reliable, with better screen-triangle distribution, cleaner transitions between levels, and no need for stitching meshes. This also simplifies integration with other LOD systems that are common in games and simulation applications. Performance remains favourable compared to similar GPU-based approaches, and the technique works on all graphics hardware supporting Shader Model 3.0 and above. A demo and complete source code are available online under a free software license.
**Introduction**
Heightmap display and interaction are a frequent requirement of game and simulation graphics engines. The simplest way to render a terrain is the brute force approach, in which every texel in the source heightmap data is represented with one vertex in the regular grid of triangles. For larger datasets this is not practical or possible: thus the need for level of detail (LOD) algorithms. An LOD system will produce a mesh of different complexity, usually as the function of the observer distance, so that the on-screen triangle distribution is relatively equal while using only a subset of data and rendering resources.
Historically these algorithms were executed on the CPU; a good example is the classic academic algorithm of [Duchaineau et al. 97]. However, since the GPU's raw (mostly parallel) processing power has been improving much faster than the CPU's, in order for the whole system to be used optimally, terrain-rendering algorithms have changed to draw on the graphics hardware as much as possible.
Currently, in the context of modern PC and game console architecture, there is little benefit in having an algorithm that produces optimal triangle distribution on the CPU if it cannot provide enough triangles for the hardware graphics pipeline or if it uses too much CPU processing power. The API, driver, and OS layer between the CPU and GPU is also a common bottleneck, as explained in [Wloka 03]. Therefore, even a simplistic GPU-based approach can be faster and provide better visual results than those complex approaches executed on the CPU and formerly considered optimal.
One of the first examples of this trend is the algorithm given in [de Boer 00], which, while still essentially a CPU-based algorithm, is aimed at producing a high triangle output with less optimal distribution but also less expensive execution than [Duchaineau et al. 97], thus providing better results when running on early dedicated graphics hardware.
Later [Ulrich 02] developed one of the first completely GPU-oriented algorithms, which is still an excellent choice for rendering terrains on modern hardware due to its good detail distribution and optimally tessellated mesh. Its drawbacks are the lengthy pre-processing step involved, inability to modify terrain data in real time, and a somewhat inflexible and less correct LOD system.
A simpler heightmap-based approach is sometimes preferred; one of the most popular is the clipmaps technique of [Asirvatham et al. 05] and its various improvements.
This paper presents a technique that builds upon the idea of using a fixed grid mesh displaced in the vertex shader to render the terrain directly from the heightmap data source while dealing with some of the shortcomings and complexities found in [Ulrich 02] and [Asirvatham et al. 05] by using a quadtree-based structure and a novel, completely predictable continuous LOD system. This technique is intended to provide an optimal way of rendering heightmap-based terrains on graphics hardware from the Shader Model 3.0 generation and above; the technique can be extended with hardware tessellation support to fully take advantage of the latest Shader Model 5.0 generation capabilities.
**LOD function**
One major drawback of the basic clipmaps algorithm [Asirvatham et al. 05] is that the level of detail is essentially based on the two-dimensional (x, y: latitude, longitude) components of the observer position, while ignoring the height. This results in unequal distribution of mesh complexity and aliasing problems: for example, when the observer is high above the mesh, the detail level below remains much greater than required, and vice versa. This is only partially addressed in [Asirvatham et al. 05] by taking the current observer height above the terrain into consideration and dropping higher LOD levels if needed.
On the other hand, [Ulrich 02] uses an LOD function that is the same over the whole chunk (mesh block), providing only approximate three-dimensional distance-based LOD.
Such approximations can be limiting in scenarios frequently encountered in modern games and simulation systems, where terrain is commonly very uneven and the observer's height and position change quickly, as they produce poor detail distribution or movement-induced rendering artifacts. They also cause integration difficulties with other rendering systems, some of which use level-of-detail optimizations themselves, since an unpredictable terrain LOD causes rendering errors such as unwanted intersection or floating of objects placed on the terrain.
The technique presented here of continuous distance-dependent level of detail (CDLOD) solves these problems by providing an LOD distribution that is a direct function of three-dimensional distance across all rendered vertices and is thus completely predictable.
**LOD transition**
Another drawback of the techniques of [Ulrich 02] and [Asirvatham et al. 05] is that the discontinuities between LOD levels require additional work to remove gaps and provide smooth transitions. This is addressed by using additional connecting (“stitching”) strips between different LOD levels. These strips, besides adding to the rendering cost, can cause various issues such as artifacts when rendering shadow maps; unwanted overdraw when the terrain is rendered transparently (which is a problem when rendering terrain water or similar effects); etc.
Also, since the mesh swap between LOD levels happens between meshes of different triangle count and shape, differences in vertex output interpolation will result in "popping" artifacts that appear if any vertex-based effect (such as vertex lighting) is used. This also makes it a less suitable platform for hardware tessellation.
The CDLOD technique inherently avoids these problems because the algorithm used to transition between LOD levels completely transforms the mesh of a higher level into the lower detailed one before the actual swap occurs. This ensures a perfectly smooth transition with no seams or artifacts. In addition, the rendering itself is simpler and more predictable than that of [Asirvatham et al. 05], as only one rectangular regular grid mesh is needed to render everything.
**Data organization and streaming**
Storing, compressing, and streaming the terrain data can be done as with other techniques, requiring no special attention. While this topic is outside the scope of this paper, it is a necessary part of any practical large terrain rendering system implementation. Thus an example CDLOD implementation with full data streaming is provided with the *StreamingCDLOD* demo.
**Algorithm implementation**
**Overview**
CDLOD organizes the heightmap into a quadtree, which is used to select appropriate quadtree nodes from different LOD levels at run time. The selection algorithm is performed in such a way as to provide approximately the same amount of on-screen triangle complexity regardless of the distance from the observer.
The actual rendering is performed by rendering areas covered by selected nodes using only one unique grid mesh, reading the heightmap in the vertex shader, and displacing the mesh vertices accordingly.
A more detailed mesh can be smoothly morphed into the less detailed one in the vertex shader so that it can be seamlessly replaced by a lower resolution one when it goes out of range, and vice versa.
The quadtree structure is generated from the input heightmap. It is of constant depth, predetermined by memory and granularity requirements. Once created, the quadtree does not change unless the source heightmap changes. Every node has four child nodes and contains minimum and maximum height values for the rectangular heightmap area that it covers. A provided example, *BasicCDLOD*, uses a naive explicit quadtree implementation where all required quadtree data is contained in the node structure. The *StreamingCDLOD* example implements a more advanced version in which the algorithm relies only on the (partially compressed) min/max maps while all other data is implicit and generated during the quadtree traversal. This version uses far less memory but is slightly more complex.
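The min/max bookkeeping described above can be sketched as follows. This is an illustrative explicit-quadtree node in the spirit of the *BasicCDLOD* example, not code from the demo; the class, field, and parameter names are assumptions.

```java
public class HeightQuadTree {
    public final int x, y, size;      // covered heightmap area (size x size texels)
    public float minZ, maxZ;          // height bounds for the covered area
    public HeightQuadTree[] children; // four children, or null for leaf nodes

    public HeightQuadTree(float[][] heightmap, int x, int y, int size, int leafSize) {
        this.x = x; this.y = y; this.size = size;
        minZ = Float.MAX_VALUE;
        maxZ = -Float.MAX_VALUE;
        if (size <= leafSize) {
            // leaf node: scan the covered texels directly
            for (int j = y; j < y + size; j++)
                for (int i = x; i < x + size; i++) {
                    minZ = Math.min(minZ, heightmap[j][i]);
                    maxZ = Math.max(maxZ, heightmap[j][i]);
                }
        } else {
            // inner node: bounds are the union of the four child bounds
            int h = size / 2;
            children = new HeightQuadTree[] {
                new HeightQuadTree(heightmap, x,     y,     h, leafSize),
                new HeightQuadTree(heightmap, x + h, y,     h, leafSize),
                new HeightQuadTree(heightmap, x,     y + h, h, leafSize),
                new HeightQuadTree(heightmap, x + h, y + h, h, leafSize) };
            for (HeightQuadTree c : children) {
                minZ = Math.min(minZ, c.minZ);
                maxZ = Math.max(maxZ, c.maxZ);
            }
        }
    }

    public static void main(String[] args) {
        float[][] hm = new float[4][4];
        hm[2][3] = 9.5f;
        HeightQuadTree root = new HeightQuadTree(hm, 0, 0, 4, 2);
        System.out.println(root.minZ + " .. " + root.maxZ); // prints: 0.0 .. 9.5
    }
}
```

Because the tree has constant depth and never changes unless the heightmap does, this construction runs once per dataset.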
**Quadtree and node selection**
The first step of the rendering process is the quadtree node selection. It is performed every time the observer moves, which usually means during every frame.
In the presented version of the algorithm, a quadtree depth level always corresponds to the
level of detail. This is an artificial constraint induced for simplicity reasons because it allows for the same grid mesh with a fixed triangle count to be used to render every quadtree node. In that case every child node has four times more mesh complexity per square area unit than its parent, since a child node occupies a fourth of the area. This complexity difference scale factor of four is then used as a basis for the LOD level distribution. In other words, each successive LOD level is used to render four times as many triangles and contains four times more nodes than the previous one (see Figure 1).

**LOD distances and morph areas**
In order to know which nodes to select where, distances covered by each LOD layer are precalculated before the node selection process is performed. These are calculated with the goal of producing approximately the same average number of triangles per square unit of screen over the whole rendered terrain.
Since the complexity difference between successive LOD levels is fixed to four by the algorithm design, the distances covered by them need to differ by a factor close to 2.0 to accomplish a relatively even screen-space triangle distribution, assuming a perspective projection is used for rendering: under such a projection, the on-screen size of a terrain feature roughly halves each time its distance from the observer doubles.
The array of LOD ranges is thus created (see Figure 2) with the range of each level being two times larger than that of the previous one, and the range of the final level (largest, least detailed) representing the required total viewing distance.

To provide smooth LOD transitions, the morph area is also defined, marking the range along which a higher complexity mesh will morph into the lower one. This morph area usually covers the last 15%-30% of every LOD range.
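The range construction just described can be sketched as follows. This is an illustrative helper, not code from the demo; the names are assumptions, and the 30% morph fraction is one choice from the 15%-30% band mentioned above.

```java
public class LodRanges {

    // Visibility range per LOD level: each range is twice the previous one,
    // and the last (least detailed) level reaches the total viewing distance.
    public static float[] compute(int lodLevelCount, float totalViewDist) {
        float[] ranges = new float[lodLevelCount];
        float r = totalViewDist;
        for (int i = lodLevelCount - 1; i >= 0; i--) {
            ranges[i] = r;
            r *= 0.5f;
        }
        return ranges;
    }

    // Distance at which level i starts morphing toward the next coarser level;
    // morphFraction is the portion of the range covered by the morph area.
    public static float morphStart(float[] ranges, int i, float morphFraction) {
        return ranges[i] * (1.0f - morphFraction);
    }

    public static void main(String[] args) {
        float[] ranges = compute(4, 1000.0f);
        for (float r : ranges) System.out.println(r); // 125.0, 250.0, 500.0, 1000.0
    }
}
```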
**Quadtree traversal and node selection**
Once the array of LOD ranges is calculated it is used to create a selection (subset) of nodes representing the currently observable part of the terrain. To do this, the quadtree is traversed recursively beginning from the most distant nodes of the lowest-detailed level, working down to the closest, most detailed ones. This selection then contains all dynamic metadata required to
render the terrain.
The following C++ pseudocode illustrates a basic version of the algorithm:
```cpp
// Called initially with lodLevel = LODLevelCount-1;
// lodLevel 0 is the most detailed level.
bool Node::LODSelect( int ranges[], int lodLevel, Frustum frustum )
{
    if( !nodeBoundingBox.IntersectsSphere( ranges[lodLevel] ) )
    {
        // neither this node nor its children were selected; return false
        // so that our parent node handles our area
        return false;
    }

    if( !FrustumIntersect( frustum ) )
    {
        // we are out of frustum; select nothing, but return true to mark
        // this node as correctly handled so that our parent node does not
        // select itself over our area
        return true;
    }

    if( lodLevel == 0 )
    {
        // we are in our LOD range and we are the most detailed level
        AddWholeNodeToSelectionList();
        return true; // we have handled the area of our node
    }
    else
    {
        // we are in range of our LOD level and we are not the last level:
        // if we are also in range of a more detailed LOD level, some of our
        // child nodes will need to be selected instead in order to display
        // a more detailed mesh
        if( !nodeBoundingBox.IntersectsSphere( ranges[lodLevel-1] ) )
        {
            // we cover the required lodLevel range
            AddWholeNodeToSelectionList();
        }
        else
        {
            // we cover the more detailed lodLevel range; some or all of our
            // four child nodes will have to be selected instead
            foreach( childNode )
            {
                if( !childNode.LODSelect( ranges, lodLevel-1, frustum ) )
                {
                    // if a child node is out of its LOD range, make sure
                    // that its area is covered by this (parent) node
                    AddPartOfNodeToSelectionList( childNode.ParentSubArea );
                }
            }
        }
        return true; // we have handled the area of our node
    }
}
```
Selecting a node involves storing its position, size, LOD level, info on partial selection, and other data if needed. This is saved in a temporary list later used for rendering.
Each node can be selected partially over the area of only some of its four child nodes. This is done so that not all child nodes need to be rendered if only a few are in their LOD range, allowing for earlier exchange between LOD levels, which increases the algorithm performance and flexibility.
Nodes are also frustum culled while traversing the quadtree, eliminating the rendering of non-visible nodes. When rendering a shadow map or a similar effect, the frustum cull is based on the shadow camera, but the actual LOD selection should still be based on the main camera settings. This avoids rendering artifacts that would be caused by differences in the terrain mesh resulting from two different LOD functions (one for the shadow-map camera and one for the main camera).
Since the LOD layer selection is based on the actual three-dimensional distance from the observer, it works correctly for all terrain configurations and observer heights. This results in correct detail complexity and better performance in various scenarios, as illustrated in Figure 3 and Figure 4.
**Rendering**
To render the terrain, we iterate through the selected nodes and use their data to render the patch of terrain that each covers. Continuous transition between LOD levels is achieved by morphing the area of each layer into the less complex neighbouring layer, producing a seamless transition between them (see *Figure 5*).

*Figure 5* Distribution of LOD levels and nodes (different colors represent different layers).
Rendering is then fairly straightforward: a single grid mesh of fixed dimensions is transformed in the vertex shader to cover each selected node area in the world space, and vertex heights are displaced using texture fetches, thus forming the representation of the particular terrain patch. Commonly used grid-mesh dimensions are 16x16, 32x32, 64x64 or 128x128, depending on the required output complexity. Once chosen, this grid mesh resolution is constant (i.e., equal for all nodes) but can be changed at run time, which can come in handy for rendering the terrain at lower resolutions for effects requiring less detail such as reflections, secondary views, low quality shadows, etc.
**Morph implementation**
Using CDLOD, each vertex is morphed individually based on its own LOD metric, unlike the method in [Ulrich 02], where the morph is performed per node (per chunk). Each node supports transition between two LOD layers: its own and the next larger, less complex one. The morph operation is performed in the vertex shader in such a way that every block of eight triangles from the more detailed mesh is smoothly morphed into the corresponding block of two triangles on the less detailed mesh, by gradual enlargement of the two triangles and reduction of the remaining six into degenerate triangles that are not rasterized. This process produces smooth transitions with no seams or T-junctions (see *Figure 6* and *Figure 7*).
First, the approximate distance between the observer and the vertex is calculated to determine the amount of morph required. The vertex position used to calculate this distance can be an approximation as long as the approximating function is consistent over the whole dataset domain. This is necessary since, in order to prevent seams, the vertices on the node's grid mesh edges must remain exactly the same as the ones on the neighbouring nodes. In our case, $x$ (latitude) and $y$ (longitude) components will always be correct as the same function is used to stretch them to the world space, but $z$ (height) must either be approximated or sampled from the heightmap using a consistent filter.
The morph value, $\text{morphK}$ in the example code, ranging between 0 (no morph, high detail mesh) and 1 (full morph, four times fewer triangles), is used to gradually move each morph mesh vertex towards the corresponding no-morph one. A morph vertex is defined as a grid vertex having one or both of its grid indices $(i, j)$ as an odd number, and its corresponding no-morph neighbour is the one with coordinates $(i - i \bmod 2,\; j - j \bmod 2)$.
Following is the HLSL code used to morph vertices:
```hlsl
// morphs input vertex uv from high to low detailed mesh position
// - gridPos: normalized [0, 1] .xy grid position of the source vertex
// - vertex: vertex.xy components in the world space
// - morphK: morph value
float2 g_gridDim = float2( 64, 64 );
float2 morphVertex( float2 gridPos, float2 vertex, float morphK )
{
float2 fracPart = frac( gridPos.xy * g_gridDim.xy * 0.5 ) * 2.0 / g_gridDim.xy;
return vertex.xy - fracPart * g_quadScale.xy * morphK;
}
```
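The morph arithmetic can be checked outside the shader; the following C++ sketch mirrors the `morphVertex` math on the CPU (the function name, grid size, and node scale are illustrative, not taken from the demo code) and shows that at full morph every odd-indexed vertex lands exactly on its even-indexed neighbour:

```cpp
#include <cassert>
#include <cmath>

// CPU mirror of the vertex-shader morph. gridDim is the grid mesh
// resolution and quadScale the world-space size of the node; both
// values used below are illustrative, not taken from the demo code.
static float frac(float v) { return v - std::floor(v); }

// Morphed world-space x coordinate of grid vertex i (0 <= i <= gridDim)
// for a node starting at worldX, quadScale units wide.
float morphVertexX(int i, float gridDim, float worldX, float quadScale,
                   float morphK)
{
    float gridPos  = i / gridDim;                   // normalized [0, 1]
    float vertex   = worldX + gridPos * quadScale;  // world space
    float fracPart = frac(gridPos * gridDim * 0.5f) * 2.0f / gridDim;
    return vertex - fracPart * quadScale * morphK;  // same math as HLSL
}
```

With `gridDim = 8` and a node 16 units wide, vertex 3 at full morph ends up at the same world position as vertex 2, while even-indexed vertices are never displaced.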
Finally, the $z$-component is obtained by sampling the heightmap with texture coordinates calculated from $x$ and $y$ components using a bilinear filter (filtering is only needed for vertices in the morph region). When all vertices of a node are morphed to this low-detail state, the mesh then effectively contains four times fewer triangles and exactly matches the one from the lower LOD layer; hence it can be seamlessly replaced by it.
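The bilinear height fetch mentioned above can be sketched as follows; this is a minimal CPU-side illustration in texel space (the demo performs the equivalent fetch in the vertex shader, and the function name and clamping policy here are assumptions):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal bilinear fetch from a row-major heightmap, in texel space.
// Illustrative only: the demo samples the heightmap in the vertex
// shader; names and edge handling are assumptions.
float sampleHeightBilinear(const std::vector<float>& hm, int w, int h,
                           float x, float y)
{
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    int x1 = std::min(x0 + 1, w - 1);   // clamp at the right/bottom edge
    int y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;     // fractional position in the cell

    float top = hm[y0 * w + x0] * (1 - fx) + hm[y0 * w + x1] * fx;
    float bot = hm[y1 * w + x0] * (1 - fx) + hm[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}
```

Using the same filter on the CPU and GPU is what keeps edge vertices of neighbouring nodes in exact agreement.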


**Settings**
Settings used to generate the quadtree and run the algorithm need to be carefully chosen to match terrain dataset characteristics while providing the best performance. In the accompanying examples, each dataset defines its own settings. Following is a brief description of the most important ones:
The **LeafQuadTreeNodeSize** setting determines the depth (granularity) of the quadtree in heightmap raster size, and **LODLevelCount** defines the number of LOD levels. Using more LOD levels provides greater viewing distance and better LOD distribution, but reduces performance and increases memory use. Smaller values of **LeafQuadTreeNodeSize** allow for a greater range of viewing distances and better handling of very uneven terrains, but increase quadtree memory use and batch count [Wloka 03]. These two values are set during the export phase and remain fixed at run time.
The **RenderGridResolutionMult** defines the resolution of the static grid mesh used to render the terrain - it affects the triangle output but does not change algorithm behaviour in any other way. It is a handy tool for easily balancing rendering quality and algorithm performance at run time.
**View distance** can also be varied at run time and is used to change maximum rendering distance and detail level. Unlike **RenderGridResolutionMult**, changing it will affect LOD ranges, node selection, and consequently the number of render batches, required streaming data, etc.
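As a sketch of how per-level ranges might be derived from the view distance, the snippet below splits it so that each coarser LOD level covers twice the range of the previous one; this doubling ratio is a common CDLOD-style choice and an assumption here, since the demo code may expose a tunable distribution instead:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: split the total view distance into cumulative
// per-LOD visibility ranges, each coarser level covering twice the
// span of the previous one (1 + 2 + 4 + ... units). The 2x ratio is
// an assumption, not taken from the demo code.
std::vector<float> lodRanges(float viewDistance, int lodLevelCount)
{
    float units = static_cast<float>((1 << lodLevelCount) - 1);
    float unit  = viewDistance / units;

    std::vector<float> ranges(lodLevelCount);
    float total = 0.0f;
    for (int i = 0; i < lodLevelCount; ++i) {
        total += unit * static_cast<float>(1 << i);
        ranges[i] = total;  // distance at which level i hands over
    }
    return ranges;
}
```

Changing the view distance rescales every range, which is why it affects node selection and batch counts, unlike the purely cosmetic grid-resolution setting.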
**Granularity issues**
One limitation of the algorithm is that a single quadtree node can only support transition between two LOD layers. This limits the minimum possible viewing range, or the minimum quadtree depth, because only one smooth transition can be performed between two LOD layers over the area of the same node. Increasing the viewing range will move LOD transition areas further away from each other and solve the problem at the expense of having more render data to process. The other options are to reduce the number of LOD levels, which reduces the benefits of the LOD system, or to increase the quadtree depth to increase the granularity, which increases quadtree memory and CPU use. The size of the morph area can also be decreased to mitigate this problem, but that can make the transition between levels noticeable.
Since the LOD works in three dimensions, this problem will be enhanced when using extremely rough terrain with large height differences: thus, different settings might be required for each dataset.
In the provided data examples, LOD settings are tuned so that the ranges are always acceptable. In the case where different datasets and settings are used, these LOD transition problems can appear in the form of seams between LOD levels. Debug builds of the demo code will always detect a possibility of such a situation and display a warning so that settings can be corrected (detection code is overzealous, so a warning does not guarantee that the seam will be observable - just that it is theoretically possible).
**Streaming and optimal quadtree data storage**
Any practical implementation of a large terrain-rendering algorithm must also provide mechanics for keeping only a subset of the required terrain dataset in the working memory. This subset is usually the minimum required to render the terrain from the observer's perspective, and it is loaded (streamed) in and out as the observer's position changes. While a detailed discussion
of the topic of streaming is outside the scope of this article, some basics will be covered. The StreamingCDLOD implementation provides all necessary implementation details if required.
The CDLOD technique requires storage for the two separately handled datasets:
- **Quadtree data**, which is the metadata required for the LOD algorithm to work.
- **Terrain data**, which is the actual data required for rendering and consists of a heightmap, normal map, overlay images, etc.
**Quadtree data**
The quadtree data in the BasicCDLOD implementation is a part of the node structure. As each node contains data defining its size, position, pointers to its siblings, etc., a lot of unnecessary memory is used. The StreamingCDLOD implementation avoids this by keeping only the necessary $minZ$ and $maxZ$ values, which describe the minimum and maximum height values of the heightmap area that the node covers. The rest of the data is automatically generated during quadtree traversal at some small additional computational cost and a very high (approximately 30x) memory saving.
To store this $min/max$ data, each LOD level uses a two-dimensional matrix of two unsigned 16-bit integer values: one two-value set for each quadtree node of the corresponding quadtree level. The dimensions of the matrix for the most detailed LOD level 0 are thus $[\text{HeightmapWidth}/\text{LeafQuadTreeNodeSize},\, \text{HeightmapHeight}/\text{LeafQuadTreeNodeSize}]$, with every next LOD level requiring four times less $min/max$ data.
To further improve memory use, the most detailed LOD level matrix is compressed by storing the $min/max$ values in the normalized space of the parent node's $min/max$ values, which can be stored as unsigned 8-bit integers with little precision loss. This reduces memory usage to approximately 1/45 of that of the naive implementation in BasicCDLOD, and is close to the maximum theoretically possible.
This $min/max$ matrix represents all the data required by the CDLOD algorithm (aside from the actual heightmap). Since it is small compared to the rest of the terrain data, the StreamingCDLOD implementation keeps the whole $min/max$ matrix in memory. For example, rendering a terrain with a source heightmap of 32k x 32k and a leaf quadtree node size of 32 requires around 3.5 MiB of memory.
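The parent-normalized compression can be sketched as follows; this is an illustrative reconstruction (names and the rounding policy are assumptions), storing a child's bounds as two 8-bit values relative to its parent's interval and decoding them conservatively:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative reconstruction of the leaf-level min/max compression:
// a child node's height bounds are stored as two uint8 values
// normalized to the parent's [minZ, maxZ] interval (assumes
// parent.maxZ > parent.minZ). Rounding min down and max up keeps the
// decoded bounds conservative: they always enclose the true ones.
struct Bounds { float minZ, maxZ; };

static uint8_t quantize(float v, Bounds parent, bool roundUp)
{
    float t = (v - parent.minZ) / (parent.maxZ - parent.minZ);  // [0, 1]
    float q = t * 255.0f;
    return static_cast<uint8_t>(roundUp ? std::ceil(q) : std::floor(q));
}

static float dequantize(uint8_t q, Bounds parent)
{
    return parent.minZ + (q / 255.0f) * (parent.maxZ - parent.minZ);
}

// Quantize then decode a child's bounds relative to its parent.
Bounds roundTrip(Bounds child, Bounds parent)
{
    uint8_t qmin = quantize(child.minZ, parent, /*roundUp=*/false);
    uint8_t qmax = quantize(child.maxZ, parent, /*roundUp=*/true);
    return { dequantize(qmin, parent), dequantize(qmax, parent) };
}
```

Rounding outward trades slightly looser bounds for the large memory saving, which is safe because the bounds are only used for culling and LOD selection.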
**Terrain data**
The terrain data (heightmap, normal map, overlay images, etc.) is split into streamable blocks for each LOD layer and is optionally compressed. In some cases, texture blocks might need to overlap slightly with neighbouring ones to avoid rendering artifacts near the border of the areas due to texture filtering on the edges.
At run time, the same LOD selection algorithm used for rendering is also used to select nodes in range, which are then used to mark data blocks as potentially visible and to stream them in or out accordingly. The streaming data block granularity is usually much smaller than the quadtree granularity, so one data block contains data for many quadtree nodes.
Different streaming LOD settings can be used to balance higher quality rendering and lower memory use.
Example code
Two example projects are provided: BasicCDLOD and StreamingCDLOD. Both projects are written in C++, using DirectX9 and HLSL, and should work on most GPUs that support vertex shader texture sampling, i.e., ones supporting Shader Model 3.0.
BasicCDLOD implements the CDLOD technique in its basic form. StreamingCDLOD implements the technique in a practically usable form, with lower base-algorithm memory use and simple data streaming.
The source code is distributed under Zlib license; see the end of this paper for download links.
Performance
Performance measurements presented here are obtained using the StreamingCDLOD implementation of the algorithm.
As with other similar GPU-based techniques, performance of the CDLOD will be limited by the GPU's ability to execute the vertex shader and/or rasterize triangles. Although sampling textures in the vertex shader can be very expensive, this cost is usually not the main limiting factor due to the good screen triangle distribution, which ensures that a very high triangle number is not required to achieve good visual quality.
Comparison
When compared to the methods in [Asirvatham et al. 05], CDLOD produces higher quality triangle distribution and utilization, fewer or no rendering artifacts, and a similar triangle output and render batch count [Wloka 03]. On the downside, there is a small added memory cost required for the quadtree storage.
On the other hand, the algorithm of [Ulrich 02] has a potentially higher GPU performance for the similar visual quality as it renders the precalculated adaptively tessellated terrain mesh, but it is much less practical than a heightmap-based approach for the same reason. It also suffers from similar visual artifacts as [Asirvatham et al. 05] due to lack of smooth LOD distribution, as detailed in the Introduction.
Datasets
These are the datasets used for testing:
- califmtns_large: 48897 x 30465 heightmap with a normalmap and a simple texture splatting technique;
- hawaii: 13825 x 16769 heightmap with a normalmap and an overlay topographic map;
- puget: the classic “Puget Sound” 16385 x 16385 heightmap with no normalmap, no dynamic lighting, no detail map, and an overlay color map with embedded lighting.
Settings used for the measurements provide balance between quality and performance. The datasets are large enough for realistic profiling of memory use (especially the califmtns_large) and CPU/GPU performance; increasing the source heightmap/image size will proportionally
increase hard disk storage requirements and quadtree memory use (which is not streamed in the example code), but it will not significantly affect total memory use or CPU/GPU performance.
**CPU performance**
Since no data is generated on the CPU and a small amount of data is transferred to the GPU per frame, the algorithm is inherently GPU-limited in almost all scenarios. Most of the required CPU time is consumed by DirectX9 rendering API calls, while a smaller portion of it is spent in the actual CDLOD algorithm (quadtree traversal and data selection). The usual number of render batches per LOD level is ten. With seven to nine LOD levels used in demo datasets, an average of 60 to 120 render batches per frame is required to render the terrain.
*Table 1* shows typical CPU time spent during one frame on quadtree traversal (together with streaming data selection) and DirectX9 rendering (where applicable) on three different CPU platforms.
The cost of the data streaming is not presented as it is directly dependent on the observer movement, compression algorithms, and data size, and it is usually executed on separate threads anyway (as in the StreamingCDLOD example).
As presented, the DirectX9 rendering cost should only be used as a guideline since the rendering code is not very optimized and can also vary greatly depending on the driver, OS version, etc.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Render batch count</th>
<th>Rendered triangle count</th>
<th>Intel Atom330 1.6GHz quadtree (+ rendering)</th>
<th>AMD AthlonXP 1.6GHz quadtree</th>
<th>Intel E8500 3.16GHz quadtree (+ rendering)</th>
</tr>
</thead>
<tbody>
<tr>
<td>califmt.</td>
<td>74</td>
<td>380k</td>
<td>0.573 ms (+ 1.208 ms)</td>
<td>0.295 ms</td>
<td>0.085 ms (+ 0.415 ms)</td>
</tr>
<tr>
<td>puget</td>
<td>84</td>
<td>105k</td>
<td>0.525 ms (+ 0.937 ms)</td>
<td>0.280 ms</td>
<td>0.075 ms (+ 0.268 ms)</td>
</tr>
<tr>
<td>hawaii</td>
<td>83</td>
<td>446k</td>
<td>0.537 ms (+ 1.189 ms)</td>
<td>0.287 ms</td>
<td>0.077 ms (+ 0.350 ms)</td>
</tr>
</tbody>
</table>
*Table 1* Typical CPU time spent during one frame on quadtree traversal and DirectX9 rendering on three CPU platforms.
**GPU performance**
The performance bottleneck on the GPU is either the vertex texture fetch cost (used for displacement of terrain vertices), or the triangle rasterization and pixel shader cost. This mostly depends on the settings, GPU hardware, and display resolution.
*Table 2* shows typical framerates (frames/second) over various display resolutions and different datasets. The datasets califmtns_large and hawaii are displayed using lighting based on one directional light and normals from the high resolution normalmap texture, with no overlay texturing. The puget dataset uses only overlay texture with baked lighting.
Measurements were performed on a Core2 Duo 3.16GHz system, except for the NVidia ION, which uses a 1.66GHz Atom 330 CPU.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Rendered triangle count</th>
<th>NVidia ION<br>640x480</th>
<th>NVidia 8800GT<br>1024x768</th>
<th>NVidia 8800GT<br>1680x1050</th>
<th>ATI 5870<br>640x480</th>
</tr>
</thead>
<tbody>
<tr>
<td>califmt.</td>
<td>380k</td>
<td>85</td>
<td>75</td>
<td>56</td>
<td>760</td>
</tr>
<tr>
<td>puget</td>
<td>105k</td>
<td>192</td>
<td>161</td>
<td>95</td>
<td>2037</td>
</tr>
<tr>
<td>hawaii</td>
<td>446k</td>
<td>78</td>
<td>60</td>
<td>53</td>
<td>690</td>
</tr>
</tbody>
</table>
*Table 2* Typical framerates (frames/second) over various display resolutions and different datasets.
Memory use
Memory usage depends on the quadtree granularity, streaming settings, and data. Changing the exporter LOD settings and viewing range can change the memory use by an order of magnitude. For that reason, care must be taken to establish the right balance between rendering quality and memory use. Table 3 shows typical memory usage for the demo datasets. The data includes the heightmap, normal map, and image overlay textures, measured using default settings and worst-case observer locations.
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Resolution (heightmap / detailmap) and coverage</th>
<th>Viewing range</th>
<th>Quadtree memory use</th>
<th>Texture memory use (hm+nm+om)</th>
</tr>
</thead>
<tbody>
<tr>
<td>califmt.</td>
<td>10 / 5 m, 488 x 304 km</td>
<td>92 km</td>
<td>4,735 KiB</td>
<td>30 MiB, 26 MiB, 34 MiB (90 MiB total)</td>
</tr>
<tr>
<td>puget</td>
<td>10 m, 163 x 163 km</td>
<td>175 km</td>
<td>853 KiB</td>
<td>24 MiB, 0 MiB, 14 MiB (38 MiB total)</td>
</tr>
<tr>
<td>hawaii</td>
<td>10 / 5 m, 138 x 167 km</td>
<td>176 km</td>
<td>737 KiB</td>
<td>28 MiB, 30 MiB, 45 MiB (103 MiB total)</td>
</tr>
</tbody>
</table>
Table 3 Typical memory usage of demo datasets.
Memory used to run decompression and other algorithms not directly related to the terrain rendering is not presented, as such usage will vary greatly based on the use scenario and implementation details. This additional memory use is usually substantially lower than the combined quadtree and texture memory use presented in Table 3.
Additional thoughts
Hardware (GPU-Based) tessellation
The newer generation of graphics hardware (Shader Model 5) supports programmable hardware-based tessellation. This can be used to provide the best of both worlds when rendering a terrain: performance and good visual output of an adaptively tessellated mesh and the flexibility of a heightmap-based approach.
Since CDLOD performs the transition between LOD layers using a smooth, continuous morph of triangles with no discontinuities or T-junctions, additional subdivision can easily be applied on top without breaking continuity, which makes it a good platform for hardware tessellation.
Performance on older hardware
In the case of older graphics cards (Shader Model 2 or lower), the heightmap sampling in the vertex shader can be too costly or even impossible due to the lack of hardware capabilities. In that case, merging the CDLOD and ChunkedLOD [Ulrich 02] techniques is an option: the benefits of a precalculated, adaptively tessellated mesh can be combined with the continuous LOD behaviour of the CDLOD framework.
This can be achieved by pre-calculating a tessellated triangle mesh, simplifying it for each successive less detailed LOD level, and splitting it into the quadtree. Each vertex can contain additional morph data so that during the LOD transition some vertices can be moved into neighbouring ones in the vertex shader, eliminating triangles and effectively morphing the more detailed LOD mesh into the less detailed one. This technique provides correct continuous three-dimensional distance-based LOD, removes the need for stitching strips, and avoids the major problem that exists in the approach in [Ulrich 02].
The downsides of this approach, compared to the heightmap-based one, are the lengthy data pre-processing step, more difficult mesh compression, and the fact that real-time terrain modification would not be possible.
References
Demos, code, data
Additional material can be found online:
Binaries and a small example dataset
http://www.vertexasylum.com/downloads/cdlod/binaries_tools_testdata.exe
Complete source code
http://www.vertexasylum.com/downloads/cdlod/source.zip
Example datasets
http://www.vertexasylum.com/downloads/cdlod/dataset_califmtns.exe,
http://www.vertexasylum.com/downloads/cdlod/dataset_hawaii.exe,
Example animations
http://www.vertexasylum.com/downloads/cdlod/cdlod_calif.wmv,
http://www.vertexasylum.com/downloads/cdlod/cdlod_hawaii.wmv,
http://www.vertexasylum.com/downloads/cdlod/cdlod_params.wmv
Paper revision 1
Originally published in the “journal of graphics, gpu and game tools”:
http://jgt.akpeters.com/papers/Strugar10/
Filip Strugar, filip.strugar@gmail.com, filip.strugar@rebellion.co.uk
Towards a Framework for Agile Development of Physical Products
Influence of Artifacts and Methods
Annette Isabel Böhmer
Institute of Product Development
Technical University of Munich
Munich, Germany
boehmer@pe.mw.tum.de
Maximilian Meinzinger
Institute of Product Development
Technical University of Munich
Munich, Germany
Rafael Hostettler
Institute of Robotics & Embedded Systems
Technical University of Munich
Munich, Germany
rh@gi.ai
Alois Knoll
Institute of Robotics & Embedded Systems
Technical University of Munich
Munich, Germany
knoll@in.tum.de
Udo Lindemann
Institute of Product Development
Technical University of Munich
Munich, Germany
lindemann@pe.mw.tum.de
Abstract—A typical agile project is characterized by fuzziness. However, this decreases from iteration to iteration by the learning effect of measurable partial results, like prototypes or product increments [6, p. 20]. In the agile process, artifacts are generated by the application of methods. They contain all the information that is required by the product development process and reflect the current state of knowledge. They are thus information carriers. These artifacts can be either physical/tangible or virtual/immaterial. The goal of this work is to evaluate the influence of methods and artifacts on agile projects to serve as a basis to derive an agile framework. To this end, the collected data is structured into an analysis framework allowing for a systematic evaluation. Therefore, the Makeathon Think.Make.Start. was analyzed. During this agile development of innovative products, artifacts are a central element and carrier of information.
Keywords—agile; prototype; makeathon; iterative; framework
I. INTRODUCTION
Innovation is a hardly understood and highly complex system [1, p. 67]. Especially the early phases of innovation processes are characterized by a high uncertainty about the problem and solution space. Agile approaches such as design thinking are recommended in these early stages to mitigate the uncertainty. The focus on customer or user needs facilitates the iterative concretization of the problem-solution fit by empathizing with the user [2, p. 5]. These insights are gained by creating various prototypes that enable interactive user tests. Based on the test variable, prototypes are of different forms and types. They vary from very simple paper or cardboard models, mock-ups, function patterns to fully functional elaborations.
Using specific prototypes, the product becomes more concrete with every iteration. The fuzziness of the project decreases and the requirements become more specific. At the same time, however, the project becomes more immobile. With each iteration, the team's range of options decreases along with the depth and breadth of its decision tree. Moreover, the initially planned solution usually changes in the process due to the gain of knowledge [1].
With Think. Make. Start. [3], a lecture format has been created that follows this iterative methodology for the development of de-novo physical products. Based on data acquired during the fourth instantiation of the format, insights on the role of artifacts created in the process as well as of the methods applied from the agile development corpus were gained. To systemize these insights, they are structured through an analysis framework.
II. FOCUS OF THE STUDY
This work focuses on the early stages of product development. The product development process is the central element of an innovation process and generally includes all basic
activities from the identification of innovative ideas, the development of prototypes to the start of production [4, p. 488]. All activities, from the idea to the practical implementation in the form of market launch, are therefore considered. These are subject to the influence of various disciplines, e.g. organization, marketing or construction [5, p. 3]. They are therefore characterized by high complexity and a priori unknown interdependencies.
One way of counteracting this is to use agile methods throughout the product lifecycle [1, p. 65]. Such agile approaches are widespread, especially in software development [6, p. 18], while the development of physical products is traditionally oriented towards linear approach models [1, p. 84]. However, there are now also approaches to integrating and applying agile methods [7, p. 21]. For example, companies are increasingly trying to implement Scrum as a project management method to make their development activities faster and more efficient [8, p. 87].
In practice, though, there are investigations which reveal problems and challenges in the transfer of agile methods to physical products. For example, their adoption is poor due to the high organizational effort in companies [9, p. 5]. The actual targets, e.g., shorter development times or lower costs, are not achieved as a result [9, p. 2]. Another example is the limited applicability of agile methods due to the physical product design [10, p. 10]. This is illustrated by the example of the creation of a prototype, which can be done quickly and cheaply in software development because of its virtual nature, but not for a physical product.
### III. BACKGROUND
#### A. Artifacts within Agile Product Development
Artifacts are a central element of this work, so their meaning in this context is explored further. The word "artifact" derives from the Latin arte (= with skill) and factum (= the made). Depending on the context, different meanings are attributed to the term. For example, an artifact in archeology is an object which has been given its form by human action, whereas in electronics it denotes a disturbance signal [11].
In Unified Modeling Language (UML), an artifact is defined as an element that arises during a development process, during the deployment, or during the application of a software system. This is a physical piece of information, such as a source code file or a table within a database [12, p. 213]. [13, p. 12] defines artifacts beyond the UML as all the results of software development, which are represented in a particular notation (e.g., natural language or programming language). These may also be intermediates. Here, a distinction can be made between material (source texts, drafts, documentation, etc.) and immaterial artifacts (for example, methods, knowledge or concepts) [14, p. 81].
Apart from software development, [15, p. 21] describes an artifact as an object on which knowledge is stored as information. It is interesting to note that, in a certain form, all objects represent an artifact; for example, even a simple product like a table can be an artifact because reverse engineering can generate information from it. The definition in [16, p. 49] includes the origin and use of the artifact, describing the term as a virtual or physical business object which can be generated, processed, or eliminated in processes.
While the definitions of the term artifact presented so far are very broad, [17, p. 37] restricts the understanding to the context of machine and plant engineering. Here, an artifact is a component of a procedural model: artifacts describe, in document-like form, which components are to be developed within a project and which (intermediate) results are produced. They therefore specify what is to be developed with respect to the product.
As illustrated in Fig. 1, artifacts are created when certain activities are performed by employees who have a specific role [19, p. 75]. Existing artifacts can also be modified, and the use of a given artifact can trigger the need for a new one. The production process is influenced by the methods used as well as by information (e.g., guidelines) and framework conditions (e.g., the presence of tools) [18, p. 443].
![Fig. 1. Elements of an agile procedure model (according to [20, p. 17, 21, 18, p. 443]).](image)
Artifacts contain all the information that is required by the product development process. They can be either physical/tangible or virtual/immaterial; they are thus information carriers. Under the influence of methods and tools as well as guidelines and frameworks, artifacts are generated, edited, or eliminated within a procedural model. Similarly, the use of an old artifact can trigger the need for a new one. Depending on the respective situation, they flow into the different submodels (roles, activities, sequences).
![Fig. 2. Correlation of artifacts and activities (according to [18, S. 443]).](image)
An action model defines who is responsible for the creation of certain elements (product model) at which time (run model). Likewise, it is specified how the sequence of the procedure is designed (activity model) [21]. Overall, an approach model thus specifies who has to do what, in which role, and when. Developer support is provided through the provision of methods and tools. Requirements and general conditions must always be taken into account; these may be externally prescribed standards (e.g., ISO standards) as well as internal specifications (e.g., compliance policies). Incorporating the correlation of artifacts and activities, the following correlation of elements within an agile procedure model can be derived (see Figure 2).
B. Agile Product Development Model
Linear approach models are often described as sequential, classic or traditional [22, p. 5, 23, p. 70]. A well-known model, often referred to as a typical example or synonym, is the waterfall model [24, p. 329]. These procedures are characterized by extensive and complete planning at the beginning of the project [22, p. 5].
Evolutionary or iterative approach models were developed next. One of the first known approaches was the spiral model developed by Barry W. Boehm [25, p. 63]. In the iterative approach, the development proceeds from a general specification step by step into ever more concrete tasks [26, p. 75]. The entire system is thus developed in several iterations. Planning takes place only over short-term time horizons, and adjustments can be made after each run based on experience and learning effects [27, p. 21]. In analogy to linear models, the procedure is top-down, but jumps and deviations from the standardized procedure are permitted (see Figure 3b) [18, p. 560]. In the iterative approach, planning recedes into the background and the result gains in importance. These results relate to the prototypes built [28, p. 55].
In agile procedures, long-term planning is not the focus; the customer and the end result have the highest priority [28, p. 55]. Since planning does not cover long periods, requirements are initially only roughly defined [27, p. 23] and are detailed during the project. The main focus of the agile approach is adaptability and flexibility; the unpredictability of certain events is consciously accepted [1, p. 74]. The entire development is divided into iterations within which the individual development phases are passed through without a fixed sequence [6, p. 19]. Rather, it is the responsibility of the team [23, p. 47] to complete the required phases. As can be seen in Figure 3c, the individual steps can even be performed in parallel.
An important feature is the provision of prototypes, or rather potentially deliverable product components, the so-called product increments [29, p. 36; 30, p. 242]. This step-by-step deployment implies that requirements are fulfilled in turn over the course of the project [28, p. 55]. A typical agile project is characterized by fuzziness; however, this decreases from iteration to iteration through the learning effect of measurable partial results such as prototypes or product increments [6, p. 20].
Furthermore, the procedure in all agile methods relies strongly on the implicit knowledge and skills of the team members [31, p. 7]. If the development and the progress of the project are not sufficiently documented, difficulties can arise at a later stage for individual team members [32, p. 333]. In this context, [33, p. 33] note that the use of implicit
![Fig. 3. Characteristics of procedure models (according to [23, p. 70; 17, p. 47; 27, p. 19]).](image)

knowledge can make a great contribution to the flexibility and speed of agile methods.
In conclusion, it is important to note that a certain degree of documentation is essential in the development of physical products. As a result, a large number of artifacts are created during the innovation process. Since this subject area is still under-researched, the following chapters focus on artifacts.
C. Classification of Artifacts
A first approach to classifying artifacts is to divide them by individual project phases. [34, p. 8] divide artifacts of agile project management in software development into the following categories: requirements, program code, tests, delivery items, planning, and control. These can be divided more generally into planning, requirements and specification, change management, and testing [35, p. 3]. Based on this, [36, p. 26] developed a detailed categorization of artifacts:
- Planning (e.g., Sprint Plan)
- Requirements (e.g., Product Backlog)
- Development (e.g., software codes)
- Tests (e.g., test reports)
- Change-Management
- Governance artifacts
The last-mentioned category contains all artifacts that cannot be classified in any of the previous categories. These are, for example, risk assessments or product standards.
Furthermore, artifacts can be classified by their type. According to [37], requirement artifacts can be divided into three categories: container, individual element, or solution element.
A container can be understood as a kind of collection point: a document that holds the project together. It can be an artifact container consisting of individual or solution elements, such as the Product Backlog, or a generic document that is continuously updated (e.g., a text document describing the product) [37, p. 136]. Individual elements are either user-oriented (e.g., a use case) or technical (e.g., system requirements). The last kind of artifact is the solution element; these are either concrete (e.g., a GUI mockup) or abstract (e.g., a spreadsheet) [37, p. 137].
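To make the taxonomy of [37] concrete, the following sketch models it as a small data structure. This is purely illustrative: the class and field names (`Artifact`, `Kind`, `subtype`) are our own assumptions, not terminology from [37].

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical model of the requirement-artifact taxonomy of [37]:
# every artifact is a container, an individual element, or a solution element.
class Kind(Enum):
    CONTAINER = "container"            # e.g., Product Backlog
    INDIVIDUAL = "individual element"  # e.g., use case, system requirement
    SOLUTION = "solution element"      # e.g., GUI mockup, spreadsheet

@dataclass
class Artifact:
    name: str
    kind: Kind
    # Sub-type per [37]: "user-oriented"/"technical" for individual elements,
    # "concrete"/"abstract" for solution elements; containers have none.
    subtype: Optional[str] = None

backlog = Artifact("Product Backlog", Kind.CONTAINER)
use_case = Artifact("Use Case", Kind.INDIVIDUAL, "user-oriented")
mockup = Artifact("GUI Mockup", Kind.SOLUTION, "concrete")

print(backlog.kind.value)  # container
```

Such a structure would allow the artifacts observed in a project to be tagged and counted per category, which is essentially what the data collection described later does by hand.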
A further classification of requirements artifacts was made by analyzing different strategies in the implementation of requirements engineering. Three directions of impact could be determined, into which the resulting artifacts can also be divided:
- solution orientation: focus on the customer
- functional orientation: focus on applications and interfaces
- problem-orientation: focus on business and economic needs [38, p. 19]
According to [39], the data collected is classified into one of three categories: feasibility, viability, and desirability. Feasibility measures the design's technical functionality, desirability its value for the customer as well as the likelihood of purchase, and viability the ability of the design to fit time and budget constraints. Using these three variables, a three-dimensional space is created.
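The placement of a design in this three-dimensional space can be sketched as a small scoring function. This is a minimal illustration under our own assumptions: [39] does not prescribe numeric 0..1 scores, a threshold, or the function name `classify`; the example values are invented.

```python
# Hypothetical sketch: placing a design in the three-dimensional
# feasibility/desirability/viability space described by [39].
def classify(feasibility, desirability, viability, threshold=0.5):
    """Return the dominant orientation of a design and whether all
    three dimensions reach an (assumed) minimum threshold."""
    scores = {
        "feasibility": feasibility,    # technical functionality
        "desirability": desirability,  # customer value, likelihood of purchase
        "viability": viability,        # fit with time and budget constraints
    }
    dominant = max(scores, key=scores.get)           # strongest dimension
    balanced = all(v >= threshold for v in scores.values())
    return dominant, balanced

dominant, balanced = classify(0.8, 0.6, 0.4)
print(dominant, balanced)  # feasibility False
```

A design that scores high on only one axis (as above, weak on viability) would sit near one corner of the space, while a balanced design sits near its center.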
IV. RESEARCH DESIGN
A. Think.Make.Start. – An Agile Framework
Think.Make.Start. (TMS) is a practical course at the Technical University of Munich (TUM) in cooperation with UnternehmerTUM. TMS brings together 50 students from different backgrounds, such as Mechanical Engineering, Informatics, Computer and Electrical Engineering, the School of Management, and others (Medicine, Communication Management, etc.). The students allocate themselves into teams under the constraint that each team must represent at least three different faculties.
The projects' topics are freely chosen but limited to a budget of about 400 EUR. The teams are supported by coaches from the corresponding faculties and have free access to the MakerSpace, a large workshop. TMS is characterized by time pressure, competition, and an open community. The students learn agile and traditional methods and principles, but their team- or time-specific application is not predefined. The resulting agile product development approach integrates knowledge and methods from different disciplines into a genuine synthesis of approaches.
B. Research Question
In order to formulate the components of a best-practice framework, it is necessary to understand the role of artifacts and methods in an agile process. Thus, the following hypotheses are the main focus:
- When applying agile procedures in physical product development, methods are used and artifacts are created.
- There are dependencies and interdependencies between the use of methods and the creation of artifacts.
- A low number of different methods is sufficient to adequately cover the areas of feasibility, desirability, and viability, and to create sufficient artifacts.
- The use of methods and artifacts favors efficient and fast product development.
C. Research Method and Data Used
To approach the research topic, exploratory research was applied due to the unknown nature of the findings. The concept of exploratory research originally comes from the social sciences but is now also found in other areas, such as market research [41, p. 37]. It is used when there is little or no scientific knowledge about a research subject, but there is nevertheless a presumption that interesting elements exist [42, p. 6]. The most important characteristics and prerequisites for carrying out exploratory research are flexibility in the search for data as well as openness and creativity during the research process [41, p. 37]. Frequently used methods are interviews, expert discussions, and literature searches [41, p. 22; 42, p. 37].
A combination of qualitative and quantitative methods is used for the little-explored field of agile approaches in the
development of physical products. The starting point is a comprehensive literature research in order to create a common and uniform understanding of important basic concepts. The focus is also on agile procedures in the innovation processes of physical products.
The quantitative part consists of data gathered during the course Think.Make.Start of the Technical University of Munich and UnternehmerTUM, which took place at the UnternehmerTUM MakerSpace in Garching from 05.10. to 18.10.2016. During the course, the student teams were accompanied daily and their procedures documented. On the one hand, all activities such as applying methods, creating artifacts, or using prototypes were systematically documented. On the other hand, the reasons and motivation for carrying out certain activities were determined through open and spontaneous individual and group interviews.
By combining the qualitative and quantitative results, a first generic concept is established. This corresponds to the qualitative part of the "partially known phenomenon" from Figure 5. The inductive creation of an agile framework and of a logic for structuring its components provides a qualitative basis on which a quantitative comparison with the data collected from TMS can ultimately take place. This corresponds to the deductive application and verification of the previously developed ideas.
The following diagram summarizes the implementation of the research methods in this work.
Fig. 5. Exploratory research of this study according to [43, p. 6].
D. Data Collection
In order to present the results in a structured manner, the derived logic for the structuring of the components is transferred into a new schema (see Figure 6).
Methods and artifacts are now the focus; they occupy the upper and lower halves of the schema. The characterization of the purpose is based on the three defined criteria. The "why" corresponds to the step in the generic innovation process and is shown in the upper part of the figure, the "what" in the lower part. In some cases a further subdivision is made in order to present the results in a more structured way; for example, identification consists of the sub-categories "problem definition and re-definition" and "recognizing needs and synthesis". The origin of the artifacts and methods, i.e., the corresponding procedure model, is indicated by different coloring; the associated legend is located below the image. Since the "when" refers to the corresponding iteration, it is not shown in the data basis.
![Fig. 6. Logic to fill the theoretic agile framework ("Why": methods, upper half; "What": artifacts, lower half).](image)
For collecting the data, interviews, observations, and questionnaires as well as the teams' documentation and presentations were used. The created prototypes, their purpose, and the findings gained from them were recorded, as were the sequence of and links between the prototypes. To this end, the documents provided by the teams were analyzed each day. Data describing the prototypes, the hypotheses tested with them, and the findings gained were documented; in discussions with the team members, missing information and additional prototypes were queried. It proved difficult to gather data on software prototypes, as the programming was done mainly "quick-and-dirty" and the individual steps were therefore not documented transparently. The focus of the data collection was thus on gathering all available information about physical prototypes.
V. RESULTS
A. Classification of Artifacts
Artifacts and methods can be categorized by their orientation towards desirability, feasibility, and viability. Prototypes are divided into methods (manual, laser cut, rapid prototyping) and artifacts (function, geometry, concept); they, too, can be assigned to the above-mentioned orientations. This is not the case for containers, since their purpose is to hold the project together. Figure 7 gives an overview of the clusters.
It turns out that many artifacts and methods can have different orientations. The affiliation then depends on the respective use case; for example, research can theoretically cover all three sub-areas. A further classification is therefore based on the practical use during TMS.
B. Temporal Assignment
The temporal assignment allocates the artifacts and methods to the respective iteration. Artifacts are counted when they are created, used, or modified; methods when they are applied.
With respect to the temporal distribution, at least 10 different artifacts and 5 different methods were in use, created, or modified in every iteration except the last. Many artifacts and methods are repeatedly in use over several iterations and are therefore very important for the teams. The number of artifacts is consistently higher than that of the methods; this is explained by the fact that one method can result in multiple artifacts, and artifacts can be created without using a method. The use of artifacts has a clear maximum in iterations 3, 4, and 5. The need for support in the form of codified and visual information (e.g., sketches or mind maps) is therefore initially higher than towards the end of the project, when the product itself has already assumed concrete form. In contrast, method use is rather constant up to iteration 6. This underpins the hypothesis that a low number of different methods is sufficient to adequately cover the areas of feasibility, desirability, and viability, and to create sufficient artifacts.
C. Frequency of Artifacts and Methods in the Stages of the Generic Innovation Process
The frequency of the artifacts and methods occurring in each case is examined for further analysis. The results are shown in Figure 8. Each iteration is divided into two halves, artifacts are listed on the left and methods on the right.
Overall, the chart shows very clearly that the teams actually worked agilely during TMS. In some iterations several stages of the generic innovation process are traversed; in iteration 3 it is even all six. Furthermore, there are also jumps within the process stages. For example, in iteration 6 the identification of product ideas is repeated, although this activity is actually one of the first in the development of an innovation.
The frequency of occurrence of artifacts and methods can be further divided into two larger blocks. The first block refers to the steps identify, generate ideas, and select the idea, and extends to iteration 4; characteristically, it focuses on the innovation idea. The second block starts in iteration 4 and covers the areas of requirements / target / clarify problem, solution alternatives and concept development, as well as elaboration. Here, the concentration on the concrete product is very pronounced. Particularly interesting is the fact that the sub-section
“requirements / target / clarify problem” can be referred to as a kind of connection or transmission element. The occurrence of artifacts and methods is already apparent from iteration 2 up to the end of the project. As mentioned above, most different artifacts and methods are used in this stage. It is therefore of great importance as a transformation element between the innovation idea and the concrete product.
Finally, it can be stated that containers are created in early iterations, which is consistent with their purpose of providing overview and cohesion. Furthermore, the production and use of prototypes as experimental objects is constant over all iterations, which likewise matches their actual goal of obtaining regular feedback.
In terms of frequency, the most important artifacts are source code, questionnaires, hypotheses, sketches, and CAD models. The most common containers are the Lean Canvas and the Scrum Board. The remaining artifacts are distributed over the iterations without any notable pattern, because they were produced and used on the basis of specific project- or team-dependent characteristics and preferences; for example, one team often worked with an external product. The most popular methods are interviews, brainstorming, and research. In addition, at least one pivot was carried out in each of the first 5 iterations. With regard to the experiments, manual prototyping and the laser cutter were used to create conceptual and functional prototypes.
D. Analysis of Interdependencies – Example Bikorsa
In the following, the example of team Bikorsa is outlined in detail (see Figure 9). The basic vision of Bikorsa was the development of an innovative new bicycle bag. For this purpose, a brainstorming session for identifying customer problems was carried out in iteration 2, supplemented by a competition analysis of existing solutions. Furthermore, first interviews took place. The first own concept idea was visualized in sketches and turned into a conceptual prototype made of cardboard by manual prototyping. During the day, the team also created a Lean Canvas to aggregate the findings so far.
An analysis of the first surveys led to the creation of a questionnaire and hypotheses in iteration 3, which were used in interviews. In order to structure the project work better, a Scrum Board was introduced. During a search for possible materials, a new competition analysis took place in parallel, which, combined with the results of iteration 2, resulted in a competition list. For the sake of clarity, mind maps and sketches were created, and a real bicycle (an external product) was included for its geometrical dimensions.
Based on this information, Bikorsa produced a new prototype with a new design. In Iteration 4, the conceptual prototype was used in the interviews. On the basis of the results, the questionnaire and the hypotheses were updated again.
Within the framework of a brainstorming session, ideas were gathered, as the team had not yet determined the desired functionalities; these were collected in a mind map. Due to the abundance of possible functions, the team felt insecure and conducted an expert discussion, which ultimately resulted in a pivot: the team limited itself to certain functionalities. At the same time, first consideration was given to technical feasibility and the necessary components, and a first CAD model of the fixture was made.
In addition, a list of requirements as well as a resource list was developed to record the findings so far. Due to the pivot, Bikorsa developed the product idea anew in iteration 5 and restricted the target group; in this context a user story was created. In addition, the existing competition analysis was expanded and evaluated with regard to the new target group, resulting in a competition matrix.
The product functions now required were collected in a function list and a morphological box. In addition, part of the team researched solutions for existing brackets in order to refine the existing CAD model on this basis. The first product increment, a functional fixture produced by laser cutting, could now be presented. In addition, programming for the brake light was performed (second increment).

In iteration 6, a new questionnaire and new hypotheses were designed to test the previously defined reorientation in interviews. As a result, the focus was placed on city bikers who value safety, user-friendliness, and design. To evaluate the design, two different prototypes were built and reactions were tested. Iteration 7 was largely devoted to technical refinement: due to problems with the brake light, a circuit diagram was created, and Bikorsa also designed a landing page. In the end, the previously developed bracket, the brake light, and the actual cover could be combined into a functional overall concept. Iteration 8 served to finalize the docking station and create the MVP, while preparations for the final presentation ran in parallel.
VI. SUMMARY
This work analyzed the influence of artifacts and methods in agile development projects for physical products. Since there is little scientific knowledge in this area, the methodology of exploratory research was applied.
In order to make the research area more transparent, the current state of research was examined. On the basis of an extensive literature review, important definitions were established before different forms of expression in product development were considered. By examining specific approach models relevant to Think.Make.Start, a generic innovation process was derived. Due to its thematic importance, the role of artifacts in product development was then analyzed.
Based on these results, a first step towards an agile framework for development projects of physical products was taken. A further literature review led to a logic for structuring the components of the framework. This is of great importance, since it allows a systematic and uniform mapping of agile development processes. The logic was then populated on the basis of the theory in order to create a data basis for the analysis.
These concepts were essential for a targeted analysis of the data from TMS. First of all, the artifacts and methods used could be compared with the data collection from the theory. The resulting logic allowed a systematic investigation of temporal use, of the affiliation to the stages of the generic innovation process, and of the procedural models. Furthermore, a grouping into the clusters desirability, feasibility, and viability was carried out. Finally, the interdependencies between methods and artifacts were analyzed: the innovation process of each individual team was presented in a clear, uniform schema in order to identify similarities and differences, supplemented by a textual description of the respective procedures. On this basis, the hypotheses formulated above can now be reviewed.
Hypotheses 1, 2, and 3 are confirmed. Hypothesis 4 is only partly confirmed: artifacts and methods are necessary and quite useful, but no general statement regarding their role can be made yet. Further research efforts are needed here.
VII. DISCUSSION AND OUTLOOK
In order to systematically map the setting of Think.Make.Start, the agile framework was created. This was a necessary prerequisite for analyzing the components of agile development projects in a planned and clear manner, and overall it captured the various approaches of the TMS teams well. For applications in other agile projects it should be verified in more detail and adapted as required.
The framework was structured using the logic for its components. For this purpose, a literature review identified existing approaches, from which an own logic was then derived. It should be noted that changes to the generic innovation process or the agile framework must also be reflected in the structure of this logic. Nevertheless, the logic provides a very good starting point for systematically mapping the different elements of an agile development.
To ensure a targeted analysis of the TMS data, a data basis was necessary against which assignments and comparisons could be checked. Such a data basis was not previously available in the literature; it is based on the artifacts and methods of the investigated approach models and can be extended and supplemented at any time by further analyses. The data developed here provided an extensive basis for a systematic investigation of the development projects.
Data collection was very broad, including quantitative and qualitative data, e.g., from spontaneous surveys of the team members. This combined data was finally evaluated together with the knowledge and concepts developed in the course of this work.
In particular, the logic for structuring the components of the framework proved extremely suitable for obtaining insights and making first derivations. The analysis of the interdependencies turned out to be more complicated because of the teams' very individual approaches. However, the systematic mapping of each individual development project over all iterations provided a suitable foundation for initial analyses. Further research efforts would be useful here to obtain more detailed insights.
ACKNOWLEDGMENT
Many thanks to the team of UnternehmerTUM and the students of TUM for their support and enthusiasm. Besides the Technical University of Munich, the Vector Foundation and the Zeidler Research Foundation funded TMS. Finally, a big thank you to the MakerSpace team for their support.
This project has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement No 720270 (Human Brain Project SGA1).
REFERENCES
shiny and rmarkdown R packages: Regulatory Compliance and Validation Issues
A Guidance Document for the use of affiliated R packages in Regulated Clinical Trial Environments
September 2020
RStudio PBC
250 Northern Ave
Boston, MA USA 02210
Tel: (+1) 844 448 1212
Email: info@rstudio.com
1. Purpose and Introduction
The purpose of this document is to demonstrate that the shiny and rmarkdown R packages[10], when used in a qualified fashion, can support the appropriate regulatory requirements for validated systems, thus ensuring that resulting electronic records are “trustworthy, reliable and generally equivalent to paper records.” For R, please see the document below:
R: Regulatory Compliance and Validation Issues
A Guidance Document for the Use of R in Regulated Clinical Trial Environments [8]
What qualifies as a record?
Validation guidance is a result of regulation, most notably the US Food and Drug Administration’s 21 CFR Part 11[2]. This regulation was originally written to apply to health records. The definition of a record has subsequently been clarified and extended.
A key consideration for companies subject to this regulation is determining the extent to which analytic software systems such as R, and the outputs they produce (web applications, documents, presentations, etc.) constitute records or derived records subject to Part 11 compliance[3].
RStudio, following the work of the R core foundation and the R Validation Hub, does not consider the outputs created, in whole or in part, by R and R packages to be records directly subject to compliance regulations[8]. However, these outputs, and as a result the software used to create them, should follow the spirit of the regulation and strive to be trustworthy and reliable. Risk-based monitoring was first documented by the FDA in its reflection paper and guidance in 2011. Following this interpretation of FDA guidance, RStudio recommends that organizations consider a risk-based approach to the use of R and R packages[1]. This document outlines important considerations regarding the risk of using a set of R packages affiliated with RStudio, PBC. Separate from these packages are RStudio’s products, which are developed using a controlled process that consists of distinct development phases as outlined here:
RStudio: Regulatory Compliance and Validation Issues
Risk-Based Approach and Open Source Software
Before covering the specific details of the Software Development Cycle and 21 CFR Part 11 compliance functions related to these affiliated packages, it is worth quickly noting the role of a risk-based approach in open source software. Noting that the use of analytic software is often complex, the FDA has clarified[2] that organizations should take a risk-based approach aimed at ensuring analysis is trustworthy and reliable. Each organization will adopt its own standards and definitions for risk.
This document is intended to provide a reasonable consensus position on the part of RStudio relative to the use of packages (for the shiny and rmarkdown ecosystems) within regulated environments and to provide a common foundation for people to meet their own internal standard operating procedures, documentation requirements, and regulatory obligations. Risk assessment and management are possible. More so, it is our view that at a much deeper and fundamental level, open source software fulfills the role of a “trustworthy and reliable” system far beyond any closed-source, proprietary software. Specifically, open source software is always available, to those interested, to be inspected and reviewed. By its very nature, the availability of open source software is not subject to the rise or fall of specific corporations. Nor is its use, review, and improvement subject to the economic means of the user. As a result, the outputs and methods of open source software are more amenable to being shared, more open to challenges and improvements, and significantly more repeatable and reproducible.
What is an R package?
R packages are extensions of the base R language that bundle code, data, and documentation[14]. R packages exist as components in an ecosystem of software used together for analysis. In addition to this document, we advise that users refer to:
Regulatory Guidance for the R Language[8]
Regulatory Guidance for the Tidyverse, Tidymodels, r-lib, and gt R Packages
R Validation Hub[1]
The R Validation Hub resource strives to provide organizations with a risk-based approach to validating R add-on packages that are not expressly addressed in this document or other guidance documents[1]. The R Validation Hub is supported by The R Consortium, Inc., a group organized under an open source governance and foundation model to support the worldwide community of users, maintainers, and developers of R software[4]. As of August 2020, its members include organizations such as Roche/Genentech, Microsoft, Google, Merck, Janssen Pharmaceutica, and other leading institutions and companies dedicated to the use, development, and growth of R. Such efforts include supporting the R in Pharma conference.
Scope of this Guidance
This document applies to the shiny and rmarkdown R packages maintained by RStudio. These two packages have unique characteristics and considerations compared to the majority of R packages. As opposed to containing statistical routines or analysis methods, these two packages are both generative, enabling users to create their own classes of outputs. Shiny enables users to create web applications. R Markdown allows users to create documents, reports, presentations, and a variety of other output types. Both packages are foundational to a large cohort of additional packages specific to the sub-domain of R they enable; for example, many packages extend shiny. This document discusses the principles that guide the development of these packages and gives considerations for the use of each. However, because the packages exist in an ecosystem of dependencies and reverse dependencies, it is incumbent on each user of the packages at any moment in time to qualify their installation, ensuring a complete and compatible cohort of packages and related software.
This document is NOT, in any fashion, applicable to any other R-related software or add-on packages. It is important to note that there is a significant obligation on the part of the end-user's organization to define, create, implement, and enforce R installation, validation, and utilization related Standard Operating Procedures (SOPs). The details and content of any such SOPs are beyond the scope of this document. This document also does not provide guidance for software or artifacts derived from these foundational packages. For example, a web application developed with the shiny package may require its own validation considerations.
This document is not intended to be prescriptive, does not render a legal opinion, and does not confer or impart any binding or other legal obligation. It should be utilized by the reader and his or her organization as one component in the process of making informed decisions as to how best to meet relevant obligations within their own professional working environment.
RStudio, Inc. makes no warranties, expressed or implied, in this document.
2 Software Development Life Cycle (SDLC)
2.1 Operational Overview
The shiny and rmarkdown R packages follow a common development lifecycle[10]. The size of the R user community provides for extensive review of the source code and testing in enterprise settings; because all users have full access to the source code, they are well positioned to anticipate and verify the performance and results produced by the packages discussed in this document.
2.2 Source Code Management
The source code is managed using Git, a widely-used open source version control system. The source is stored in public Git repositories and made available through GitHub, a Microsoft subsidiary that hosts Git repositories. Specifically:
- R Markdown Source Code: https://github.com/rstudio/rmarkdown
- Shiny Source Code: https://github.com/rstudio/shiny
2.3 Testing and Validation
The packages are tested through a combination of unit tests, CRAN checks[15], and integration tests. Unit tests are written to cover specific functions and features provided by each package. The test suites are run in an automated fashion, and the test results, as well as the test coverage (the amount of code exercised by the package unit tests), are publicly displayed. CRAN checks, also run automatically, provide a thorough test of whether the package source code can be built and installed across a variety of operating systems. Any package accepted on CRAN must pass a series of automated tests that enforce the CRAN submission policies[15]. These checks also verify consistency between function definitions and their documentation. The results of these tests are available publicly for each package. Tests are executed automatically when any changes are proposed to the code, documentation, tests, or metadata in the package. These tests are run across multiple versions of R and multiple operating systems. Finally, packages released on CRAN undergo a test for compatibility with other dependent and reverse-dependent packages.
Additionally, the shiny package undergoes a comprehensive integration test with each release, consisting of applying the package update to a wide variety of derived shiny applications and testing for any changes in functionality. These tests take advantage of the automated tooling available for testing web applications built using shiny[12].
2.4 Release Cycle
Packages are developed incrementally, following the best practices of the Git version control system. At specific points in time, a package will be released by incrementing the package version number, tagging the commit as a release in Git, archiving the new sources, and submitting the package to CRAN.
For each release, comprehensive release notes are maintained within the repository, and a summary of the major changes and features in a release is documented in a news changelog[9]. Releases tend to be accompanied by a blog post summarizing the major changes and features (see examples for shiny and rmarkdown)[9].
2.5 Availability of Current and Historical Archive Versions
The source code for all packages, including every revision to this source code, is maintained in GitHub, a distributed service that provides free and public access to the projects’ Git repositories (see section 2.2).
The released and archived versions of each package are maintained on CRAN, an extensive network of repository mirrors. Archive package versions can be found via the CRAN Package Archive. RStudio maintains a CRAN mirror with current and archived packages at https://cran.rstudio.com. The RStudio mirror uses Amazon Cloudfront to maintain copies of CRAN on servers all over the globe. These copies are updated off of the main CRAN mirror in Austria once per day. RStudio created this mirror to provide a consistently fast option around the world, a reliable option for users, and to provide a rich source of data about R and package usage. In addition, RStudio maintains a history of the CRAN mirror, accessible at https://packagemanager.rstudio.com.
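Archived releases follow a predictable directory layout on CRAN, which makes retrieving a specific historical version straightforward. The Python helper below is an illustrative sketch (not part of any package discussed in this document) that builds the conventional archive URL; the shiny version number used in the example is illustrative:

```python
def cran_archive_url(package: str, version: str,
                     mirror: str = "https://cran.r-project.org") -> str:
    """Build the conventional URL of an archived source tarball on CRAN.

    Archived (superseded) releases live under
    src/contrib/Archive/<pkg>/<pkg>_<ver>.tar.gz; the current release
    of a package lives directly under src/contrib/ instead.
    """
    return f"{mirror}/src/contrib/Archive/{package}/{package}_{version}.tar.gz"

# Example: an older shiny release (version number is illustrative).
url = cran_archive_url("shiny", "1.4.0")
```

The same layout applies on any full CRAN mirror, so the `mirror` argument can point at an organization's qualified internal mirror instead of the main site.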
The packages are released under public licenses that are not subject to commercial control, ensuring they remain fundamentally available in a way that commercial software cannot guarantee.
2.6 Maintenance, Support, and Retirement
RStudio understands support and maintenance to encompass a wide range of activities, corresponding to the open nature of the package development and use. Each package provides extensive documentation, both within the package for specific functions, but also outside the package to document best practices, examples, and common usage patterns:
- R Markdown: https://rmarkdown.rstudio.com
- Shiny: https://shiny.rstudio.com
The websites provide articles, galleries of examples, and extensive video and written documentation. Numerous supporting materials exist for learners. The rmarkdown package includes a freely available book that serves as a definitive resource guide and another freely available book that contains practical examples[6][5].
A community forum affiliated with RStudio provides an opportunity for direct Q&A with categories dedicated to both shiny and R Markdown. Specific issues related to the software, such as requests for new features or identification of bugs, can be submitted and tracked on the relevant package GitHub site. For example, the shiny and rmarkdown issue boards[10].
2.7 Qualified Personnel
All development of shiny and rmarkdown occurs through open contributions to the packages' source code. Each contribution can be specifically enumerated using the Git version control system. For convenience, the history of contributors is available for each package; for example, the contributors to the rmarkdown package as well as the shiny package. Contributions are peer reviewed in the open and considered in light of the testing and style guides previously documented[10].
While a wide range of contributors have enabled the success of these packages, a core set of primary contributors employed by RStudio have relevant professional qualifications such as advanced degrees in related subject matter, peer reviewed publications, conference talks, and/or industry experience. Many members of RStudio hold Ph.D. and/or Master’s degrees from accredited academic institutions and have published extensively in peer reviewed journals. Several have written books on statistical computing technologies and applications using the packages detailed in this document.
2.8 Physical and Logical Security
The physical and logical security of the package source code is handled through GitHub’s disaster recovery plan and security standards.
2.9 Disaster Recovery
The disaster recovery of the package source code is handled through GitHub’s disaster recovery plan.
3 21 CFR Part 11 Compliance Functionality
3.1 Overview
The United States regulation known as Title 21 CFR Part 11, or the “Electronic Records; Electronic Signatures” rule[3], provides information about what constitutes trustworthy and reliable electronic records and electronic signatures. FDA industry guidance for the use of electronic health record data in clinical investigations is here[13]. RStudio continues to monitor FDA regulations and guidelines that pertain to packages covered in this document. People can use RStudio products and packages to build data collection, analysis, and other systems that can be used in compliance with Part 11. Compliance with this regulation ultimately depends on
how a package is installed and used, how users are trained, and other factors. Users need to deploy packages according to the system requirements, install them according to the installation instructions, and operate them as described in the user documentation. Users should refer to the predicate rule or consult the FDA or its guidance documents to determine whether packages are in compliance with regulatory expectations.
shiny and rmarkdown are R packages that extend the base R language. As a result, these packages inherit the CFR Part 11 compliance guidelines and interpretation documented for the base R language[8]. For this reason, we recommend all compliance considerations for R packages begin with the interpretation and compliance guidelines for R itself.
However, RStudio recognizes that these packages are often used in novel applications that go beyond the "performance of calculations or creation of graphics". Indeed, both shiny and rmarkdown provide frameworks for extending the R programming language to create entire application suites. As such, we provide guidance for how CFR compliance can be approached in the practical application of these packages. However, it is important to note that the R packages often provide only a piece of the final product; they are often combined with other systems and tools for distribution and use. Organizations should take care to validate the entire system, not just individual components. We provide additional guidance for the use of RStudio's professional products which, in conjunction with the R packages, provide a holistic system[11].
Additionally, as mentioned in the introduction, a key consideration is whether the artifacts created using shiny and rmarkdown constitute CFR 11 records. Following the base R guidance, RStudio understands that these outputs are typically not records, and the majority of applications derived from these packages are not tools designed for the creation, maintenance, modification, or deletion of CFR 11 records. As such, CFR 11 does not typically apply directly. Quoting the base R guidance:
R's design and development are focused on reporting, by enabling leading edge statistical analysis and presentation, rather than on data management tasks as illustrated by transaction/data processing and related functionality.
In the following sections, the term record means an electronic record that is interpreted to fall within the remit of Part 11 as defined in FDA Guidance for Industry Part 11, Electronic Records; Electronic Signatures– Scope and Application (2003).
R is not intended to create, maintain, modify or delete Part 11 relevant records but to perform calculations and draw graphics[7].
Organizations using shiny or rmarkdown to create applications that are directly related to the maintenance of CFR 11 records should take care to qualify and validate those systems in their entirety.
3.2 11.10(b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection, review, and copying
The shiny package allows users to create web applications that are composed of standard web components such as HTML, JavaScript, and CSS. These web applications can then be combined with a web server and a client browser to generate a human readable, interactive application. The Shiny package is tested to ensure a wide range of compatibility with these components, especially client browsers.
The rmarkdown package is designed to support literate programming, allowing the results of computation to be included alongside prose in a variety of formatted and human readable outputs including RTF, Word, PDF files, HTML files, and presentations. These outputs are typically viewed through a client application such as a web browser or PDF viewer. The outputs are designed to be compatible with a wide variety of nearly ubiquitous client tools.
3.3 11.10(c) Protection of records to enable their accurate and ready retrieval throughout the records retention period
The shiny package creates interactive applications that require both a client browser and an active server running the user code. As such, to enable the ready retrieval of such applications, an organization must use the shiny package in conjunction with a web server designed to host these applications. RStudio provides guidance for RStudio Connect which is one such web server[11].
The rmarkdown package creates a variety of output formats. The resulting outputs are typically stand-alone files or directories of files that can then be stored and retrieved through a validated system of record for files.
3.4 11.10(d) Limiting system access to authorized users
The development of code that uses the shiny and rmarkdown R packages occurs through the use of the R language and as such the guidance for the base R language applies:
*R is an application that runs within the hosting computer environment, which must provide user access controls at hardware and/or operating system levels. The requirement for this section is typically met via system level functionality and is based on user roles, object level security and related security policies*[8].
However, RStudio also recognizes that once written, code that utilizes the shiny and rmarkdown packages is often deployed. In these cases, the deployed system must provide these access controls.
RStudio provides guidance for RStudio's professional products which implement these authorization constraints in both development and deployment of code using the shiny and rmarkdown frameworks[11].
3.5 11.10(e) Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Record changes shall not obscure previously recorded information. Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available for agency review and copying
The shiny package allows users to create interactive web applications. It is incumbent upon the creators of these applications to incorporate logging that generates such time-stamped audit trails.
The rmarkdown package automatically outputs time-stamped logs anytime the package is used to render content. It is incumbent upon the organization to store and/or supplement these default logs.
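As a purely hypothetical illustration of how an organization might supplement such logs, the Python sketch below appends one time-stamped JSON record per event to an append-only file. The function name, record fields, and log format are assumptions made for illustration; they are not features of rmarkdown or shiny:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_render_event(log_path: Path, operator: str,
                     document: str, action: str) -> dict:
    """Append a time-stamped audit record as one JSON line.

    Opening the file in append mode ("a") never rewrites earlier
    entries, so prior records are not obscured, in the spirit of
    the 11.10(e) requirement quoted above.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "document": document,
        "action": action,
    }
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```

A real deployment would also need to address retention, tamper-evidence, and access controls for the log itself, which this sketch does not attempt.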
RStudio provides guidance for RStudio's professional products that implement additional audit and storage capabilities for both the development and deployment of code using the shiny and rmarkdown frameworks[11].
3.6 11.10(f) Use of operational system checks to enforce permitted sequencing of steps and events, as appropriate
Following the base R guidance, RStudio understands this item to mean that effective user technology, processes, and interfaces must be in place to reduce errors made by an operator. Both the shiny and rmarkdown packages are built to include error handling and provide functions for users to capture, diagnose, and customize the behavior of outputs in scenarios where errors occur, as outlined in this document[14].
3.7 11.10(g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand
Following the base R guidance, RStudio refers to the recommendations in 11.10(d), understanding authority checks here as intimately related to system authorization.
This section inherits the same guidance found in the base R documentation[7].
3.8 11.10(h) Use of device (e.g., terminal) checks to determine, as appropriate, the validity of the source of data input or operational instruction
RStudio understands these checks to be warranted in the case where shiny or rmarkdown are used as a primary-source data management or data entry system. In these cases, it is incumbent on users to implement these quality checks.
Specifically, in the context of shiny applications that perform data management or entry, patterns exist for ensuring the validity of a user (see 11.10(d)) as well as for quality checks on entered data. However, the shiny package does not itself provide these capabilities directly.
3.9 11.10(j) The establishment of, and adherence to, written policies that hold individuals accountable and responsible for actions initiated under their electronic signatures, in order to deter record and signature falsification
Neither the shiny nor rmarkdown packages provide functions for electronic signatures. In cases where applications or outputs derived from these packages are determined, through the application of relevant predicate rules, to require an electronic signature, it would be incumbent on the organization to establish policies and integrate with a 3rd party system.
3.10 11.10(k) Use of appropriate controls over systems documentation
RStudio understands this item to mean that there must be control over who can change and access documentation for these packages. As documented in section 2, the packages provide for specific contribution guidelines including contributions to documentation, and core documentation is maintained and governed in the same version control systems as the source code.
3.11 11.30 Controls for Open Systems - the system shall employ procedures and controls designed to ensure the authenticity, integrity and, as appropriate, the confidentiality of electronic records from the point of their creation to the point of their receipt, including additional measures such as document encryption and use of appropriate digital signature standards to ensure, as necessary under the circumstances, record authenticity, integrity and confidentiality
This section inherits the same guidance found in the base R guidance[8].
Key References
CS 297 Report
Context-Sensitive Wiki Help System for Yioop
Advisor: Dr. Chris Pollett
By
Eswara Rajesh Pinapala,
Department of Computer Science,
San Jose State University.
Introduction
A Context-sensitive Online Help system provides targeted information to users based on their context (i.e., where they are, what their current state is, or what job they are performing) with respect to the application. Context-sensitive help can take several forms: usually a widget, an overlay page, or a hyperlink to a completely different page or window. The user need not move out of context to get help for the current task. Each help topic is intended to apply exclusively to a given context.
A Wiki-based Help System allows users to collaborate on help content. Most wiki-based support systems are set up as portals: centralized sites where users can get help and contribute to help content. One problem with a wiki-based help system is that the user has to move out of context and search for help in a large pool of help articles.
A Context-Sensitive Wiki Help System combines the aspects of context-sensitive help with the collaborative nature of a wiki. Users get targeted help based on their context, but they also have the ability to edit and contribute to the help content.
The beginning of this writing project will be a study of help content authoring tools with support for context-sensitive help systems [1] [2] and wiki systems. Using the information acquired from the study, I will build a feature set that will be used in the development of the context-sensitive wiki help system for web applications. As part of one of the deliverables, I will also be building a basic wiki editor. I will be using Yioop, an open-source search engine application developed by Dr. Chris Pollett, as a platform to integrate with our context-sensitive wiki-based help system.
2. Deliverables
2.1 Deliverable 1: Research on Context-sensitive help
2.1.1 Research on Context-sensitive help authoring tools.
The purpose of this deliverable is to get a clear understanding of how context-sensitive help systems are usually configured and used. Two context-sensitive help content authoring tools were researched to understand context-sensitive help setup and configuration [1] [2]. The two tools configured were Flare by Madcap Software and Robohelp by Adobe. This research was helpful for understanding how to enable help content authors to publish context-sensitive help. In addition to content authoring and publication, it was possible to identify a procedure to guide web developers in easily integrating context-sensitive help into existing web pages.
Studying existing research work on context-sensitive help was another task accomplished as part of this deliverable. The research paper studied was “Context-Sensitive Help for Multimodal Dialogue” [1], published by AT&T researchers, and the presentation studied was “User-centered Design of Context-sensitive Help” [2].
Flare by Madcap and Adobe Robohelp are comparable in features and installation. Flare and Robohelp allow help content authors to easily generate context-sensitive help and integrate it into web applications. This deliverable sheds light on the findings from the setup and configuration of Madcap Flare and Adobe Robohelp.
Figure 1 shows parts of the Yioop configuration page to insert context-sensitive help content
The first step involved in generating help content is to finalize places on the web page where help needs to be inserted. We intend that the Yioop configuration page will have the context-sensitive help inserted at a few spots identified as shown in figure 1.
Madcap Flare and Robohelp provide a way to organize the help content into individual help content topics. A group of help topics can be identified and placed into a help chapter. The identified help topics are as below:
- Install & configure
- Profile settings
- Advanced settings
- Crawl-robot setup
We have used the help content from the official Yioop help docs to gather appropriate help content for the help topics in the above list.
Figure 2 shows the chapter view in Madcap Flare and Adobe Robohelp.
Header files and Alias files
“Header” files and “Alias” files are plain text files used to define which context-sensitive help content is mapped to which parts of the web page. Header files help content authors manage the relationship between the web pages and the help content. Below is an example entry in a header file for Madcap Flare and Adobe Robohelp.
# define {TopicName} {TopicNumericId}
Example: # define CHA1 1000
Alias files, on the other hand, map each help chapter to its hyperlink. In combination with header files, alias files serve as an index for the entire help content: each chapter name is assigned a help content page in HTML format. Alias files are standard XML files containing one mapping per element.
Example mapping element:
Madcap Flare
<Map Name="CHA1" Link="/path/to/CH1a.htm" />
Adobe Robohelp
<alias name="Crawl_Robot_Setup" link="/path/to/Crawl_Robot_Setup.htm" />
Alias files and header files together define a mapping that uniquely identifies the chapters by name and numeric identifier. The HTML or JavaScript code embedded in the web pages can use the topic identifiers from these files: HTML buttons retrieve the relevant help content page by passing the numeric identifier as a parameter to the JavaScript API function. This indirection also gives content authors the flexibility to change the mapping or crosslink help articles without touching the web pages; if a content author wants to change a help topic or swap in a new help article altogether, the changes can be made in the alias file alone.
Once the help content, header files and alias files are created, Madcap Flare and Adobe Robohelp compile all the help content into a single directory. The compiled output also includes a tiny JavaScript framework that can be embedded into any web application; its purpose is to provide an easy API for web developers to integrate the help content. Without the JavaScript framework, developers would have to write their own code against the header file to integrate context-sensitive help pop-ups into their web application.
The help content folder can be moved to a subdirectory on the Yioop web server root directory. The generated help content pages are now visible at http://yioop.com/csh/. The web page displays context-sensitive help in the form of pop-ups. HTML buttons are used to trigger the help pop-ups.
As described earlier, the unique numeric id (CSHID) is used to trigger the concerned context-sensitive help pop-up, with the support of the JavaScript API provided by Flare. A sample HTML button code looks like below.
<input type="button" value="?" onclick="FMOpenHelp('CSHID', null, null, null);" />
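To make the mechanism concrete, the sketch below shows the kind of lookup such a compiled help framework performs behind a call like FMOpenHelp. The helpMap contents, the openHelp name and the paths are hypothetical illustrations, not Flare's actual implementation.

```javascript
// Hypothetical sketch: resolve a numeric CSHID (from the compiled
// header/alias mapping) to a topic URL and open it in a pop-up.
// Map entries and paths are illustrative only.
var helpMap = {
  1000: "/csh/Install_Configure.htm",   // CHA1 in the header file
  1001: "/csh/Profile_Settings.htm",
  1002: "/csh/Crawl_Robot_Setup.htm"
};

function openHelp(cshid) {
  var url = helpMap[cshid] || "/csh/Default.htm"; // fall back to the landing page
  if (typeof window !== "undefined") {            // pop-up only in a browser
    window.open(url, "csh_popup", "width=400,height=300");
  }
  return url;
}
```

A help button would then call something like openHelp(1002) to pop up the crawl-robot setup topic.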
Figure 2 shows the Yioop configuration page integrated with context-sensitive help.
2.1.2 Research on Context-sensitive help.
The second part of Deliverable 1 covers my research on existing context-sensitive help systems. The main intention behind studying existing systems is to understand both the problems and the advantages of context-sensitive help as it has been used in the past, and how such systems have improved over time [2]. During my research, I found that three kinds of context-sensitive help systems are in use: reference-based help, procedural help, and contextual help.
Figure 3 shows reference based context-sensitive help in Windows 3.1
Reference-based context-sensitive help gathers all the help content as reference chapters in a single place. Although the reference documentation is specific to a page or a window, a problem arises when the user is searching for something specific: the user must search through a pool of help content. Reference-based context-sensitive help therefore demands extra time and effort from the user, so it is not always a good choice.
Procedural help [3], as its name suggests, is procedure-oriented: a procedural help article takes the user’s current location and explains how to perform a task there. Procedural help is also useful for showing the user which tasks are possible in a specific area.

**Figure 4 shows procedural context sensitive help in Flare [3]**
Contextual help [5], on the other hand, is additional information that supplements the UI with brief, to-the-point support. It is usually displayed in a pop-up window or beside the relevant fields. Because contextual help is embedded in the application interface, the user never has to leave the interface.

**Figure 5 shows contextual help in Windows 8**
Currently, all the help content in Yioop forms a reference-based context-sensitive help system. The idea is to keep the current reference-based help content intact and to add a context-sensitive help system that focuses on contextual and procedural help.
2.2 Deliverable 2: Research on existing Wiki systems
2.2.1 MediaWiki
This section of the deliverable describes the configuration and features of the MediaWiki system. MediaWiki is an open-source wiki engine written in PHP; Wikipedia is one of the most popular real-life applications of MediaWiki [4]. Creating a wiki page in MediaWiki works the same way as accessing one. Every wiki page in MediaWiki has a title that serves as the page’s unique identifier, and users can search for a page or access its URL directly. If the page exists, MediaWiki renders an editor page for the user to edit it; if the page does not exist, MediaWiki prompts the user to create it. For example, when a page titled “New Page” does not exist, MediaWiki prompts the user to create it, and clicking Create the page “New Page” takes the user to the wiki editor to add content to the new page.
Figure 6 shows the wiki editor to create or edit wiki pages.
MediaWiki organizes content using namespaces, which tie related pages and content together by topic. A good example of a namespace is “Category”. A page can be grouped into an existing category by providing the namespace as part of the wiki source.
Example: [[Category: Games]]
“Category” is the namespace and “Games” is the name of the category.
The MediaWiki editor supports basic content styling features such as bold, italic and underlined text, image insertion, hyperlinks, etc. with its basic WYSIWYG editor. In addition to the edited text, users can add a summary of the editing session. Wiki pages can be edited and revised any number of times; the edit button pulls up the editor for the page concerned.
MediaWiki Revisions
This part of the research will be helpful for building the revision control system in the context-sensitive wiki help system for Yioop. The “page” table in MediaWiki stores the higher-level details of each wiki page, such as the title (page_title), the page id (page_id), the view counter (page_counter), etc.
Figure 7 shows a sample page record.
A table called revision stores all the revisions of all the wiki pages; the figure below shows the schema and fields of the revision table. Every wiki page edit inserts a record into the revision table containing the user who made the edit, the timestamp of the edit and a reference to the "old_text" column in the text table. A revision record thus mainly captures two things: the edit operation and the new wiki text resulting from it. The newest record in the revision table always points to the current version of the wiki text. After each edit, MediaWiki saves the entire wiki content into a data blob field in the database; like other wiki engines, it stores the full contents of the edited page rather than diff metadata.
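The full-content storage scheme described above can be illustrated with a small in-memory sketch. This is not MediaWiki's actual schema, only the same idea: every save appends a complete copy of the page plus metadata, and the newest record is always the current version.

```javascript
// Minimal sketch of full-text revision storage (illustrative, not
// MediaWiki's schema): each save appends an entire page copy.
function RevisionStore() {
  this.revisions = [];
}

RevisionStore.prototype.save = function (user, text) {
  this.revisions.push({
    id: this.revisions.length + 1,
    user: user,
    timestamp: Date.now(),
    text: text            // entire page content, not a diff
  });
};

RevisionStore.prototype.current = function () {
  // The newest record is the current version of the page.
  return this.revisions[this.revisions.length - 1];
};
```

Storing full copies trades disk space for simple, O(1) retrieval of any revision, which is why MediaWiki-style engines prefer it over diff chains.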
Figure 8 shows wiki pages and revision in the database.
2.2.2 TikiWiki
This deliverable describes the configuration and features of the TikiWiki system. TikiWiki is not just a wiki system; its full name is "TikiWiki CMS/Groupware". Tiki is a CMS, wiki system, blog engine and webmail system in one. TikiWiki is written in PHP and requires a web server that can process PHP server-side scripts and a relational database such as MySQL.
It has a huge code base compared to other CMS/Wiki engines. One of the most popular live sites leveraging TikiWiki is the Mozilla support Site at http://support.mozilla.org/en-US/home.
Figure 9 shows TikiWiki home page view.
The TikiWiki home page is a default wiki page which we can edit; we can also create a new wiki page altogether and edit it. Clicking Wiki in the left sidebar pulls up a menu to create a new wiki page, along with the wiki editor. After writing the content, we can add a comment on the current edit in the field "Describe the change you made". Clicking save stores the page in the database.
**TikiWiki Revisions**
TikiWiki stores revisions in the database to enable diff generation and diff comparison for each wiki page. Tiki stores all the wiki page content in the tiki_pages table, but keeps track of version history in a table called tiki_history. When an edit happens to a wiki page, the page details are inserted as a new record into the tiki_history table with a version number (version), and the wiki content, including the edits, is inserted as a data blob in the data field. Again, this is similar to other wiki engines where the entire contents of the edited page are stored instead of diff metadata: TikiWiki saves the wiki pages and their entire edited content as blobs in the database.
The tiki_pages table holds various parameters for each wiki page record. When a new wiki page is created or edited, a new record is inserted into the tiki_pages table, while the tiki_history table stores all the revision history and the edited wiki page blobs.
2.2.3 Fossil Wiki
In this deliverable, we focus on the built-in wiki system in Fossil SCM. Fossil is a software configuration management (SCM) tool with built-in support for bug tracking, a wiki system, and a CGI-enabled web interface. The software uses SQLite for back-end content storage, which makes all transactions atomic and resilient to failures. Fossil's greatest advantage, apart from being open source, is that it is a self-contained single binary file with everything needed to serve SCM needs. With inherent support for revision control and a web interface, Fossil can efficiently implement the features of a wiki system. A Fossil repository keeps all the built-in wiki pages under version control, and users can create, edit, and delete wiki pages in the wiki section.

**Figure 10 shows fossil wiki interface**
**Version control in Fossil**
At a high level, any Fossil repository has two states: a global state and a local state. The global state consists of an unordered set of artifacts. An artifact can be a text file, a binary file, source code, or a meta-artifact that records information about, or relationships between, other artifacts. Fossil artifacts are usually individual files on the file system.
The local state of a Fossil repository, on the other hand, consists of user preferences, user access details, ticket metadata, etc. The local state is specific to one repository, whereas the global state is common to all repositories in a project. The local state does not contain artifacts. Wiki pages are artifacts in Fossil: every wiki page edit takes the form of a new artifact.
Wiki in Fossil
Users can edit all pages in the Fossil wiki system, including the home page; the landing page is just another wiki page. Fossil's wiki markup lets users create standalone wiki pages and is also used in check-in comments, bug reports and bug report comments; there is no separate wiki markup for most functionality in the Fossil wiki. Most of the wiki markup is a restricted set of HTML tags, with some simplified markup for common tasks such as formatting text as bold or italic. Every wiki page or attachment in Fossil carries metadata that describes the artifact.
The other wiki pages in Fossil are called events. Events show up in the Fossil timeline; they are usually used to announce releases and can be tagged with textual information.
2.3 Deliverable 3: Building a wiki editor in JavaScript.
In this deliverable, I developed a prototype of a basic MediaWiki-syntax editor. We decided to use MediaWiki markup as the wiki markup for our context-sensitive wiki help; MediaWiki's popularity and ease of use drove this decision. I took a subset of MediaWiki markup features and used them in this demo.

Figure 11 shows the wiki editor demo prototype.
I have created an editor that enables users to mark up their text with these features, and I have also included experimental preview functionality. The wiki editor only requires the main JavaScript framework to be included in the target HTML page.
```html
<script type="text/javascript" src="js/wikify.js"></script>
```
The wiki editor requires its parent “Div” to be specified in the target page. The “data-” attributes on that “Div” define the wiki editor HTML form configuration parameters, and the JavaScript framework takes care of rendering the wiki editor dynamically, provided the target “Div” is defined with the proper attributes.
Below are the main JavaScript functions used to convert plain text entered by the user into wiki markup. There are two main functions for end-to-end processing: `wikify()` and `getSelection()`.
Once the user selects some text in the wiki editor, `getSelection()` retrieves the selected text along with its suffix and prefix text. The text extraction is implemented separately for MSIE (older IE browsers) and for newer browsers, which solves the cross-browser compatibility problem. The `wikify()` function is a utility that wraps the wiki prefix and suffix around the text selection. Some wiki markup, like bullets and numbered lists, does not require a suffix to be inserted after the selection; the editor is smart enough to make this decision.
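A hedged sketch of this two-step flow is shown below. The function names mirror the report's getSelection()/wikify(), but the bodies are purely illustrative; the real editor's browser-specific selection extraction is omitted.

```javascript
// Illustrative only: split the editor text around the current selection.
// Accepts any object with value/selectionStart/selectionEnd, so it also
// runs outside a browser (the real editor reads these off the textarea).
function getSelection(textarea) {
  return {
    before: textarea.value.slice(0, textarea.selectionStart),
    selected: textarea.value.slice(textarea.selectionStart, textarea.selectionEnd),
    after: textarea.value.slice(textarea.selectionEnd)
  };
}

// Wrap the selection in wiki markup. Line-oriented markup such as bullet
// ("* ") and numbered ("# ") lists only needs a prefix, no suffix.
function wikify(sel, prefix, suffix) {
  var needsSuffix = prefix !== "* " && prefix !== "# ";
  return sel.before + prefix + sel.selected +
         (needsSuffix ? suffix : "") + sel.after;
}
```

For example, wrapping the selected word of "make bold" (characters 5 to 9) with ''' on both sides yields "make '''bold'''".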
2.4 Deliverable 4: Building Feature set for the context-sensitive wiki help.
This deliverable focuses on the feature set of the context-sensitive wiki help for Yioop. The following are the features that our context-sensitive wiki help will include, from the front-end user's perspective.
Invoking Help
- Users can invoke help with a dedicated “Help” button at the top right. Clicking the help button takes the user into help mode.
- There are two ways help will be displayed to the user: contextual help (using tooltips) and procedural help (displayed in a window pane). Procedural help will be task-oriented, explaining a possible task and how to perform it.
- There will be predefined areas on the page to which help can be attached; these areas are called “help points.” Help points are used to invoke procedural help.
- A question mark beside an element will allow users to invoke contextual help.
- For procedural help, the help content will be exposed in the right pane. The help panel opens when the user clicks any “help point” on the page.
- If the user clicks a help tip specified to provide contextual help, the help is shown in a tooltip.
Editing Help content
- For procedural help, users will be able to edit the help content right in the help window pane itself. An edit button appears while help content is displayed and is hidden while the content is being edited.
- For contextual help, the help window will be used to edit the tooltip help content for all tooltips in one place. It might be hard to make the tooltips directly editable, so a list of the tooltips can be displayed, which users can easily edit.
- There will be a preview button for previewing changes made to the help content. The preview applies to both procedural and contextual help.
Wiki system and Wiki editor features
- The primary portion of the wiki system is the wiki editor, which will support all the basic functionality of MediaWiki markup.
- The editor will permit users to upload images/files that can be attached to help articles. The insertion of files/images will follow MediaWiki markup, where filenames are used to track or insert files into help articles.
- If the editor is left inactive for n (configurable) seconds, the wiki editor page will time out.
- The wiki system will save help content revisions with metadata that includes the timestamp at which the author saved the content.
- The wiki system will feature a diff engine that displays the difference between two revisions line by line.
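The diff engine itself is future work; as a sketch of one way it could work (a classic longest-common-subsequence pass over lines, not a committed design):

```javascript
// Minimal line-by-line diff sketch: LCS over lines, emitting "- " for
// removed lines, "+ " for added lines and "  " for unchanged lines.
function lineDiff(oldText, newText) {
  var a = oldText.split("\n");
  var b = newText.split("\n");

  // Build the LCS length table.
  var lcs = [];
  for (var i = 0; i <= a.length; i++) {
    lcs[i] = [];
    for (var j = 0; j <= b.length; j++) {
      if (i === 0 || j === 0) {
        lcs[i][j] = 0;
      } else if (a[i - 1] === b[j - 1]) {
        lcs[i][j] = lcs[i - 1][j - 1] + 1;
      } else {
        lcs[i][j] = Math.max(lcs[i - 1][j], lcs[i][j - 1]);
      }
    }
  }

  // Walk the table backwards to recover the edit script.
  var out = [];
  i = a.length;
  j = b.length;
  while (i > 0 || j > 0) {
    if (i > 0 && j > 0 && a[i - 1] === b[j - 1]) {
      i--; j--;
      out.unshift("  " + a[i]);
    } else if (j > 0 && (i === 0 || lcs[i][j - 1] >= lcs[i - 1][j])) {
      j--;
      out.unshift("+ " + b[j]);
    } else {
      i--;
      out.unshift("- " + a[i]);
    }
  }
  return out;
}
```

For example, lineDiff("a\nb\nc", "a\nx\nc") yields ["  a", "- b", "+ x", "  c"].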
3. Conclusion
For Deliverable 1, I researched context-sensitive help content generation and context-sensitive help system usage. The results helped me understand how to generate, integrate and organize context-sensitive help articles. From studying different context-sensitive help systems, I was also able to determine what kind of context-sensitive help Yioop requires.
For Deliverable 2, I researched wiki engines, MediaWiki, TikiWiki and Fossil, to understand how wiki organization, revision control and markup work in each. After thorough research and discussion, Dr. Pollett and I agreed to use MediaWiki markup for the Yioop context-sensitive help wiki. I will also develop the other features of the wiki, such as revision control, a diff engine and back-end storage, following MediaWiki's example.
For Deliverable 3, building on the decision from Deliverable 2, I began designing a JavaScript framework to support the front end of the wiki interface. I started by building a wiki editor prototype from scratch, making it compatible across several old and new web browsers. The wiki editor is now production ready, and I have already submitted it for integration into Yioop.
For the final deliverable, I have produced a design and feature set for my context-sensitive help system. I will use the specifications defined in the feature set to start building the system in CS 298; next semester I will implement the context-sensitive help system including, but not limited to, the features specified in the feature set.
Wiki Markup in MediaWiki, TikiWiki and Fossil Wiki [6] [7] [8]
<table>
<thead>
<tr>
<th>Wiki functionality</th>
<th>MediaWiki</th>
<th>TikiWiki</th>
<th>Fossil</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Bold Format</strong></td>
<td>'''bold'''</td>
<td>__bold__</td>
<td><b>Text in Bold</b></td>
</tr>
<tr>
<td><strong>Italics Format</strong></td>
<td>''italic''</td>
<td>''italic''</td>
<td><i>Italic text</i></td>
</tr>
<tr>
<td><strong>Underline Format</strong></td>
<td><u>underlined</u></td>
<td>===underline===</td>
<td><u>Underlined text</u></td>
</tr>
<tr>
<td><strong>Internal Link</strong></td>
<td>[[a link]]<br/>or [[a link|with title]]</td>
<td>((Name of page))<br/>or ((semantic(link to a page)))<br/>or, if activated: CamelCase WikiWords</td>
<td>[target]<br/>or [target|label]</td>
</tr>
<tr>
<td><strong>External Link</strong></td>
<td>[http://example.org The title]<br/>or [http://example.com]</td>
<td>[http://example.com]<br/>or [http://example.com|label]</td>
<td>[http://example.com|label]</td>
</tr>
<tr>
<td><strong>Headlines</strong></td>
<td>==Section==<br/>===Subsection===<br/>====Sub-subsection====</td>
<td>-=Titlebar=-<br/>! Level 1<br/>!! Level 2<br/>!!! Level 3<br/>!!!! Level 4<br/>!!!!! Level 5</td>
<td>HTML based: <h1>Heading 1</h1> through <h6>Heading 6</h6></td>
</tr>
<tr>
<td><strong>Monospace Format</strong></td>
<td><tt>monospace</tt></td>
<td>-+monospace+-</td>
<td><tt>monospace</tt></td>
</tr>
<tr>
<td><strong>Strikethrough Format</strong></td>
<td><s>strikethrough</s></td>
<td>--strike--</td>
<td><strike>strikethrough</strike></td>
</tr>
<tr>
<td><strong>Superscript Format</strong></td>
<td><sup>superscript</sup></td>
<td>{SUP()}text{SUP}</td>
<td><sup>superscript</sup></td>
</tr>
<tr>
<td><strong>Subscript Format</strong></td>
<td><sub>subscript</sub></td>
<td>{SUB()}text{SUB}</td>
<td><sub>subscript</sub></td>
</tr>
<tr>
<td><strong>Aligning Text</strong></td>
<td><center>Centered</center></td>
<td>::centered text::</td>
<td><center>Centered</center></td>
</tr>
<tr>
<td><strong>Text Indentation</strong></td>
<td>: indented line</td>
<td>leading spaces, if activated</td>
<td>two leading spaces or a tab</td>
</tr>
<tr>
<td><strong>Bulleted Lists</strong></td>
<td>* Item 1<br/>** Item 1.2<br/>* Item 2</td>
<td>* Item 1<br/>** Item 1.1</td>
<td>* Bullet<br/>(bullets are "*" surrounded by two spaces)</td>
</tr>
<tr>
<td><strong>Numbered Lists</strong></td>
<td># Item 1<br/>## Item 1.2<br/># Item 2</td>
<td># Item 1<br/>## Item 1.1<br/># Item 2</td>
<td># Enum<br/>(enumerations are "#" surrounded by two spaces)</td>
</tr>
<tr>
<td><strong>Definition Lists</strong></td>
<td>; term : definition</td>
<td>;term:definition</td>
<td></td>
</tr>
<tr>
<td><strong>Horizontal Rule</strong></td>
<td>----</td>
<td>----</td>
<td><hr></td>
</tr>
<tr>
<td><strong>No Wiki Formatting</strong></td>
<td><nowiki>Text not wiki</nowiki></td>
<td>~np~Text not ===formatted===~/np~</td>
<td><verbatim>Text not wiki</verbatim></td>
</tr>
</tbody>
</table>
References
Group Communication idea
- System supports a new abstraction (like an object)
- A “group” consisting of a set of processes (“members”) that join, leave and cooperate to replicate data or do parallel processing tasks
- A group has a name (like a filename)
- … and a state (the data that its members are maintaining)
- The state will often be replicated so each member has a copy
- Note that this is in contrast to Paxos where each member has a partial copy and we need to use a “learner algorithm” to extract the actual current state
- Think of state much as you think of the value of a variable, except that a group could track many variables at once
Group communication Idea
- The members can send each other
- Point-to-point messages
- Multicasts that go from someone to all the members
- They can also do RPC style queries
- Query a single member
- Query the whole group, with all of them replying
- Example: The Vsync system (but there are many such systems)
Animation: A process joins a group
- S starts by creating an endpoint object, attaching upcall handlers. Once the group endpoint is properly configured, S issues a “join” request. Vsync checks whether the group already exists; if not, S can create a new instance, but in this case the group is already active.
- P still has its own private variables, but now it is able to keep them aligned with the versions tracked at Q, R, and S.
CS5412 Spring 2016 (Cloud Computing: Birman)
P’s endpoint
- Just an object that is P’s “portal” for operations involving the group
- The endpoint lets P see events occurring in the group such as members joining, failing (detected slowly via timeout) or leaving (very fast notification), multicasts reporting updates or other events, queries, etc.
- But no data is automatically replicated. P provides logic to maintain the data it associates with the group.
Vsync is a **library** for group communication
---
**It Uses a Formal model**
- Formal model permits us to achieve correctness
- Vsync is too complex to use formal methods as a development tool, but does facilitate debugging (model checking)
- Think of Vsync as a collection of modules, each with rigorously stated properties
**It Reflects Sound Engineering**
- Vsync implementation needs to be fast, lean, easy to use
- Developer must see it as easier to use Vsync than to build from scratch
- Seek great performance under “cloudy conditions”
- Forced to anticipate many styles of use
---
Vsync makes developer’s life easier
```csharp
Group g = new Group("myGroup");
Dictionary<string, double> Values = new Dictionary<string, double>();
g.ViewHandlers += delegate(View v) {
Console.Title = "myGroup members: " + v.members;
};
g.Handlers[UPDATE] += delegate(string s, double v) {
Values[s] = v;
};
g.Handlers[LOOKUP] += delegate(string s) {
g.Reply(Values[s]);
};
g.Join();
g.OrderedSend(UPDATE, "Harry", 20.75);
List<double> resultlist = new List<double>();
int nr = g.Query(ALL, LOOKUP, "Harry", EOL, resultlist);
```
- First sets up group
- Join makes this entity a member. State transfer isn’t shown
- Then can multicast, query. Runtime callbacks to the “delegates” as events arrive
- Easy to request security (g.SetSecure), persistence
- “Consistency” model dictates the ordering seen for event upcalls and the assumptions user can make. User can tell Vsync how strong ordering needs to be.
But all the replies would be identical!
- Good point…
- In practice, you might have each member do part of the work: 0’th “ranked” process does portion 1 of 4, next does portion 2 of 4.
- They saw identical views so they agree on this ranking. The view has methods to request your rank
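Because every member saw the identical view, each can compute its own rank and share of the work locally, with no extra coordination. A minimal, hypothetical sketch (in Python rather than Vsync's C#; `my_portion` and its arguments are invented for illustration):

```python
def my_portion(view_members, me, items):
    """Return the slice of work this member owns under the agreed view.

    Every member received the identical membership list, so each one
    computes the same ranking and the slices never overlap.
    """
    rank = view_members.index(me)   # my rank in the agreed view
    n = len(view_members)
    return items[rank::n]           # every n-th item, starting at my rank

# Each member of the view computes a disjoint share of the work.
view = ["p", "q", "r"]
work = list(range(8))
shares = [my_portion(view, m, work) for m in view]
```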
The same example again, this time requesting security with g.SetSecure:
```csharp
Group g = new Group("myGroup");
Dictionary<string,double> Values = new Dictionary<string,double>();
g.ViewHandlers += delegate(View v)
{
Console.Title = "myGroup members: " + v.members;
};
g.Handlers[UPDATE] += delegate(string s, double v)
{
Values[s] = v;
};
g.Handlers[LOOKUP] += delegate(string s)
{
g.Reply(Values[s]);
};
g.Join();
g.SetSecure(key);
g.OrderedSend(UPDATE, "Harry", 20.75);
List<double> resultlist = new List<double>();
int nr = g.Query(ALL, LOOKUP, "Harry", EOL, resultlist);  // nr = number of replies received
```
A final variant of the example replaces g.OrderedSend with the cheaper, FIFO-only g.Send; the rest of the code is unchanged:

```csharp
g.Send(UPDATE, "Harry", 20.75);   // FIFO per sender; no total order across senders
```
Concept: Query as a “multi-RPC”
- One member asks multiple group members to perform some action
- It could be doing this on behalf of an external client, and it might participate too
- Often group members subdivide the task (but there could be a fault-tolerance benefit to asking 2 or more to do the same work)
It takes a “community”
- A lot of complexity lurks behind those simple APIs
- Building one of your own would be hard
- Vsync took Ken >3 years to implement & debug
What goes on down there?
- Terminology: group create, view, join with state transfer, multicast, client-to-group communication
- This is the “dynamic” membership model: processes come & go
Clients of a group
- Applications linked to Vsync can access a group by joining it as a member, but can also issue requests as a “client” in RPC style.
- One can also build a group that uses a web service standard (SOAP, WCF, REST) to accept requests from web clients that don’t use Vsync at all. Many cloud services can automatically load balance such requests over the set of group members.
- The representative acts as a “proxy” for the client and can issue multicasts or queries on its behalf.
You build your program and link with Vsync
It starts the library (the new guy tracks down any active existing members)
Then you can create and join groups, receive a “state transfer” to catch up, cooperate with others
All kinds of events are reported via upcalls
- New view: View object tells members what happened
- Incoming message: data fields extracted and passed as values to your handler method
Recipe for a group communication system
- Bake one pie shell
  - Build a service that can track group membership and report “view changes”
- Prepare 2 cups of basic pie filling
  - Develop a simple fault-tolerant multicast protocol
- Add flavoring of your choice
  - Extend the multicast protocol to provide desired delivery ordering guarantees
- Fill pie shell, chill, and serve
  - Design an end-user “API” or “toolkit”. Clients will “serve themselves”, with various goals...
Role of GMS
- We’ll add a new system service to our distributed system, like the Internet DNS but with a new role
- Its job is to track membership of groups
- To join a group a process will ask the GMS
- The GMS will also monitor members and can use this to drop them from a group
- And it will report membership changes
Group picture... with GMS
P requests: I wish to join or create group "X".
GMS responds: Group X created with you as the only member.
T to GMS: What is current membership for group X?
Q joins, now X = \{p,q\}. Since p is the oldest prior member, it does a state transfer to q.
GMS notices that q has failed (or q decides to leave).
GMS to T: X = \{p\}
CS5412 Spring 2016 (Cloud Computing: Birman)
Group membership service
- Runs on some sensible place, like the first few machines that start up when you launch Vsync
- Takes as input:
- Process “join” events
- Process “leave” events
- Apparent failures
- Output:
- Membership views for group(s) to which those processes belong
- Seen by the protocol “library” that the group members are using for communication support
Issues?
- The service *itself* needs to be fault-tolerant
- Otherwise our entire system could be crippled by a single failure!
- So we’ll run two or three copies of it
- Hence Group Membership Service (GMS) must run some form of protocol (GMP)
Group picture… with GMS
Let’s start by focusing on how the GMS tracks its own membership: it can’t just ask the GMS for this purpose, because the GMS is itself a group. We’ll build it first and then use it when building reliable multicast protocols. In fact, it will end up using those reliable multicasts to replicate membership information for the other groups that rely on it.
Approach
- Assume that GMS has members \{p,q,r\} at time t
- Designate the “oldest” of these as the protocol “leader”
- To initiate a change in GMS membership, the leader runs the GMP
- Others can’t run the GMP; they report events to the leader
Example:
- Initially, GMS consists of \(\{p,q,r\}\)
- Then \(q\) is believed to have crashed
Failure detection: may make mistakes
- Recall that failures are hard to distinguish from network delay
- So we accept risk of mistake
- If p is running a protocol to exclude q because “q has failed”, all processes that hear from p will cut channels to q
- Avoids “messages from the dead”
- q must rejoin to participate in GMS again
Basic GMS
- Someone reports that “q has failed”
- Leader (process p) runs a 2-phase commit protocol
- Announces a “proposed new GMS view”
- Excludes q, or might add some members who are joining, or could do both at once
- Waits until a majority of members of current view have voted “ok”
- Then commits the change
Proposes new view: \( \{p,r\}[-q] \): “\( p \) and \( r \); \( q \) has left”
Needs majority consent: \( p \) itself, plus one more (“current” view had 3 members)
Can add members at the same time
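The majority-consent rule can be sketched concretely. This is an illustrative model only, not the real GMP; the `votes` dictionary and the `changes` shape are assumptions made for the example:

```python
def propose_view_change(current_view, votes, changes):
    """Leader's two-phase view change: commit only with majority consent.

    `votes` maps members of the current view to True ("ok") or False /
    missing (no response).  A proposal can exclude and add members at
    the same time; it commits only if a strict majority of the *current*
    view has voted ok, otherwise the leader must keep waiting.
    """
    ok = sum(1 for m in current_view if votes.get(m, False))
    if ok * 2 <= len(current_view):          # no majority yet: cannot commit
        return None
    new_view = [m for m in current_view if m not in changes.get("drop", [])]
    return new_view + changes.get("add", [])

# p proposes {p,r}[-q]; p and r vote ok -- 2 of 3 is a majority, so commit.
committed = propose_view_change(["p", "q", "r"],
                                {"p": True, "r": True},
                                {"drop": ["q"]})
```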
What if someone doesn’t respond?
- P can tolerate failures of a minority of members of the current view
- The next proposal’s first round “overlaps” the previous commit:
- “Commit that q has left. Propose add s and drop r”
- P must wait if it can’t contact a majority
- Avoids risk of partitioning
What if leader fails?
- Here we do a 3-phase protocol
- New leader identifies itself based on age ranking (oldest surviving process)
- It runs an inquiry phase
- “The adored leader has died. Did he say anything to you before passing away?”
- Note that this causes participants to cut connections to the adored previous leader
- Then run normal 2-phase protocol but “terminate” any interrupted view changes leader had initiated
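The takeover can be sketched in the same style. `pending_by_member`, holding whatever each survivor reports during the inquiry phase, is a hypothetical data shape invented for this illustration:

```python
def take_over(survivors, pending_by_member, old_leader):
    """New leader's inquiry phase after the old leader dies.

    Survivors are listed oldest-first, so the oldest surviving process
    simply becomes the new leader.  It gathers any view change the old
    leader had started and terminates it first, then proposes its own
    change excluding the old leader.
    """
    new_leader = survivors[0]                # age ranking picks the leader
    inherited = []
    for m in survivors:                      # "did he say anything to you?"
        pending = pending_by_member.get(m)
        if pending is not None:
            inherited.append(pending)
    return new_leader, inherited + [{"drop": [old_leader]}]

# p (the old leader) died mid-protocol; r still holds p's unfinished proposal.
leader, proposals = take_over(["q", "r"], {"r": {"drop": ["x"]}}, "p")
```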
GMS example
The GMS group
\[ V_0 = \{p, q, r\} \]
- New leader first sends an inquiry
- Then proposes new view: \( \{q, r\} [-p] \)
- Needs majority consent: \( q \) itself, plus one more (“current” view had 3 members)
- Again, can add members at the same time
Properties of GMS
- We end up with a single service shared by the entire system
- In fact every process can participate
- But more often we just designate a few processes and they run the GMS
- Typically the GMS runs the GMP and also uses replicated data to track membership of other groups
Use of GMS
- A process t, not in the GMS, wants to join group “myGroup”
- It sends a request to the GMS
- GMS updates the “membership of group myGroup” to add t
- Reports the new view to the current members of the group, and to t
- Begins to monitor t’s health
The GMS group contains p, q, r (and later, s). Processes t and u want to form some other group, but use the GMS to manage membership on their behalf.
Key ideas
- GMS tracks its own membership using a Paxos-like protocol
- Then it replicates the state of the managed groups, using that same protocol to update those memberships as well
- The group members are “slaves” to the GMS. If they notice an apparent failure, they complain to the GMS and it updates membership.
In fact we’re doing something very similar to Paxos
- The “slot number” is the “view number”
- And the “ballot” is the current proposal for what the next view should be
- With Paxos proposers can actually talk about multiple future slots/commands (concurrency parameter $\alpha$)
- With GMS, we do that too!
- A single proposal can actually propose multiple changes
- First [add X], then [drop Y and Z], then [add A, B and C]...
- In order… eventually 2PC succeeds and they all commit
How does this differ from Paxos?
- Details are clearly not identical, and GMS state isn’t durable.
- Runs with a well-defined leader; Paxos didn’t need one (in Paxos we often prefer to have a single leader but correctness is ensured with multiple coordinators).
- Very similar guarantees of ordering and if we added logging, durability too. (Vsync SafeSend adds this logging).
- Isis GMS protocol predates Paxos. It “bisimulates” Paxos, meaning that each can simulate the other.
We have our pie shell
- Now we’ve got a group membership service that reports identical views to all members, tracks health
- Can we build a reliable multicast?
Unreliable multicast
- Suppose that to send a multicast, a process just uses an unreliable protocol
- Perhaps IP multicast
- Perhaps UDP point-to-point
- Perhaps TCP
- ... some messages might get dropped. If so, the sender eventually finds out and resends them (there are various options for how to do this)
Concerns if sender crashes
- Perhaps it sent some message and only one process has seen it
- We would prefer to ensure that
- All receivers, in “current view”
- Receive any messages that any receiver receives (unless the sender and all receivers crash, erasing evidence...)
An interrupted multicast (GMS not shown)
- A message from q to r was “dropped”
- Since q has crashed, it won’t be resent
Terminating an interrupted multicast
- We say that a message is *unstable* if some receiver has it but (perhaps) others don’t
- For example, q’s message is unstable at process r
- If q fails we want to terminate unstable messages
- Finish delivering them (without duplicate deliveries)
- Masks the fact that the multicast wasn’t reliable and that the leader crashed before finishing up
How to do this?
- Easy solution: all-to-all echo
- When a new view is reported
- All processes echo any unstable messages on all channels on which they haven’t received a copy of those messages
- A flurry of $O(n^2)$ messages
- Note: must do this for all messages, not just those from the failed process. This is because more failures could happen in future
- First an “internal” view shows up
- p had an unstable message, so it echoed it when it saw the new view
- The flush protocol finishes the multicast; now it looks reliable
- Then the new view is “redelivered”, this time visible to the application layer
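The flush step amounts to every survivor ending up with the union of what any survivor holds. A minimal sketch of that effect (not the actual $O(n^2)$ wire exchange):

```python
def flush(held_by_member):
    """Model of the all-to-all echo at a view change.

    Each surviving member echoes every possibly-unstable message it
    holds; after the flurry everyone holds the union, so the new view
    can be delivered with all multicasts completed "in" the old view.
    """
    union = set()
    for msgs in held_by_member.values():
        union |= set(msgs)
    return {m: sorted(union) for m in held_by_member}

# q's last multicast "m3" reached only r before q crashed; the flush
# spreads it to p too, masking the loss before the new view is delivered.
after = flush({"p": ["m1", "m2"], "r": ["m1", "m2", "m3"]})
```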
Event ordering
- We should *first* deliver the multicasts to the application layer and *then* report the new view.
- This way all replicas see the same messages delivered “in” the same view.
- Some call this “view synchrony”
State transfer
- At the instant the new view is reported, a process already in the group makes a checkpoint
- Sends point-to-point to new member(s)
- It (they) initialize from the checkpoint
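State transfer is conceptually just checkpoint-and-load. A minimal sketch, assuming the replicated state is a JSON-serializable dictionary (the real Vsync mechanism is more general):

```python
import json

def checkpoint(state):
    """Existing member snapshots its state at the instant of the new view."""
    return json.dumps(state, sort_keys=True)

def initialize_from(blob):
    """Joining member loads the checkpoint before processing new multicasts."""
    return json.loads(blob)

# p sends its Values dictionary point-to-point to the joiner q.
p_values = {"Harry": 20.75}
q_values = initialize_from(checkpoint(p_values))
```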
After re-ordering, it looks like each multicast is reliably delivered in the same view at each receiver.
Note: if the sender and all receivers fail, an unstable message can be “erased” even after delivery to an application.
This is a price we pay to gain higher speed.
What about ordering?
- It is trivial to make our protocol FIFO wrt other messages from same sender
- If we just number messages from each sender, they will “stay” in order
- Concurrent messages are unordered
- If sent by different senders, messages can be delivered in different orders at different receivers
- This is the protocol called “Send”
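Per-sender sequence numbers are all it takes. A minimal sketch of FIFO delivery, independent of any Vsync internals:

```python
class FifoChannel:
    """Deliver each sender's messages in send order, holding back gaps.

    Messages from one sender stay in order even if the network reorders
    them; messages from different senders remain mutually unordered,
    which is exactly the guarantee of Send.
    """
    def __init__(self):
        self.next_seq = {}    # sender -> next expected sequence number
        self.held = {}        # sender -> out-of-order messages held back
        self.delivered = []

    def receive(self, sender, seq, payload):
        self.held.setdefault(sender, {})[seq] = payload
        expected = self.next_seq.setdefault(sender, 0)
        while expected in self.held[sender]:
            self.delivered.append((sender, self.held[sender].pop(expected)))
            expected += 1
        self.next_seq[sender] = expected

# The network delivers q's second message first; FIFO holds it back.
ch = FifoChannel()
ch.receive("q", 1, "update-b")
ch.receive("q", 0, "update-a")
```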
When is Send used?
- The protocol is very fast
- Useful if ordering really doesn’t matter
- Or if all the updates to some object are sent by the same process. In this case FIFO is what we need
- Send is not the right choice if multiple members send concurrent, conflicting updates
- In that case use g.OrderedSend()
Other options?
- OrderedSend: used if there might be concurrent sends
- SafeSend: Most conservative but also quite costly. A version of Paxos (topic of next lecture)
What does this give us?
- A second way to implement state machine replication in which each member has a complete and correct state
- Notice contrast with Paxos where to learn the state you need to run a decision process that reads $Q_R$ copies
- Vsync replica is just a local object and you use it like any other object (with locking to prevent concurrent update)
- Paxos has replicated state but you need to read multiple process states to figure out the value
- This makes Vsync faster and cheaper
Vsync versus Paxos
- Vsync offers control over message ordering and durability. Paxos has just one option.
- By default, Vsync is a multicast layer that just delivers messages and doesn’t log them.
- But you can log group states in various ways, including exactly what Paxos does.
How can Vsync offer Paxos?
- Via the SafeSend API mentioned last time
- SafeSend is a genuine Paxos implementation
- But it does have some optimizations
- And it has an unlogged mode. For Paxos durability you need to enable the logged feature.
- In normal Paxos we don’t have a GMS
- With a GMS the protocol simplifies slightly and we can relax the quorum rules
- SafeSend includes these performance enhancements, but they don’t impact the correctness or properties of the solution
Virtual synchrony is a “consistency” model:
- **Synchronous runs:** indistinguishable from non-replicated object that saw the same updates (like Paxos)
- **Virtually synchronous runs** are indistinguishable from synchronous runs
Is Vsync harder to use? Paxos was hard...
- We mentioned that just sticking Paxos in front of a set of file or database replicas is tempting, but a mistake
- The protocol might “decide” something but this doesn’t mean the database has the updates
- Surprisingly tricky to ensure that we apply them all
- Vsync: apply update when multicast delivered
- This is safe and correct: all replicas do same thing
- But it does require a state transfer to add members: we need to make a new DB copy for each new member
- Can we do better?
Durability options
- Normal configuration of Vsync is optimized for “in-memory” applications.
- State transfer: make a checkpoint, load it into a joining process, to initialize a joining group member
- Checkpoint/reload can be used to make an entire group remember its state across shutdowns
- SafeSend, the Vsync version of Paxos, can be asked to log messages. This gives a stronger durability guarantee than with checkpoint/restart.
State transfer worry
- If my database is just a few Mbytes... just send it.
- But in the cloud we often see databases with tens of Gbytes of content!
- Copying them will be a very costly undertaking.
Out-of-Band (OOB) technology
- Allows copying big state by replication of memory-mapped files, very efficient
- There is a clever way to integrate OOB transfers with state transfer
- Effect is that with a bit more effort, Vsync won’t need to send big objects through its multicast layer
Vsync DHT
- The system also has a fancy key-value store
- Runs in a group and shards the data
- One-hop get and put: no indirect routing needed!
- Can even put or get multiple key-value pairs at a time, and there is a way to request totally ordered, consistent get and put: gives a form of atomicity
- Then you can do “aggregated query” operations to leverage the resulting parallel computing opportunity
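One-hop get/put works because the key-to-member mapping is a deterministic function of the view, so every member and client computes the same owner. A sketch under that assumption (the real Vsync sharding scheme may differ):

```python
import hashlib

def shard_of(key, view_members):
    """Map a key straight to the member that owns it: no indirect routing.

    Every process holding the same view computes the same owner, so a
    get or put reaches the right member in a single hop.
    """
    digest = hashlib.sha256(key.encode()).hexdigest()
    return view_members[int(digest, 16) % len(view_members)]

view = ["p", "q", "r"]
owner = shard_of("Harry", view)     # deterministic given the view
```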
GridCloud: Example Vsync application
GridCloud: a cloud-hosted, high-assurance system to monitor the electric power grid, sponsored by the Department of Energy ARPA-E program
Goal
Demonstrate a cloud-scale monitoring infrastructure able to host “smart grid” applications: the code that will make the power grid “smart”
Use cases
- Routine balancing of loads and generation
- Grid protection
- Analysis and adaptation after topology changes
- Integration of renewable energy
Challenges
Cloud lacks consistency, assurance, and timing guarantees. Industry demands very strong control over data flow with provable security.
Status
We’re using Isis2 to manage a structure in which replicated data permits high assurance reactive smart-grid monitoring and control. GridCloud features state estimation and GridStat software from Washington State University.
GridCloud enables
Large-scale, distributed power system applications in the cloud
Low-latency and high-reliability using replicated data streams and application processes
Key GridCloud technologies
- ISIS and Dmake to manage a large number of processes running in the cloud
- GridStat to provide multi-cast and rate filtering
- Hierarchical Linear State Estimator (HLSE) as example application
The HLSE uses EC2 Cloud resources to quickly and reliably do full-system SE 5 times per second with thousands of PMU data streams as input
Definitions
PMU (Phasor Measurement Unit)
A synchronized sensor used to measure the voltage and phase angle of a power bus
Linear State Estimator (LSE)
Code that computes the state of a power grid using PMU data as input
2014 goal: Demonstrate GridCloud in a real-world setting, such as the regional transmission network for the North East USA
Who used Vsync?
- We used it to build the GridCloud platform itself
- In particular, in a system management tool and in a special real-time file system
- But Vsync was used internally, inside those solutions
- GridCloud users benefit from Vsync without directly needing to use Vsync.
- They just use the file system to store their data
- And their programs are automatically restarted if needed
Summary
- Group communication offers a nice way to replicate an application
- Replicated data (without the cost of quorums)
- Coordinated and replicated processing of requests
- Automatic leader election, member ranking
- Automated failure handling, help getting external database caught up after a crash
- Tools for security and other aspects that can be pretty hard to implement by hand
Difflog: Learning Datalog Programs by Continuous Optimization
Mukund Raghothaman
rmukund@cis.upenn.edu
Richard Zhang
rmzhang@cis.upenn.edu
Xujie Si
xsi@cis.upenn.edu
Kihong Heo
kheo@cis.upenn.edu
Mayur Naik
mhnaik@cis.upenn.edu
Abstract
Statistical relational models that combine logical and statistical reasoning offer a variety of benefits. A central problem in using such models concerns learning rich logical rules from data. Existing approaches are hindered by employing discrete reasoning during learning. We propose DIFFLOG, an approach and system based on continuous reasoning. It extends Datalog, a popular logic programming language, to the continuous domain by attaching numerical weights to individual rules. This allows us to apply numeric optimization techniques such as gradient descent and Newton’s method to synthesize Datalog programs, which we formalize as the combinatorial problem of selecting rules from a soup of candidates. Our approach is inspired by the success of continuous reasoning in machine learning, but differs fundamentally by leveraging provenance information that is naturally obtained during Datalog evaluation. On a suite of 10 benchmarks from different domains, DIFFLOG can learn complex programs with recursive rules and relations of arbitrary arity, even with small amounts of noise in the training data.
1 Introduction
Logical (discrete) and statistical (continuous) modes of reasoning have complementary benefits: the logical part promises interpretability, extensibility, and correctness guarantees, while the statistical part offers better robustness in the presence of noise and uncertainty. Many models have been proposed to leverage the benefits of both modes without suffering their drawbacks; they are studied in the field of statistical relational learning, and include stochastic logic programs [30], robust logic [45], probabilistic relational models [22], Bayesian logic [24], Markov logic networks [40], probabilistic soft logic [5], and probabilistic Prolog [39].
The two central problems of interest in the study of these models are inference and learning. While remarkable strides have been made in the area of inference, however, there is a dearth of techniques in the area of learning. Prominent learning efforts include inductive logic programming (ILP) [28] and program synthesis [17]. However, in these efforts, the influence of each logical rule considered during learning is discrete: it is either present or absent, which undermines the benefits of continuous reasoning. In contrast, considerable advances have been made in machine learning by virtue of using continuous reasoning. Recent ILP systems such as δILP [13] and NEURALLP [47] have demonstrated the promise of leveraging this style of reasoning, but they are fundamentally limited to learning rules of a constrained form, such as fixed arity relations and no recursion.
In this paper, we propose a novel approach and system called DIFFLOG that employs continuous reasoning to learn rich logical rules from data. DIFFLOG can learn complex programs with recursive rules and relations of arbitrary arity even with small amounts of noise in the training data. We target Datalog, a declarative logic programming language that has witnessed applications in a variety of domains including bioinformatics [19, 41], big-data analytics [18, 42, 43], natural language
We evaluate DIFFLOG on a suite of 10 benchmarks from knowledge discovery and program analysis, and compare its performance and accuracy to two state-of-the-art systems: NEURALLP [47], which also leverages continuous reasoning, and METAGOL [9], which is based on discrete reasoning. DIFFLOG successfully learns the correct program on 8 of the 10 benchmarks, while METAGOL and NEURALLP are only able to handle 3 and 4 benchmarks respectively, and even then, learn a program with non-zero test error.
**Illustrative Example.** We illustrate DIFFLOG using the example of learning a popular program analysis called Andersen’s pointer analysis [4], shown in Figure 1. This analysis reasons about the flow of information in pointer-manipulating programs at compile-time. It is central to statically detecting a broad range of software bugs, proving their absence, enabling advanced compiler optimizations, and so on [44]. For instance, consider the C program in Figure 1(a). It is evident that pointer b1 points to (i.e., holds the address of) c1 due to the statement b1 = &c1. However, it is not evident that d and f may also point to c1 in some execution of the program. In fact, this problem is undecidable—for instance, the “(...)” can be arbitrary code—and any pointer analysis necessarily over-approximates, i.e., it derives spurious points-to facts; for instance, Andersen’s analysis incorrectly concludes that b2 points to c1. This information is represented as a points-to graph shown in Figure 1(b) where true (resp. false) points-to facts are denoted by solid (resp. dashed) edges.
Our goal is to learn the rules of Andersen’s analysis expressed by the Datalog program in Figure 1(d) from the input/output data shown in Figure 1(c). The input data (also called extensional database or EDB) represents relevant facts about the input C program, such as tuples `addr(x, y)` for statements of the form `x = &y`, tuples `load(x, y)` for statements of the form `x = *y`, and so on. The output data (also called intensional database or IDB) represents the exact points-to information, namely, tuples `pt(x, y)` denoting that `x` may point to `y`. These are the tuples corresponding to the solid edges in Figure 1(b).
There are several challenges in learning the above program. First, it includes self-recursive and mutually-recursive rules. For instance, rule `R_2` states that if the program contains a statement `p = q` and if we have deduced that `q` points to `r`, then we may deduce that `p` points to `r`. Second, the rules follow patterns with subtle variations, making it challenging to determine the space of candidate rules.
to consider. For instance, rules $R_1$, $R_2$, and $R_3$ follow the ubiquitous chain pattern, whose general form is $\forall x, y \, (r_0(x, y) \leftarrow r_1(x, t_1), r_2(t_1, t_2), \ldots, r_n(t_{n-1}, y))$. But rule $R_4$ does not obey this pattern.
Many existing approaches do not support recursion, only support binary relations, and only target rules that have a constrained form, such as the chain pattern. In contrast, DIFFLOG supports recursive rules and relations of arbitrary arity. Moreover, it generates a rich soup of candidate rules through a procedure called $k$-augmentation (see Sec. 3.3). It starts out with the chain pattern and applies up to $k$ edits to generate increasingly rich variants; when $k = \infty$, the soup contains all possible Datalog rules, although a small $k$ suffices in practice. For instance, the pattern of rule $R_4$ is generated with $k = 3$; the three edits correspond to the three differences of $R_4$ compared to $R_3$.
Yet another challenge highlighted in the example is that, due to the undecidability of pointer analysis, no Datalog program can possibly capture the exact points-to information for every C program. In this case, we should still learn the rules that best approximate the training data. For instance, even though the tuples $pt(b_1, c_2)$ and $pt(b_2, c_1)$ are excluded from the labeled output, we should still learn Andersen’s analysis rules, even though they end up deriving those tuples. This is possible only by leveraging continuous reasoning; existing approaches based on discrete reasoning fail to generate any rules in such cases. Finally, the training data may contain noise in the form of mislabeled tuples; in such cases too, we should learn rules that best explain the training data.
DIFFLOG satisfies all of the above criteria, and generates the depicted Datalog program in 500 iterations of gradient descent, in a total of 1.5 minutes. In contrast, NEURALLP, which supports neither recursion nor the non-chain-pattern rule $R_4$, learns an approximate program that has 42.8% RMS error even on the training data. Finally, METAGOL [9] and ZAATAR [3] are severely limited in their ability to scale in the presence of recursion and increasing dataset size. As a result, METAGOL times out on 6 of the benchmarks on which DIFFLOG successfully learns the intended program.
2 Related Work
The work most closely related to ours is that of Cohen et al. [7, 8, 47]. Their NEURALLP framework [47] was the first to propose a differentiable approach to learning the structure of logical rules. However, their approach targets a limited logic called TensorLog [7, 8], which does not support recursive rules, focuses on unary and binary relations, and only targets rules having the chain pattern. In contrast, DIFFLOG supports recursive rules, relations of arbitrary arity, and rules with richer patterns. Also, the underlying techniques for efficiently computing gradients are fundamentally different: they leverage back-propagation in deep neural networks, whereas we employ forward propagation based on provenance information. This difference makes DIFFLOG and NEURALLP complementary in their strengths: NEURALLP can learn from very large datasets but only with limited rule patterns, whereas DIFFLOG requires much smaller training datasets but can learn complex patterns in this data.
Another recent differentiable approach is $\delta$ILP [13]. It is capable of supporting restricted forms of recursion but is otherwise limited to binary relations, among other restrictions such as a fixed number of rules for each relation and a fixed number of literals per rule body. Moreover, they attach weights to each combination of rules, rather than to the rules directly. The number of these combinations grows exponentially as a function of the number of rules that define each relation. As a result, they only focus on problems where there are at most two rules that define each relation, thus limiting their expressive power.
The field of inductive logic programming (ILP) has extensively studied the problem of inferring logic programs from examples, as surveyed by Muggleton and De Raedt [28]. The literature on ILP spans a number of foundational theoretical concepts, including $\theta$-subsumption [33], relative generalization [34], refinement [46], and others [12, 26, 27, 31]. Based on these theoretical results, a number of practical ILP systems have been developed [29, 37, 38, 48]. The most significant difference between our approach and ILP lies in the use of continuous versus discrete reasoning, which affects the scalability of rule learning and the resilience to noise. Moreover, ILP systems, whose primary goal is to explain a phenomenon from vast amounts of mined data, are not adept at learning recursive rules that can only be inferred through deep inspection.
The field of program synthesis has targeted the problem of synthesizing recursive programs [2, 14, 20, 21, 32, 35]. Most of these works focus on recursive functional programs that manipulate
Figure 2: A portion of the derivation graph induced by applying the Datalog program of Figure 1 to the specified input relations. \( R_1(b_1, c_1) \) refers to the grounded constraint obtained by instantiating rule \( R_1 \) with \( p = b_1 \) and \( q = c_1 \), and similarly for all other rule instances.
recursive data structures; Datalog programs, in contrast, recursively traverse relations (i.e., hypergraphs), and none of the functional techniques have been applied to this domain. ZAATAR [3] employs an SMT constraint-solving approach to synthesize Datalog programs. It suffers from poor scalability because of the discrete reasoning it employs, using a theorem prover to search the space of Datalog programs and their derivation graphs.
3 Our Framework
3.1 Problem description
We begin with a brief overview of Datalog, as presented in [1]. We assume a collection of relations \( \{P, Q, \ldots\} \). Each relation \( P \) has an arity \( k \), and is a set of tuples \( P(v_1, v_2, \ldots, v_k) \), where \( v_1, v_2, \ldots, v_k \) are constants. Examples include the relations \( pt(p, q), addr(p, q), \text{etc.} \), and the constants \( a_1, a_2, \text{etc.} \) from Figure 1. Some relations (EDB) are explicitly provided as part of the input, while the remaining relations (IDB) are implicitly specified by a collection of rules, each of the form:
\[ P_h(u_h) := P_1(u_1), P_2(u_2), \ldots, P_k(u_k), \]
where \( P_h \) is an output relation, and \( u_h, u_1, u_2, \ldots, u_k \) are vectors of variables of appropriate length. Each rule is a universally quantified logical formula, and is read from right to left, with the “\( \leftarrow \)” operator treated as implication. For example, rule \( R_3 \) from Figure 1 may be read as: “If there is a statement in the program of the form \( p = *q \) (\( \text{load}(p, q) \)), and \( q \) may point to \( r \) (\( pt(q, r) \)), and \( r \) may point to \( s \) (\( pt(r, s) \)), then \( p \) may itself point to \( s \) (\( pt(p, s) \))”.
Instantiating a rule’s variables yields a grounded constraint \( g \) of the form \( P_1(v_1) \wedge P_2(v_2) \wedge \cdots \wedge P_k(v_k) \Rightarrow P_h(v_h) \). In other words, given the set of antecedent tuples, \( A_g = \{P_1(v_1), P_2(v_2), \ldots, P_k(v_k)\} \), the rule produces the conclusion \( c_g = P_h(v_h) \). To determine the value of the output relations, we repeatedly apply rules to the known facts and accumulate additional conclusions until nothing further can be derived. Each output tuple is therefore witnessed by at least one derivation tree leading back to the input tuples—at fixpoint, all of these trees may be compactly represented by a derivation graph such as that shown in Figure 2.
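The bottom-up fixpoint evaluation described above can be sketched in a few lines. The following toy evaluator is purely illustrative (the `edge`/`path` relations are a hypothetical example, not one of the paper's benchmarks):

```python
# Naive boolean Datalog evaluation for two illustrative rules:
#   path(x, y) :- edge(x, y).
#   path(x, z) :- path(x, y), edge(y, z).
# Conclusions are accumulated until no rule instantiation derives
# anything new, i.e., until a fixpoint is reached.

def evaluate(edge):
    path = set(edge)            # first rule: every edge is a path
    changed = True
    while changed:              # iterate to fixpoint
        changed = False
        for (x, y) in list(path):
            for (y2, z) in edge:
                if y == y2 and (x, z) not in path:
                    path.add((x, z))   # grounded constraint fires
                    changed = True
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(evaluate(edges)))  # all six reachable pairs
```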
Given a Datalog program, and a valuation of its input relations \( I \), the query evaluation problem asks to determine the valuation of its output relations \( O \). In this paper, we are interested in the query synthesis problem: Given the input tuples \( I \), output tuples \( O \), and a set of candidate rules \( R \), can we find a subset of rules, \( D \subseteq R \), such that the output of \( D \) on \( I \) is equal to \( O \)?
3.2 DIFFLOG: Extending Datalog with continuous semantics
As a first step to solving the query synthesis problem, we generalize the idea of rule selection. Instead of a binary decision, we associate each rule \( R \) with a numerical weight \( w_R \in [0, 1] \). One possible way to visualize these weights is as the extent to which the corresponding rules are present in the learned program \( D \). Naturally, associating weights with individual rules induces numerical values \( v_t \) for each of their conclusions \( t \): the key design decision is in fixing how the rule weights \( w_R \) determine the tuple values \( v_t \).
Traditionally, a tuple is produced by a Datalog program if there exists some grounded constraint, all of whose antecedents are true, and of which it is the conclusion. Stated differently, the truth value...
\( b_t \) of the tuple \( t \) is the disjunction over all possible rule instantiations \( g \) such that \( c_g = t \), of the conjunction of its antecedents \( A_g = \{t_1, t_2, \ldots, t_k\} \):
\[
b_t = \bigvee_g (b_{t_1} \land b_{t_2} \land \cdots \land b_{t_k}).
\]
(1)
The central idea behind \textsc{Difflog} is to replace the boolean operations \( \lor \) and \( \land \) in the above equation with the arithmetic operations \( \max \) and \( \times \). Combined with the idea of associating rules with weights, we define the value \( v_t \) associated with a tuple \( t \) as follows:
\[
v_t = \max_{\tau} (w_{g_1} \times w_{g_2} \times \cdots \times w_{g_n}),
\]
(2)
where \( \tau \) ranges over all derivation trees with conclusion \( t \), \( g_1, g_2, \ldots, g_n \) are the grounded constraints appearing in the tree \( \tau \), and \( w_g \) is the weight of the rule associated with each grounded constraint \( g \).
For example, the tuple \( t_1 = \text{pt}(b1, c1) \) in Figure 2 is produced by one application of rule \( R_1 \), and the tuple \( t_2 = \text{pt}(d, c1) \) is produced by two applications of \( R_1 \) and one application of \( R_3 \). Thus, if their weights are initialized to \( w_{R_1} = 0.9 \) and \( w_{R_3} = 0.8 \) respectively, then the corresponding tuples values are \( v_{t_1} = 0.9 \) and \( v_{t_2} = 0.9 \times 0.9 \times 0.8 = 0.648 \).
Replacing the operations \( (\lor, \land) \) with \( (\max, \times) \) corresponds to interpreting the Datalog program over the Viterbi semiring instead of the traditional Boolean semiring. As a consequence of this, and because all rule weights are bounded \( (0 \leq w_R \leq 1) \), it follows that Equation 2 is well-defined, even in pathological situations where a tuple may have infinitely many derivation trees [16]. Furthermore, when appropriately instrumented, classical algorithms for solving Datalog programs, such as the seminaive evaluator, also work for \textsc{Difflog}. Finally, we can show that the output values \( v_t \) are continuous functions of the rule weights \( w_R \), and that the provenance of a tuple provides an efficient mechanism to compute the gradient of \( v_t \) with respect to the rule weights, as described in Sec. 3.3.
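As an illustration of this semantics, the max-product fixpoint can be sketched as an iteration over grounded constraints. The grounded constraints and tuple names below are hypothetical, chosen in the spirit of Figure 2:

```python
# Sketch of Viterbi-semiring evaluation: repeatedly update
#   v_t <- max(v_t, w_g * prod of antecedent values)
# over grounded constraints until the values stabilize.

def viterbi_fixpoint(input_tuples, constraints, weights):
    # constraints: list of (rule_name, antecedent_tuples, conclusion)
    v = {t: 1.0 for t in input_tuples}   # input tuples have value 1
    changed = True
    while changed:
        changed = False
        for rule, ants, concl in constraints:
            if all(a in v for a in ants):
                cand = weights[rule]
                for a in ants:
                    cand *= v[a]
                if cand > v.get(concl, 0.0):
                    v[concl] = cand      # a better derivation was found
                    changed = True
    return v

weights = {"R1": 0.9, "R3": 0.8}
inputs = {"addr(b1,c1)", "addr(a1,b1)", "load(d,a1)"}  # hypothetical EDB
constraints = [
    ("R1", ["addr(b1,c1)"], "pt(b1,c1)"),
    ("R1", ["addr(a1,b1)"], "pt(a1,b1)"),
    ("R3", ["load(d,a1)", "pt(a1,b1)", "pt(b1,c1)"], "pt(d,c1)"),
]
v = viterbi_fixpoint(inputs, constraints, weights)
print(v["pt(b1,c1)"], v["pt(d,c1)"])   # 0.9 and approx. 0.648
```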
On the other hand, note that the semantics of \textsc{Difflog} does not form a probability space: while we compute values \( v_t \) for each tuple \( t \), we do not generalize them to combinations of tuples. In particular, we have no analogue for quantities such as \( \Pr(t_1 \land \neg t_2) \). This choice is deliberate: while the data complexity of determining \( v_t \) for a fixed \textsc{Difflog} program can be shown to be polynomial in the size of the input tuples \( I \), the complexity of determining \( \Pr(t) \) is \#P-complete for even the simplest classes of queries over probabilistic databases [10].
### 3.3 Learning \textsc{Difflog} programs by numerical optimization
We evaluate \textsc{Difflog} programs using a modified version of the seminaive algorithm for Datalog [1]. At a high level, at each time step \( x \in \{0, 1, 2, \ldots\} \), the evaluator maintains an association between output tuples \( t \) and their current candidate values \( v_t^x \). The algorithm repeatedly considers instantiations \( g \), all of whose antecedents \( A_g = \{t_1, t_2, \ldots, t_n\} \) satisfy \( v_{t_i}^x > 0 \), and updates the candidate value for the conclusion \( t \):
\[
v_t^{x+1} = \max(v_t^x, v_{t_1}^x \times v_{t_2}^x \times \cdots \times v_{t_n}^x).
\]
(3)
To be able to compute the gradients \( \nabla v_t \) with respect to the rule weights \( w_R \), we also maintain a version of the provenance polynomial \( p_t \) for each tuple [16]. Informally, the provenance describes how the program concluded that \( t \) is an output tuple. We label each input tuple with the polynomial \( p_t^0 = 1 \), indicating that its value is independent of any rule weight. Subsequently, after each application of Equation 3, we update \( p_t^{x+1} \) as follows:
\[
p_t^{x+1} = \begin{cases}
p_t^x & \text{if } v_t^{x+1} = v_t^x, \\
w_g \times v_{t_1}^x \times v_{t_2}^x \times \cdots \times v_{t_n}^x & \text{otherwise,}
\end{cases}
\]
(4)
where \( g \) is the same grounded constraint referred to in the value update expression. Observe that, due to the semantics of the \( \max \) function, it suffices to track the lineage of tuples along the winning branch, and hence the provenance polynomial \( p_t \) reduces to a compact product of rule weights.
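Because the provenance collapses to a product of rule weights along the winning derivation, the gradient has a particularly simple form: if rule \( R \) occurs \( n_R \) times in the winning tree, then \( \partial v_t / \partial w_R = n_R \, v_t / w_R \). A minimal sketch, using a hypothetical winning tree:

```python
# Sketch: since v_t is a product of rule weights along the winning
# derivation tree, v_t = prod over rules R of w_R ** n_R, where n_R
# counts how often R appears in that tree; the partial derivative
# with respect to w_R is then n_R * v_t / w_R.
from collections import Counter

def value_and_gradient(winning_rules, weights):
    counts = Counter(winning_rules)     # provenance: multiset of rules
    v = 1.0
    for rule, n in counts.items():
        v *= weights[rule] ** n
    grad = {rule: n * v / weights[rule] for rule, n in counts.items()}
    return v, grad

# Hypothetical winning tree for pt(d, c1): R1 applied twice, R3 once.
v, grad = value_and_gradient(["R1", "R1", "R3"], {"R1": 0.9, "R3": 0.8})
print(v)            # approx. 0.648
print(grad["R1"])   # 2 * 0.648 / 0.9, approx. 1.44
```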
The learning problem for \textsc{Difflog} can then be seen as determining the rule weights \( w_R \) which cause the greatest agreement between the expected tuple values \( l_t = \mathbb{I}(t \in O) \) and the values \( v_t \) produced by the \textsc{Difflog} program. We cast this as an optimization problem for the L2 loss, \( f(w) = \sum_t (v_t - l_t)^2 \), and search for optimal weights using Newton’s method. To avoid
pathological behavior associated with multiplication by zero, we further constrain rule weights $w_R \in [0.01, 0.99]$. We stop the optimization process once the L2 loss drops below 0.01, or once the optimizer has performed 500 iterations, or when the magnitude of the gradient is zero.
Our ultimate goal is to learn discrete logic programs through continuous optimization. As a final step, we therefore reinterpret the produced DIFFLOG program as a Datalog program by only retaining those rules $R$ which (a) have weight $w_R > w_0$, for some cutoff value $w_0$, and (b) are useful, i.e., contribute to the provenance of some output tuple. The cutoff value $w_0$ is chosen so as to minimize the L2 error on the training dataset. It is a straightforward observation that if all rule weights satisfy $w_R > 0$, then $v_t > 0$ iff $t$ is emitted as an output tuple. The second condition further reduces the number of rules in the learned program; as we shall demonstrate in Section 4.1, this is important for improving generalization.
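The reinterpretation step can be sketched as a scan over candidate cutoffs. In the sketch below, the `training_loss` callable is a stand-in for re-running the Datalog program and computing the L2 loss, the weights are hypothetical, and the usefulness filter of condition (b) is omitted for brevity:

```python
# Sketch of discretization: try each observed weight as the cutoff w0,
# keep the rules strictly above it, and retain the cutoff whose
# surviving rule set minimizes the training loss.

def discretize(rule_weights, training_loss):
    # rule_weights: {rule: weight}; training_loss: callable on a rule set
    candidates = sorted(set(rule_weights.values()))
    best_rules, best_loss = set(), float("inf")
    for w0 in candidates:
        kept = {r for r, w in rule_weights.items() if w > w0}
        loss = training_loss(kept)
        if loss < best_loss:
            best_rules, best_loss = kept, loss
    return best_rules

# Hypothetical setting: the intended program is exactly {"R1", "R2"}.
weights = {"R1": 0.95, "R2": 0.90, "R5": 0.30}
loss = lambda rules: 0.0 if rules == {"R1", "R2"} else 1.0
print(discretize(weights, loss))   # the intended rule set {"R1", "R2"}
```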
3.4 Implementation details
We now describe some additional details involved in implementing DIFFLOG. These involve: (a) the computation of the initial soup of candidate rules $R$, and (b) optimizations to allow the DIFFLOG evaluation algorithm to reduce the time needed for each iteration of the optimizer.
The effectiveness of the DIFFLOG search depends on the expressiveness of the set of candidate rules $R$. In our experiments, we obtain this set by a process of augmentation, which we now describe. Our motivation is that the rules of Datalog programs tend to be structurally similar to each other, and that small syntactic modifications of one plausible candidate rule can yield another. We therefore start with a set of seed rules, and repeatedly replace relations, variables, and insert additional variables into the bodies of the candidate rules to produce new candidate rules. We keep all candidate rules which are thus obtained, and which are at an edit distance of at most 5 from the following “chain rules”:
$$P_1(x, y) := P_2(x, y),$$
$$P_1(x, y) := P_2(x, z), P_3(z, y), \quad \ldots$$
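One part of this augmentation, filling the relation slots of a chain-rule skeleton with every combination of known relations, can be sketched as follows. The relation vocabulary is hypothetical, the rule representation is a simplified stand-in, and variable-level edits are omitted:

```python
# Sketch of candidate generation: build a chain-rule skeleton with
# unfilled relation slots, then enumerate all ways of filling the
# slots from a fixed relation vocabulary.
from itertools import product

RELATIONS = ["addr", "assign", "load", "pt"]   # hypothetical vocabulary

def chain_seed(length):
    # skeleton head(x, y) :- _slot(x, t1), _slot(t1, t2), ...
    vs = ["x"] + [f"t{i}" for i in range(1, length)] + ["y"]
    return [("_head", ("x", "y"))] + [
        ("_slot", (vs[i], vs[i + 1])) for i in range(length)
    ]

def instantiate(rule, head_rel):
    # fill every relation slot with each combination of known relations
    body_len = len(rule) - 1
    for rels in product(RELATIONS, repeat=body_len):
        body = [(rels[i], rule[i + 1][1]) for i in range(body_len)]
        yield [(head_rel, rule[0][1])] + body

candidates = list(instantiate(chain_seed(2), "pt"))
print(len(candidates))   # 4^2 = 16 two-literal chain candidates
```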
Apart from the seminaive algorithm, the most significant performance optimization we have implemented in DIFFLOG is a restricted form of eager projection, commonly employed in relational databases. Given the set of input tuples $I$, the evaluator repeatedly instantiates rules $R$ to produce grounded constraints $g$. Starting with a single empty instantiation $V$ of the variables with weight 1, the evaluator iterates over the literals of the rule and unifies each variable valuation with each tuple in the current relation, to produce a set of extended variable valuations. Consider the rule $P(x, w) := P(x, y), P(y, z), P(z, w)$, encountered, for example, while learning the transitive closure of a graph. After processing the first two literals, $P(x, y)$ and $P(y, z)$, the set of valuations will have associations for the variables $x, y, z$. Notice, however, that $y$ does not appear in any subsequent literal of the rule. We therefore drop $y$ from each valuation currently under consideration. Each new valuation thus obtained may correspond to multiple previous valuations: its weight is therefore set to the maximum of the weights of all contributing valuations. This valuation collapse significantly improves the performance of the DIFFLOG evaluator.
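The valuation collapse described above can be sketched as a left-to-right join that keeps head variables and variables mentioned by later literals alive, and takes the maximum over valuations that collapse to the same projection. The rule representation and the tiny `pt` relation below are illustrative, not the paper's data structures:

```python
# Sketch of eager projection: join rule literals left to right; after
# each join, drop variables that neither the head nor any later literal
# mentions, keeping the maximum weight among collapsed valuations.

def join_with_projection(literals, head_vars, relations, values):
    # literals: [(rel_name, (var, var)), ...]; values: (rel, tuple) -> weight
    valuations = {(): 1.0}                   # empty valuation, weight 1
    for i, (rel, (a, b)) in enumerate(literals):
        live = set(head_vars) | {v for _, args in literals[i + 1:]
                                 for v in args}
        new = {}
        for env_key, w in valuations.items():
            env = dict(env_key)
            for (x, y) in relations[rel]:
                if env.get(a, x) != x or env.get(b, y) != y:
                    continue                 # unification failed
                ext = {**env, a: x, b: y}
                key = tuple(sorted((k, v) for k, v in ext.items()
                                   if k in live))
                wt = w * values[rel, (x, y)]
                new[key] = max(new.get(key, 0.0), wt)   # valuation collapse
        valuations = new
    return valuations

# pt(x, w) :- pt(x, y), pt(y, z), pt(z, w) over a tiny hypothetical graph
rel = {"pt": {("a", "b"), ("b", "c"), ("c", "d")}}
vals = {("pt", t): 0.9 for t in rel["pt"]}
body = [("pt", ("x", "y")), ("pt", ("y", "z")), ("pt", ("z", "w"))]
out = join_with_projection(body, ("x", "w"), rel, vals)
print(out)   # one valuation: x = a, w = d, weight approx. 0.729
```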
4 Experimental Evaluation
The goals of our experiments are: (a) to determine the accuracy of the DIFFLOG learning algorithm, and compare it to previously published tools in the literature, (b) to estimate how sensitive DIFFLOG is to noise in the training data, and whether it can still learn the correct program in the presence of varying amounts of noise, and (c) to measure the scalability of the training process, as a function of the number of candidate rules used for training. To this end, we test DIFFLOG on a suite of 10 benchmarks, 5 of which are from the domain of knowledge discovery and the remaining 5 from automatic program analysis. We list the essential characteristics of these benchmarks in Table 1.
4.1 Accuracy of learned programs
We present the test error achieved by DIFFLOG on our benchmarks in Table 2. We also compare it to the baseline algorithms, NEURALLP [47] and METAGOL [9]. All algorithms were run with a timeout of 6 hours on a server with 128 GB of memory and 3 GHz AMD Opteron 6220 processors running Linux.
Table 1: Benchmark characteristics. The first five benchmarks are from the domain of knowledge discovery while the remaining five are from program analysis.
<table>
<thead>
<tr>
<th>Benchmark</th>
<th># Input relations</th>
<th># Output relations</th>
<th># Expected rules</th>
<th># Candidate rules</th>
</tr>
</thead>
<tbody>
<tr><td>Path</td><td>1</td><td>1</td><td>2</td><td>16</td></tr>
<tr><td>Ancestor</td><td>2</td><td>2</td><td>4</td><td>38</td></tr>
<tr><td>Animals</td><td>9</td><td>4</td><td>4</td><td>76</td></tr>
<tr><td>Samegen</td><td>1</td><td>1</td><td>2</td><td>154</td></tr>
<tr><td>Knights Move</td><td>4</td><td>1</td><td>1</td><td>40</td></tr>
<tr><td>Andersen</td><td>4</td><td>1</td><td>4</td><td>27</td></tr>
<tr><td>Escape</td><td>4</td><td>3</td><td>6</td><td>26</td></tr>
<tr><td>Modref</td><td>7</td><td>5</td><td>10</td><td>30</td></tr>
<tr><td>1-Call Site</td><td>7</td><td>2</td><td>4</td><td>94</td></tr>
<tr><td>Polysite</td><td>3</td><td>3</td><td>3</td><td>289</td></tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>RMS Error on Test Set</th>
</tr>
</thead>
<tbody>
<tr>
<td>Path</td>
<td>0</td>
</tr>
<tr>
<td>Ancestor</td>
<td>0.45</td>
</tr>
<tr>
<td>Animals</td>
<td>0.53</td>
</tr>
<tr>
<td>Samegen</td>
<td>0.48</td>
</tr>
<tr>
<td>Knight Move</td>
<td>0</td>
</tr>
<tr>
<td>Andersen</td>
<td>0</td>
</tr>
<tr>
<td>Escape</td>
<td>0.39</td>
</tr>
<tr>
<td>Modref</td>
<td>0</td>
</tr>
<tr>
<td>1-Call Site</td>
<td>0.61</td>
</tr>
<tr>
<td>Polysite</td>
<td>0.31</td>
</tr>
</tbody>
</table>
Table 2: Test error achieved by DiffLog and the baseline learning algorithms on our benchmarks. N/A denotes “not applicable”. Timeout is 6 hours.
Notice that DiffLog is able to learn the correct program for all but two of our benchmarks. In contrast, because of the constrained form of the rules mandated by NeuralLP (relations of arity two, and only non-recursive chain rules), it is not applicable to many of our benchmarks. On the other hand, the combinatorial algorithm employed by Metagol frequently times out.
The results in Table 2 were obtained after reinterpreting the rules of the learned DiffLog program as a traditional Datalog program, as discussed in Section 3.3. In Figure 3, we explore this process in more detail. The output of the DiffLog learning algorithm may be viewed as a ranked list of rules, and we plot the training and test RMS error achieved by each prefix of this ranked list. First, observe that the optimum training and test errors are achieved by approximately the same prefixes of the ranked list. Second, observe the long stretch of zero training error: in this region, for larger values of $k$, rules that did not produce any output tuples in the training dataset (and were consequently regarded by the optimization algorithm as harmless) start producing output tuples in the test set, thereby resulting in steadily increasing test error. Third, observe that, because we have not pruned the list to only include “useful” rules, the test error does not drop to 0 at any point during the run. These observations demonstrate the importance of our reinterpretation heuristic from Section 3.3: we only keep those rules whose weights are above the given threshold and which are useful in producing output tuples in the training dataset. Like Occam’s razor, this acts as a form of regularization and prevents overfitting.
4.2 Sensitivity to training noise
Next, we measured the ability of DiffLog to learn programs in the presence of noise. We flipped the truth values of a randomly selected subset of the output tuples in the training data, and measured the final training error and resulting test error of the learned DiffLog program.
We present these results in Table 3. Observe that, in the presence of 1% noise, even though the training error is non-zero, the optimizer still learns the correct program, and achieves zero test error. In the presence of larger amounts of noise, the optimizer finds it increasingly difficult to fit the training data. However, in all these cases, the program it learns has low error on the uncorrupted test set. We therefore conclude that DiffLog is able to generalize even from moderately noisy data.
4.3 Scalability of training process
Finally, we studied the scalability of the DiffLog learning process. We measured two quantities: first, the time taken to solve the DiffLog program and compute gradients, i.e., the time per iteration of numerical optimization, and second, the number of iterations needed to converge to the final solution. We measured both quantities as a function of the number of candidate rules in $R$. We present these plots in Figures 4 and 5, respectively. Observe that both quantities increase approximately linearly with the size of $R$, suggesting that we can successfully learn complex programs from a large number of candidate rules.
5 Conclusion
We presented an approach and system called DiffLog to learn Datalog programs from input-output data. Inspired by the success of continuous reasoning in machine learning, DiffLog extends Datalog semantics with numerical weights on individual rules, which enables us to apply numeric optimization techniques such as gradient descent and Newton’s method to synthesize Datalog programs. Our approach leverages provenance information that is naturally obtained from Datalog evaluation in order to efficiently forward-propagate the gradient through the rules to learn the weights. We demonstrated that DiffLog is capable of learning complex Datalog programs with recursive rules and relations of arbitrary arity, even with small amounts of noise in the training data. It thereby targets a richer class of logic programs than state-of-the-art systems, including those based on discrete reasoning as well as those based on continuous reasoning.
In future work, we plan to extend DiffLog to address useful Datalog extensions such as aggregation and stratified negation. Our formulation of the problem as selecting rules from a soup of candidates facilitates supporting such extensions. Another important direction concerns handling black-box predicates, including so-called invented predicates that are constructed using Datalog rules themselves, as well as foreign functions that are constructed outside the Datalog evaluation sub-system.
Introducing programmability and automation in the synthesis of virtual firewall rules
Availability:
This version is available at: 11583/2844332 since: 2020-10-19T08:32:20Z
Publisher:
IEEE
Published
DOI:10.1109/NetSoft48620.2020.9165434
Terms of use:
This article is made available under terms and conditions as specified in the corresponding bibliographic description in the repository
Publisher copyright
IEEE postprint/Author's Accepted Manuscript
©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Introducing programmability and automation in the synthesis of virtual firewall rules
Daniele Bringhenti, Guido Marchetto, Riccardo Sisto, Fulvio Valenza, Jalolliddin Yusupov
Dip. Automatica e Informatica, Politecnico di Torino, Torino, Italy, Emails: {first.last}@polito.it
Abstract—The rise of new forms of cyber-threats is largely due to the extensive use of virtualization paradigms and the increasing adoption of automation in the software life-cycle. To address these challenges, we propose an innovative framework that leverages the intrinsic programmability of cloud and software-defined infrastructures to improve the effectiveness and efficiency of reaction mechanisms. In this paper, we present our contributions with a demonstrative use case in the context of Kubernetes. By means of this framework, developers of cybersecurity appliances will no longer have to care about how to react to events, or struggle to define every possible security task at design time. In addition, the automatic firewall ruleset generation provided by our framework mostly avoids human intervention, hence decreasing both the time needed to carry out these tasks and the likelihood of errors. We focus our discussion on the technical challenges, namely the definition of common actions at the policy level and their translation into configurations for a heterogeneous set of security functions, by means of a use case.
Index Terms—network functions virtualization, firewall, automatic programmability, cloud networking, formal methods
I. INTRODUCTION
The networking field is currently undergoing a deep revolution based on virtualization. In the decade that has just ended, innovative paradigms shook the traditional vision of networks as a mesh of heterogeneous functions providing different services. Networking first embraced virtualization with the birth of Software-Defined Networking (SDN) [1], [2]. The main pillars of this technology are the decoupling of the data plane from the control plane, the centralization of all control-plane functions in a software module referred to as the SDN controller, and the abstraction between the specificity of user applications and the generality of controller interfaces. More recently, Network Functions Virtualization (NFV) [3], [4] introduced the possibility of implementing network functions as software programs and running them as traditional virtual machines or containerized applications, supervised by a software MANagement and Orchestration (MANO) module [5]. Physical middleboxes have thus been progressively replaced by general-purpose servers on which the programs implementing the network functions can run.
Among the benefits these paradigms bring, automatic (re)programmability of network functions is now becoming feasible, overcoming the traditional troubles of manual function configuration [6]. On one side, a fundamental novelty of SDN has been the reactive generation and deployment of forwarding rules by the controller onto the data-plane switches. Whenever a packet that does not match any switch rule is received, it is forwarded to the controller, which takes the best decision according to its internal logic and consequently generates rules for all the network switches that will have to manage packets with the same characteristics in the future. On the other side, if the network functions are implemented in the NFV fashion, MANO can automatically manage their life-cycle and deployment, so as to optimize either resource consumption on the underlying physical infrastructure or availability of the provided services.
Although many organizations are migrating virtual machine (VM)-based applications to containers, virtualization is still present in data centers and public clouds. We are also seeing new ways of integrating virtualization with containers and Kubernetes (K8s) to provide innovative solutions to new problems. In other words, virtual machines are also becoming part of the cloud-native architecture; this concept is called container-native virtualization. Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications, is an example of the fulfillment of the SDN and NFV paradigms. It significantly simplifies the work of network and service administrators.
However, in an environment like Kubernetes, where multiple software processes run in parallel, correct global management becomes more difficult than it traditionally was with single hardware devices. This increase in complexity has, unfortunately, contributed to raising the number of cyberattacks, which have become more varied by exploiting new kinds of breaches. In particular, misconfiguration of network functions has become more critical, because more variable factors must be considered when enforcing a correct security defence against both external and internal attacks. This statement is confirmed by recent surveys, such as Verizon's most recent study [7], in which misconfigurations are identified as the third most critical threat in cloud environments, one that can lead to catastrophic breaches.
In light of these observations, the challenge we face is to effectively exploit the benefits provided by the virtual networking paradigms while minimizing the impact of the drawbacks illustrated above. With this aim, we designed a framework based on the innovative methodology presented in [8], which relies on Maximum Satisfiability Modulo Theories (MaxSMT), and we integrated it in the context of Kubernetes. The proposed approach automatically configures virtual firewalls, where a considerable number of configuration errors are traditionally made. Moreover, we describe how this methodology is introduced into the framework architecture of ASTRID (AddreSsing ThReats for virtualIseD services), an EU H2020 project [9].
The remainder of this paper is structured as follows. Section II describes the most closely related works, illustrating the main differences with respect to the methodology proposed in this paper. Section III first presents the general architecture of the ASTRID framework; the focus then shifts to the methodology for automatic firewall configuration inside the Security Controller, the central component of the ASTRID framework, which enforces security in cloud-based networks. Section IV provides additional details about the implementation, along with a validation based on the framework's application in a realistic scenario. Finally, Section V briefly concludes the paper and describes the planned future works.
II. RELATED WORK
The focus of this paper is the automatic configuration of firewalls in the Kubernetes framework. Therefore, we briefly introduce the main characteristics of Kubernetes and then report the main works related to automatic firewall configuration.
As shown in Fig. 1, a Kubernetes cluster is composed of multiple nodes, which can be virtual or physical. A Pod is the minimal management unit and can accommodate one or more containers. Each Pod is protected by a packet filter (i.e., FW in Fig. 1). Pods are assigned network addresses and are allocated to nodes. Containers inside a Pod share resources, such as volumes to which they can write and from which they can read data. Clients contact the cluster through another firewall, which distributes requests to nodes according to load-balancing rules. Each node has a proxy installed; the proxy receives requests from this firewall and forwards them to Pods. If a Pod is replicated, the proxy distributes the load among the replicas. The kubelet is a component that manages Pods, containers, images and other elements in the node, and forwards container-monitoring data to the main node, which acts when necessary. In this framework, one of the main key points is the correct and consistent configuration of this graph of firewalls that protect access to each container.
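The per-Pod packet filter with a deny-all default can be sketched as a minimal data structure. This is purely an illustration in Python; the class and field names below are our own and are not part of the Kubernetes or ASTRID APIs:

```python
from dataclasses import dataclass, field

@dataclass
class FirewallRule:
    action: str        # "ALLOW" or "DENY"
    source: str        # source address
    destination: str   # destination address

@dataclass
class Firewall:
    # Deny-all semantics: traffic is dropped unless an explicit rule allows it.
    default_action: str = "DENY"
    rules: list = field(default_factory=list)

    def permits(self, src: str, dst: str) -> bool:
        """First matching rule wins; otherwise fall back to the default."""
        for r in self.rules:
            if r.source == src and r.destination == dst:
                return r.action == "ALLOW"
        return self.default_action == "ALLOW"

@dataclass
class Pod:
    name: str
    ip: str
    firewall: Firewall = field(default_factory=Firewall)
```

With these definitions, a freshly created Pod rejects all traffic until a rule is installed, mirroring the deny-all preconfiguration discussed later in the use case.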
In the literature, automatic configuration of firewalls is a challenge on which research has been only partially carried out. Most works describe either techniques that can only be applied to traditional networks (e.g., with hardware firewalls), or mathematical models that lack a corresponding implementation proving their feasibility and effectiveness. Moreover, only a limited subset of them enrich the computed configurations with optimality or formal correctness assurance [10].
The three papers that gave birth to this research trend are [11], [12] and [13]. In particular, Firmato [11] represents a vital milestone, because it is the first approach based on policy refinement that is able to automatically synthesize a firewall configuration, exploiting an entity-relationship model for the representation of the security policies and the network topology. Nevertheless, its limitations are evident: the most critical is that it was validated on a single centralized firewall instead of a distributed security architecture. The other two works ([12], [13]) added, as their main novelty, the possibility of configuring a distributed firewall. However, all three works exclusively target traditional networks, and offer neither optimality nor formal verification.
Formal mathematical models were instead presented in [14] and [15], where formal methodologies are used to automatically compute firewall configurations. However, in both cases these techniques work only in specific settings, not related to virtualized networks: [14] follows the syntax of IPChains and Cisco's PIX, whereas the technique of [15] has been validated only for SCADA-firewall configuration. Besides, optimization is overlooked in both works.
A recent work which, with respect to all the others, specifically targets NFV-based networks is [16], [17]. The proposed approach is a first step toward security-policy-aware NFV management, introducing a specific module, called the Security Awareness Manager (SAM), into frameworks that provide NFV MANO, such as OpenMANO. This module performs a complete refinement of high-level, user-specified network security policies into the low-level configuration of virtual network functions, using optimization models defined for each function type. There are limitations in this work, though: the achieved results are not formally verified, and little information is provided about how firewall policies are managed, since the paper provides a comprehensive approach for multiple security function types. Nevertheless, it shows how, despite its drawbacks, virtualization is characterized by features that can be positively and efficiently exploited for the automatic programmability of next-generation computer networks.
Finally, the proposed work integrates the automatic configuration approach, presented in [8], into Kubernetes. Specifically, the solution in [8] adopts a formal approach based on the MaxSMT problem, which provides formal assurance about the correctness of the solution. More details are provided in the next sections.
III. APPROACH
This section presents the design of the ASTRID framework and a generic workflow illustrating its main functionalities. Then, our proposed approach is presented as the Security Controller component that resides in the ASTRID framework.
A. ASTRID Framework
The term orchestration is commonly used in the IT field. In the NFV and microservice world there is service orchestration for services, and in the cloud there is cloud orchestration for describing cloud resources. With the development and maturity of container technology, more and more enterprises and individuals choose to containerize traditional applications or to directly develop container-based cloud-native applications, and then run them on a container platform. Faced with a complex container operating environment, the need for container orchestration has risen. In general, container orchestration is responsible for the lifecycle scheduling of containers, and it improves container usage by managing container clusters. There are currently three major players: Kubernetes, Docker Swarm, and Apache Mesos. They belong to the category of DevOps infrastructure management tools and are called "container orchestration engines".
But when developers enter the world of orchestration, one thing that needs special attention is security. Various blogs, videos, books, and tutorials teach developers how to use these solutions, but only a few mention the need to add security controls to protect applications in the cluster. Moreover, if the underlying cloud infrastructure is unreliable (or configured in a vulnerable manner), there is no way to guarantee the security of a Kubernetes cluster built on this foundation.
The main goal of the AddreSsing ThReats for virtualIseD services (ASTRID) project is to address these technological gaps in the scope of cloud infrastructures. The project proposes a novel cyber-security framework to provide situational awareness for cloud applications and NFV services. The overall workflow of the framework is presented in Fig. 2. According to this workflow, the ASTRID framework allows software and service developers to provide a description of the service request, which is enriched with security policies by the security provider entity. The Security Orchestrator component of the framework is in charge of the reaction, creation, and delivery of end-to-end services.
The scope and contribution of this work are associated with the Initialization and Reaction phases shown in Fig. 2. The Security Controller is in charge of this part of the workflow. It is one of the most valuable parts of the run-time subsystem, conceived to automate, as much as possible, the behaviour of the security functions in the control plane. In the next subsection, we describe this component in detail.
B. Security Controller
The Security Controller has been developed on the basis of the methodology presented in [8]. It incorporates programmability and automation in the synthesis of virtual firewall rules from a user-provided security policy. In this respect, the Security Controller works in close coordination with the service orchestrator, which is in charge of providing a description of the service graph as well as the infrastructure information. The infrastructure information includes the actual number of launched virtual network functions and the parameters assigned during the enforcement process, such as IP addresses and ports. After receiving the required data from the orchestrator, the controller performs an automatic translation from the high-level policy to low-level configuration parameters for firewall network functions. As part of this analysis, the automatically produced configuration is formally proven to meet the security policies. The Security Controller formulates the automatic configuration of firewall rule tables as a Maximum Satisfiability Modulo Theories (MaxSMT) problem. This is a constraint optimization problem that we use to provide two main features: i) high assurance in the correctness of the computed solutions, thanks to the intrinsic formal correctness-by-construction paradigm; ii) optimality of the solution, by minimizing the number of automatically generated firewall rules, with the purpose of improving the filtering operations.
To this day, optimization problems are typically modeled with Integer Programming (IP) languages. However, most of these problems belong to NP-hard classes, and large-scale integer problems are difficult to solve. Moreover, no variation of the IP formulation is able to model the problem of automatic firewall configuration while also verifying end-to-end reachability, because these approaches have less expressive power than Constraint Satisfaction Problem (CSP) representations. An instance of a CSP encoding, MaxSMT in our case, is defined by a set of variables, a set of possible values (or domains) for each variable, and a set of soft and hard constraints, each constraint involving one or more variables. A MaxSMT solver determines, for a given instance, whether it is possible to assign to each variable a value from its respective domain so as to satisfy all hard constraints and an optimal number of soft constraints simultaneously.
The Security Controller translates the input service graph into a MaxSMT instance by means of a set of First-Order Logic formulas. In a nutshell, these formulas are eventually converted into boolean variables in Conjunctive Normal Form. In addition to the topological definition of the service graph, each network function of the service graph is translated into an abstract model according to the guidelines given by Verigraph [19]. This allows us to provide a higher level of assurance that the automatically generated configuration parameters of the firewall will satisfy the security policies in the presence of complex network functions. The level of abstraction of these models covers the entire forwarding behavior of the network functions whose configuration parameters are already defined. The firewall network function, instead, is modeled by introducing soft constraints over variables, which the MaxSMT solver then decides whether to satisfy. These variables represent the IP addresses and ports to be auto-configured in order to satisfy the end-to-end policies. Initially, these variables are set to false, which means that a firewall does not contain any rule. If the policy requires the firewall to block some traffic, the solver must falsify a soft constraint in favor of satisfying the policy requirement. The policy requirement itself is modeled as a hard constraint, meaning that it must always be satisfied. In this way, the solver tries to minimize the number of falsified soft constraints while satisfying all hard constraints. This is the optimization problem we set out to solve.
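The hard/soft-constraint scheme just described can be illustrated with a toy, brute-force MaxSMT procedure. This is a deliberately naive sketch for intuition only: real solvers (e.g., Z3's optimization engine) handle such instances efficiently, and the rule encoding below is our own simplification, not the controller's actual formulas:

```python
from itertools import product

def max_smt(n_vars, hard, soft):
    """Brute-force MaxSMT over boolean variables: among assignments that
    satisfy every hard constraint, return one satisfying the most soft
    constraints (None if the hard constraints are unsatisfiable)."""
    best, best_score = None, -1
    for bits in product([False, True], repeat=n_vars):
        if not all(h(bits) for h in hard):
            continue                      # hard constraints must always hold
        score = sum(s(bits) for s in soft)
        if score > best_score:
            best, best_score = bits, score
    return best

# Variables: r[i] == True means "install candidate firewall rule i".
# Hard constraint (the policy): rule 0 or rule 1 must be installed so that
# the required flow becomes reachable.
# Soft constraints: prefer each rule absent (variables initially false),
# so the solver minimizes the number of installed rules.
hard = [lambda r: r[0] or r[1]]
soft = [lambda r, i=i: not r[i] for i in range(3)]
solution = max_smt(3, hard, soft)
# The result installs exactly one rule, falsifying a single soft constraint.
```

The search keeps every assignment that satisfies the policy and, among those, picks the one with the fewest installed rules, which is exactly the optimality criterion described above.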
In order to represent the reachability policies by means of hard constraints, we introduce the concept of packet flows between endpoints. The first constraint we assert is that a network function model defined in the service graph must forward a packet flow if it receives one. This constraint holds under the functional behavior of the network device; for instance, it is true if a firewall network function does not contain any rule that drops the packets. The second constraint states that a packet flow sent by a source node must be received by the destination node. Other constraints include the forwarding-path definitions and the static configuration parameters of network functions.
In summary, an IP formulation of the same problem would be limited to a set of constraints over binary, integer, or real variables. The approach presented in this paper, instead, allows us to model the problem using very expressive constraints. These constraints include the configuration parameters of network functions, the forwarding behavior of the service graph, and complex security policies, in addition to the automatic-configuration constraints of the problem. Therefore, existing IP algorithms are not comparable to our algorithm for this class of problems. In the next section, we demonstrate our approach by means of a representative scenario.
IV. USE CASE SCENARIO
This section presents our framework in greater detail with a practical use case and motivates our design decisions. For the sake of simplicity, we focus our attention on a specific component of the ASTRID framework, the Security Controller, and emphasize that the interaction with other components is performed by means of a REST API. We expose a number of resource endpoints to the Security Orchestrator, which uses them to deliver the service graph and infrastructure information and to retrieve the automatically generated firewall rules. We underline that this methodology can be extended to more general scenarios than the ASTRID framework: the Security Controller is a standalone web service application, which makes it easy to incorporate into existing cloud platforms and orchestrators.
We consider the scenario where an administrator predefines the logical service graph presented in Fig. 3a and feeds it to the dashboard of the ASTRID framework. This service graph represents a realistic scenario where the nginx web server is made public to the Internet and functions as a reverse proxy that fetches dynamic data from multiple instances of nodejs and apache servers. Both servers can in turn acquire data from a mysql database. As we can see from the figure, the reachability policies required by the use case are rather obvious (i.e., highlighted with arrows). The isolation property required by the service graph, instead, is not evident: all the communications not highlighted with arrows must be isolated. Considering that each service in the graph is associated with a firewall, the firewalls are preconfigured with deny-all rules in order to satisfy this policy. This ensures that all interactions within the service graph are isolated, except those predefined by the user (i.e., the arrows).
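The deny-all interpretation of the service graph can be made concrete with a few lines of Python. The set of arrows follows the use case of Fig. 3a; the `policy` function and the variable names are our own illustration:

```python
# Permitted flows (the arrows of the service graph); every other ordered
# pair of services must be isolated under deny-all semantics.
allowed = {
    ("internet", "nginx"),
    ("nginx", "nodejs"), ("nginx", "apache"),
    ("nodejs", "mysql"), ("apache", "mysql"),
}
services = {"internet", "nginx", "nodejs", "apache", "mysql"}

def policy(src: str, dst: str) -> bool:
    """Deny-all semantics: a flow is permitted only if explicitly listed."""
    return (src, dst) in allowed

# Enumerate every pair that the firewalls must keep isolated.
isolated = [(s, d) for s in services for d in services
            if s != d and not policy(s, d)]
# e.g. ("internet", "mysql") is isolated: the database is unreachable
# from the Internet even though it is reachable from apache and nodejs.
```

Even for this small graph, the isolated set is much larger than the allowed set, which hints at why writing the deny rules by hand is error-prone.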
The Service Orchestrator of the ASTRID framework is in charge of deploying the service graph onto the infrastructure and generating the enriched service graph shown in Fig. 3b. During this enforcement phase, every service is assigned the IP address and port where it can be reached. It is important to highlight that multiple instances of a service are deployed in separate Pods, each with its own IP address. In this scenario,
the user specified two instances of the nodejs server to handle the load. To illustrate the complexity introduced by this simple use case, Fig. 3b includes all the links connecting each service in the infrastructure. Taking into account the deny-all rules of each service's firewall, we can be sure that there is no reachability between the Pods in this phase, although we have specified, by means of the arrows in the figure, the user policy that must be satisfied. As an example, the apache server needs to be configured to allow traffic from itself to the mysql database and to allow communication from nginx, while remaining isolated from each instance of the nodejs servers.
Without the Security Controller, an administrator of the infrastructure would have to configure each firewall manually, a process that is error-prone and time-consuming. This scenario motivates the use of the Security Controller presented in this paper, which automatically generates the firewall configuration of each service and provides formal assurance that the network policy defined by the user is satisfied. To obtain the low-level configuration of each firewall component, the Security Controller accepts as input the infrastructure information and the logical service graph, as described in Section III. The infrastructure information contains the IP address and port of each service, as shown in Fig. 3b. This information is required to define the firewall rules, which allow or block specific packet flows involving specific Pods. In the next step, the Security Controller automatically generates as output the low-level configuration of each firewall component. As an example, we present the partial output format and the actual configuration parameters generated by the Security Controller in Listing 1. In this prototype evaluation experiment, we used a machine with a 3.40 GHz Intel i7-6700 CPU and 32 GB of RAM. The average time needed for the overall procedure is less than a second. We emphasize that, for most service requests, the time required to schedule VMs is several orders of magnitude larger than this computation time.
Listing 1 shows the configuration parameters generated for the firewall component of the mysql service. It includes all the neighbors of the firewall in the infrastructure network and the firewall rule entries. According to this output, the firewall must be configured with 3 entries.
```
Listing 1: Automatic Configuration Output for mysql
1 <node name="172.20.1.34" functional_type="FIREWALL">
2 <neighbour name="172.20.1.14"/>
3 <neighbour name="172.20.1.13"/>
4 <neighbour name="172.20.1.31"/>
5 <neighbour name="172.20.1.32"/>
6 <neighbour name="172.20.1.33"/>
7 <configuration name="mysql" description="172.20.1.14">
8 <firewall defaultAction="DENY">
9 <elements>
10 <action>ALLOW</action>
11 <source>172.20.1.13</source>
12 <destination>172.20.1.14</destination>
13 <protocol>ANY</protocol>
14 <src_port>ANY</src_port>
15 <dst_port>ANY</dst_port>
16 </elements>
17 <elements>
18 <action>ALLOW</action>
19 <source>172.20.1.11</source>
20 <destination>172.20.1.14</destination>
21 <protocol>ANY</protocol>
22 <src_port>ANY</src_port>
23 <dst_port>ANY</dst_port>
24 </elements>
25 <elements>
26 <action>ALLOW</action>
27 <source>172.20.1.12</source>
28 <destination>172.20.1.14</destination>
29 <protocol>ANY</protocol>
30 <src_port>ANY</src_port>
31 <dst_port>ANY</dst_port>
32 </elements>
33 </configuration>
34 </node>
```
The first rule states that packets arriving from the Pod with IP address 172.20.1.13 must be allowed. The rest
of the rules are associated with the two instances of the `nodejs` server in the service graph. Due to the default action set by the firewall in line 8 of Listing 1, all other packets arriving from the network are dropped. For instance, intruders from the Internet are not able to access the `mysql` database under these rules. This ensures the satisfiability of the initial service-graph policy defined by the user. Eventually, the output file generated by the Security Controller is sent back to the Context Broker, which is in charge of translating the low-level configuration of each firewall into the vendor-specific format of that firewall.
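A consumer of this output, such as the Context Broker, can parse the Listing 1 format with standard XML tooling before translating it into a vendor-specific syntax. A minimal sketch using Python's `xml.etree` (the document is shortened to a single rule; the variable names are ours):

```python
import xml.etree.ElementTree as ET

# A shortened configuration in the Listing 1 format: one firewall node
# with a deny-all default and a single ALLOW rule.
doc = """
<node name="172.20.1.34" functional_type="FIREWALL">
  <configuration name="mysql" description="172.20.1.14">
    <firewall defaultAction="DENY">
      <elements>
        <action>ALLOW</action>
        <source>172.20.1.13</source>
        <destination>172.20.1.14</destination>
      </elements>
    </firewall>
  </configuration>
</node>
"""

root = ET.fromstring(doc)
fw = root.find("./configuration/firewall")
default = fw.get("defaultAction")                    # "DENY"
# Each <elements> block is one rule entry; extract (action, src, dst).
rules = [(e.findtext("action"), e.findtext("source"),
          e.findtext("destination")) for e in fw.findall("elements")]
```

From here, each `(action, source, destination)` tuple can be mapped onto whatever rule syntax the target firewall implementation expects.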
An important feature of the Security Controller is the possibility of handling firewalls without any configuration, as in the use case, or with a partial configuration, giving the tool itself the task of providing the missing configuration as output. The tool generates the configuration with the objective of satisfying all the requested policies while minimizing the number of generated rules. In the case of a partial configuration, a firewall may include static rule entries that will not be changed in the output. This is useful when the service graph is updated, for example when a Pod is terminated or a new Pod is created to handle additional load on a service. In this scenario, in order not to recompute the configuration parameters of all the other services, their rules can be provided statically, meaning that they are left unchanged. This process not only generates a set of configuration parameters but also provides an optimal set of rules satisfying the user policy: optimality is achieved by minimizing the number of rules inside each firewall, improving the performance of the virtual network functions.
V. CONCLUSION AND FUTURE WORKS
In this paper, we illustrated the benefits that the introduction of automatic programmability brings to the synthesis of firewall rule sets in virtual networks, in the context of NFV and cloud infrastructures, with special emphasis on Kubernetes. In particular, we described the role of the presented automated methodology in the ASTRID framework architecture, with an emphasis on the contributions provided by the Security Controller. We formulated the problem of automatic firewall configuration as a MaxSMT instance and solved it to provide reachability assurance between endpoints. As future work, we are planning to introduce programmability for other kinds of network security functions, such as intrusion detection systems and security devices for channel protection (e.g., VPN gateways). Moreover, we plan to provide automatic reconfiguration in the presence of minor changes to the initial service graph without solving the problem from scratch. As the initial results are promising on smaller instances, we plan to evaluate the model in larger-scale scenarios.
ACKNOWLEDGMENT
This work has been partially supported by the EU H2020 Projects ASTRID (Grant Agreement no. 786922) and CyberSec4Europe (Grant Agreement no. 830929).
REFERENCES
Carnegie Mellon University
Research Showcase @ CMU
Software Engineering Institute
11-2008
CMMI High Maturity Measurement and Analysis Workshop Report: March 2008
Robert W. Stoddard
Carnegie Mellon University, rws2@sei.cmu.edu
Dennis Goldenson
Carnegie Mellon University, dg@sei.cmu.edu
David Zubrow
Carnegie Mellon University, dz@cmu.edu
Erin A. Harper
Carnegie Mellon University, eah@sei.cmu.edu
Follow this and additional works at: http://repository.cmu.edu/sei
This Technical Report is brought to you for free and open access by Research Showcase @ CMU. It has been accepted for inclusion in Software Engineering Institute by an authorized administrator of Research Showcase @ CMU. For more information, please contact research-showcase@andrew.cmu.edu.
Table of Contents

Acknowledgments
Abstract
1 Introduction
  1.1 Overcoming Barriers to High Maturity
  1.2 High Maturity Practices Workshop Series
2 High Maturity Workshop Series Kickoff
  2.1 Workshop Participants and Goals
  2.2 Workshop Structure
  2.3 Summary of Presentations
3 Future Workshops
References
Acknowledgments
Thanks are due in particular to the individuals from the leading high maturity organizations who participated in the first CMMI High Maturity Measurement and Analysis Workshop: Steve Austin, Dan Bennett, Eileen Bozzolo, Rushby Craig, Brooke Eiche, Rick Hefner, Gregory Kaszuba, Neal Mackertich, Kent McClurg, John Miller, Diane Mizukami-Williams, Alice Parry, Lynn Penn, Roz Singh, and Rick Welch.
Special thanks also go to Will Hayes, Mike Konrad, Larry McCarthy, and Rusty Young.
Abstract
Organizations are increasingly looking for guidance on what it takes to implement Capability Maturity Model® Integration (CMMI®) high maturity practices and how to sustain their momentum for improvement. As high maturity organizations work to improve their use of measurement and analysis, they often look to examples of successful implementations for guidance. In response to the need for clarification and guidance on implementing measurement and analysis in the context of high maturity processes, members of the SEI’s Software Engineering Measurement and Analysis (SEMA) initiative organized a workshop at the 2008 SEPG North America conference to bring leaders in the field together at a forum on the topic. Other workshops will be held as part of an ongoing series to allow high maturity organizations to share best practices and case studies.
1 Introduction
More and more organizations are striving for and reaching high maturity status, yet there is still an insufficient shared understanding of which measurement and analysis related practices are appropriate for high maturity organizations. Although Capability Maturity Model® Integration (CMMI®) provides high-level guidance, some organizations struggle to find an effective path to high maturity, and those that have reached it must persist in evolving their efforts in the spirit of continuous improvement. As a result, organizations are increasingly looking for guidance on what it takes to reach CMMI high maturity status and how to keep improving once they get there.
1.1 Overcoming Barriers to High Maturity
The Software Engineering Measurement and Analysis (SEMA) initiative at the Software Engineering Institute (SEI) works with organizations to develop, evolve, and evaluate measurement and analysis practices. SEMA researchers have identified several barriers to CMMI high maturity during their work in the field. This section describes those barriers and presents solutions.
Examples and Case Studies Are Needed
SEMA launched two new measurement training offerings in 2007 and 2008: Improving Process Performance Using Six Sigma (IPPSS) and Designing Products and Processes Using Six Sigma (DPPSS). The intent of the courses is to expand the use of statistical modeling, including various forms of regression, simulation, and probabilistic modeling. These courses teach the use of logistic and dummy variable regression in addition to traditional simple linear and multiple regression so practitioners can use modeling techniques that support both continuous and discrete data types.
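The distinction between continuous and discrete predictors can be made concrete with a small dummy-variable regression. The sketch below is illustrative only — the data, variable names, and scenario are invented and do not come from the courses: a 0/1 indicator encodes a discrete factor alongside a continuous one in an ordinary least-squares fit.

```python
# Illustrative sketch (invented data): dummy-variable regression.
# Model defect density as a function of review effort (continuous)
# plus a 0/1 dummy encoding a discrete factor (e.g., co-location).
import numpy as np

hours     = np.array([5, 10, 15, 20, 5, 10, 15, 20], dtype=float)
colocated = np.array([0,  0,  0,  0, 1,  1,  1,  1], dtype=float)
defects   = np.array([9.0, 7.1, 5.2, 3.0, 7.0, 5.1, 3.1, 1.0])

# Design matrix: intercept, continuous predictor, dummy predictor
X = np.column_stack([np.ones_like(hours), hours, colocated])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
b0, b_hours, b_coloc = coef

print(f"intercept={b0:.2f}, per-hour effect={b_hours:.3f}, "
      f"co-location effect={b_coloc:.2f}")
```

The fitted dummy coefficient is the estimated shift between the two groups at any fixed level of the continuous predictor — the feature that lets one regression cover both data types.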
Students attending the courses sought a wider array of industry examples showing how different kinds of process performance models could lead to better performance outcomes. If a wider set of examples is collected, future course updates could include example modules that are more closely related to the domain and frame of reference of the students.
Also in recent years, a number of CMMI high maturity consultants, Lead Appraisers, and sponsors have questioned the business value of statistically based process performance models. To accelerate the community sharing of benefit information to address these concerns, SEMA decided to collect a compelling set of benefit experiences and example business cases for the development and use of process performance models.
Misconceptions About Process Performance Models (PPMs) Need to Be Dispelled
During client work by the SEMA team in 2005-2008, it became clear that a misconception existed about process performance models. Most clients believed that the chief barrier to modeling was the need for advanced knowledge of statistics. However, during the past two years, students in the
---
1 Additional information about the SEMA initiative is available at http://www.sei.cmu.edu/sema/.
2 See http://www.sei.cmu.edu/products/courses/p49b.html and http://www.sei.cmu.edu/products/courses/p56b.html for additional information about these courses.
IPPSS and DPPSS courses have reaffirmed that the domain knowledge used to identify the proper set of factors (y’s and x’s) remains the greatest challenge, not statistical knowledge. The courses include job aids that minimize the memorization of statistics and statistical theory. Using these job aids, students are almost unanimous in the assessment that domain knowledge remains the greatest challenge. Students still gain a sufficient understanding of statistics to recognize and avoid common misuses and know when to ask for help from coaches. They are also asked to seek out mentors in the workplace who possess the expertise to solve real-time problems and drive compelling business improvements. This coaching and mentoring structure is taken from the Six Sigma realm in which people in a hierarchy of “belts” coach one another. Coaching is the single aspect that has best enabled the successful use of Six Sigma over the past 20 years.
Adoption of Process Performance Models Needs to Be Accelerated
SEMA researchers realized community adoption of process performance models needed to be accelerated to meet immediate business needs and show return on investment for CMMI business improvement. Thus, instead of waiting the projected five to seven years for statistically based process performance models to become more widely adopted, SEMA aims to help the community achieve significant adoption in the next two to three years. An accelerated schedule will also be in keeping with planned CMMI model changes and the rollout of additional CMMI constellations. Of special note, discrete event simulation, an effective modeling approach, has already been widely adopted by the services community to predict things like cycle time, workflow bottlenecks, wait times, and queue lengths.3
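As a hedged illustration of the kind of discrete event simulation mentioned above, the sketch below simulates a single first-in-first-out work queue and estimates the average queue wait per job. The arrival and service rates are invented numbers for the example, not figures from the report.

```python
# Hypothetical sketch: a one-server FIFO queue simulation of the kind
# used to predict queue waits and cycle times. Rates are invented.
import random

def simulate_queue(n_jobs=10_000, arrival_rate=0.8, service_rate=1.0, seed=42):
    rng = random.Random(seed)
    clock = 0.0          # arrival time of the current job
    server_free = 0.0    # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)   # next arrival
        start = max(clock, server_free)          # FIFO service start
        total_wait += start - clock              # time spent queueing
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_jobs

avg_wait = simulate_queue()
print(f"average queue wait: {avg_wait:.2f} time units")
```

With these rates the server is busy about 80% of the time, so the simulated average wait should hover near the analytical M/M/1 value of 4 time units; raising the arrival rate shows how quickly queues degrade near saturation.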
Experienced Coaches and Information Sharing Are Needed
Compared to the size of the CMMI Lead Appraiser community, the size of the CMMI high maturity coaching and mentoring community is very small. During 2007 and 2008, it became apparent that a number of SEI clients who wanted to pursue CMMI high maturity needed hands-on coaching related to the development of process performance models. The need for coaches and mentors knowledgeable about CMMI high maturity topics could rapidly exceed the need for appraisal services, and SEI Partners offering high maturity coaching could be in much greater demand.
The SEI is establishing an SEI CMMI-Six Sigma Belt certification program that will provide Black Belt and Master Black Belt coaches via an SEI Partner list. This approach will provide a venue for small- to medium-sized organizations to network and learn from others that are also developing process performance models.4
Lead Appraisers Need Experience Evaluating High Maturity Measurement Activities
The CMMI High Maturity Lead Appraiser Oral Exam is an opportunity for appraisers to demonstrate their knowledge of required topics, discuss professional experiences, and show their under-
3 Many examples of the use of discrete event simulation in the services community can be found at http://www.processmodel.com/resources/samplemodels.html, http://www.allbusiness.com/3470945-1.html?query=%22discrete+event+simulation%22+services&x=0&y=0, and http://search-www.isixsigma.com/cgi-bin/ss_query?related=0&keys=case+%2B%22discrete+event+simulation%22+%2Bservice&sitenbr=130985463.
For further information, see Moving Up the CMMI Capability and Maturity Levels Using Simulation [Raffo 2008].
4 A brief description of this program is available in an SEI Partner Network newsletter [SEI 2008].
standing of the roles and responsibilities of a High Maturity Lead Appraiser. During the first year the exam was given, it became apparent that many Lead Appraisers were disadvantaged in evaluating evidence during CMMI High Maturity SCAMPIs. Many lacked direct experience in conducting process performance modeling and had not observed high maturity organizations performing process performance modeling. As a result, they had little frame of reference to evaluate evidence of process performance modeling during the SCAMPI A’s. From this perspective, workshops would illuminate an entire landscape of modeling that participants could use to further their professional development and assist clients seeking additional guidance on process performance modeling best practices.
1.2 High Maturity Practices Workshop Series
A series of twice-yearly SEI workshops has been planned to address the challenges and community needs described in Section 1.1 by encouraging organizations to share their experiences throughout the wider community. The workshop format was selected to allow organizations to share lessons learned in deployment, adoption, and institutionalization of CMMI process performance baselines and models with the goal of improving the practice of and value added by measurement and analysis in high maturity organizations.
2 High Maturity Workshop Series Kickoff
2.1 Workshop Participants and Goals
The focus of the first workshop was building and using CMMI process performance models. Participation was limited to a small group of organizations who were early adopters of process performance models and baselines and was by invitation only. Representatives from Hill Air Logistics Center, Lockheed-Martin, Northrop Grumman, and Raytheon attended.
The main goals of the workshop were to
- allow CMMI high maturity organizations to share best practices and case studies
- identify ways to develop CMMI high maturity measurement and analysis practices and accelerate their adoption
- enable networking among practitioners
2.2 Workshop Structure
The workshop was scheduled for two days and was held in conjunction with the SEPG North America conference March 17-20, 2008, in Tampa, Florida. The workshop began with an SEI presentation summarizing the interpretation of process performance models and baselines and an overview of the SEMA CMMI high maturity project. This presentation is available at http://www.sei.cmu.edu/sema/presentations/hmworkshop.pdf.
Each organization gave a 20-minute presentation summarizing its past experiences and future plans related to the following topics:
- barriers faced
- lessons learned in the deployment, training, adoption, and institutionalization of CMMI process performance baselines and models
- best practices and examples of valid, practical methods for implementing process performance models and baselines
- data quality and integrity issues
- plans for modeling over the next three to six months, including the nature of the performance outcomes and drivers most likely to be investigated
- suggestions for subject matter to include in future SEI state-of-the-practice studies
2.3 Summary of Presentations
Barriers Faced
The organizations noted that they faced challenges in the following areas:
1. establishing the value of developing and using process performance models (PPMs) and baselines (PPBs)
2. convincing project managers to collect new measures to be used for their PPMs and PPBs
3. retaining consistent operational definitions as the scope of their measurement and analysis activities expands
The participants expressed a strong consensus about these barriers. As with any new initiative or tool adoption requiring significant investment, the business value of statistically based process performance models must be communicated. Without immediate help in this area, many participants felt that their efforts to convince management and the organization to collect additional measures, with consistent operational definitions, would be an uphill battle. Although participants agreed that domain knowledge was the greatest challenge, creating models still requires a moderate degree of effort. Several participants recounted that the effort to create individual models took several weeks or months. For some, this was unexpected because they had incorrectly believed that modeling was a one-pass activity warranting only several hours of effort.
Lessons Learned
Among the lessons learned, the workshop attendees noted that
• useful PPMs and PPBs require domain and statistical knowledge. Neither alone is sufficient.
• coaching and mentoring are critical elements of the adoption strategy when developing and using the PPMs. This includes guidance on possible decisions and actions associated with results from PPMs and PPBs.
Participants echoed the need for domain knowledge in developing models. Some even shared experiences in which statistical experts lacking sufficient domain knowledge created models that had little value to the organization and its projects. For this reason, many participants noted that their organizations were striving to involve a variety of domain experts in developing models.
A number of organizations noted that their modeling experts were located centrally in the organization rather than at the project level. This unfortunately created challenges in domain relevancy and made it difficult to have a project-level focus in the models to aid actual project execution.
All of the organizations represented in the workshop made use of some form of structured coaching and mentoring. Although this manifested predominantly in the form of Six Sigma belts, several of the organizations used other methods of coaching and mentoring for modeling. When questioned by the SEI group, there was little recognition of the possible need for upward coaching and mentoring. The SEI group shared the notion of upward mentoring as a possible improvement in guiding middle and upper management in CMMI high maturity practices, specifically in the use of process performance modeling to manage the organization and projects. For mentoring to be successful, process and behavioral changes are needed from both project personnel and management. For example, upper management may benefit from coaching to enable them to correctly interpret and use at the organizational level the results of the analyses conducted at the project level.
Best Practices
Participants discussed what they considered to be best practices and tips for successful implementation of process performance models and baselines, including
- providing education and tools to support modeling and analysis
- verifying data integrity before using the data for PPBs and PPMs
- performing product simulation and analysis in addition to process simulation and analysis
Almost all participating organizations provided training in statistics as well as electronic tools for statistical analysis and modeling. Some organizations preferred to use Microsoft Excel spreadsheets for analysis and Excel add-ons for modeling. Most organizations used different tools to create management reports because the statistical tools used to conduct the analyses could not produce polished graphs for management slide presentations.
A number of the participating organizations instituted a series of significant work flow checks and balances to ensure data integrity because experience had taught them that modeling is almost impossible with noisy and corrupt data. Finally, the SEI team was surprised at the degree of simulation modeling employed by several of the participating organizations. This might have resulted from the influence of systems engineers with backgrounds in simulation modeling.
Data Issues
Data quality and integrity problems that are encountered at lower maturity levels continue to threaten the potential value from PPMs and PPBs. Some examples include
- inconsistent operational definitions, which wreak havoc on modeling attempts
- data collection that is done manually, making it subject to human error
- problems aggregating and disaggregating data
- missing context information to go with the collected data
- consistency problems arising from decentralized databases
These issues are not new. The SEI has heard of similar issues with organizations at all CMMI maturity levels. As organizations make greater analytical use of their measures, they find out how many data integrity issues exist in their data.
Many participants recounted the need to revamp their measurement and analysis programs as they progressed up to CMMI maturity levels 4 and 5. They generally agreed that their measures were not at the proper level of granularity to support decision making, especially at the project level. They also recounted experiences in which aggregating measures from across the organization was hampered by the lack of context information needed for proper segmentation and stratification.
Modeling Plans
Participating organizations planned to use the following modeling techniques in the next three to six months:
- Bayesian methods to calculate control limits during statistical management
- regression analysis to model and predict customer satisfaction
- measurement system evaluations to identify the degree of noise in data due to the measurement process
- discrete event simulation to facilitate Lean Six Sigma improvements
Again, the SEI team was surprised by the variety of modeling techniques participating organizations planned to use. This demonstrates that the community is not fearful of statistical or modeling techniques. What may be the challenge, as discussed earlier, is that the community needs to learn how to more effectively apply process performance modeling in support of project execution. In this manner, the organization will have institutional learning and the ability to affect real-time execution of projects towards successful outcomes.
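As a point of reference for the control-limit technique in the list above: the conventional, non-Bayesian starting point is the 3-sigma individuals (XmR) chart, which the Bayesian methods the participants planned would refine. The sketch below computes such baseline limits from invented measurement data.

```python
# Illustrative baseline (not the Bayesian variant the participants
# planned): 3-sigma limits for an individuals (XmR) control chart.
# The data values are invented review-effort measurements.
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3, 10.1]

center = sum(data) / len(data)
# Average moving range between consecutive observations
mr = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(mr) / len(mr)
# d2 = 1.128 is the standard bias-correction constant for subgroups of 2
sigma_hat = mr_bar / 1.128
ucl = center + 3 * sigma_hat
lcl = center - 3 * sigma_hat

print(f"center={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
```

Points falling outside [LCL, UCL] signal that the subprocess is not behaving as a single stable process and warrant causal investigation before the data feed a process performance model.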
Subject Matter for State-of-the-Practice Studies
Workshop participants suggested the following topics of interest for future SEI state-of-the-measurement-practice studies:
• adoption and use of measurement and analysis related to high maturity practices, particularly the use of PPMs and PPBs
• balancing statistical and domain expertise to develop and sustain the value and use of PPMs and PPBs
• bases for choosing critical subprocesses to place under statistical control
• ways in which to develop a collection of useful PPMs
• data archeology (i.e., creating baselines from paper records for previously unmeasured attributes)
• data quality and integrity
The SEI team reconfirmed topics for the 2008 State of the Measurement Practice survey using the suggestions of workshop participants. Due to the need for an unusually detailed survey of high maturity organizations, the SEI decided to conduct two annual surveys on the state of the practice in measurement: 1) a survey for the general community, and 2) a survey targeting CMMI high maturity organizations. In this manner, each subpopulation could be given information pertinent to their perspectives and needs.
3 Future Workshops
The SEMA initiative plans to hold high maturity measurement workshops semi-annually to allow invited attendees to continue sharing their experiences and lessons learned in the adoption, development, and use of measurement and analysis in high maturity settings.
Participants from the first workshop and representatives from other CMMI high maturity organizations were invited to submit proposals for presentations at the next workshop, which will be held in Denver immediately following the CMMI Technology Conference in November 2008. Those accepted will discuss their current measurement and analysis procedures and initial results. SEI experts will offer additional guidance on high maturity topics and present pertinent results from the 2008 SEI State of the Measurement Practice survey.
Planned work products from the next workshop include
- thorough case descriptions of process performance models and their outcomes in high maturity organizations
- break-out working session reports with recommendations, for example on reducing barriers to effective training, staffing, management support, the alignment of modeling to business goals, and using different analytic forms of modeling
- requirements definitions for a possible SEMA course on the coaching, adoption, institutionalization, and evolution of CMMI process performance models and baselines
- plans for a coordinated empirical study of common performance outcomes and associated controllable and uncontrollable drivers of those outcomes
Subsequent workshops will be open to a larger group of CMMI high maturity organizations. Organizations wishing to participate in future workshops must be willing to document and share their experiences with the use of measurement and analysis methods in relation to high maturity practice. To ensure high value workshops, the SEMA team will continue to screen submissions prior to accepting an organization’s request to participate.
References
URLs are valid as of the publication date of this document.
[Raffo 2008]
Raffo, D. Moving Up the CMMI Capability and Maturity Levels Using Simulation (2008).
[SEI 2008]
SEI Partner Network Newsletter 5, 2 (May 2008).
<table>
<thead>
<tr>
<th colspan="2">Report Documentation Page</th>
</tr>
</thead>
<tbody>
<tr><td>1. Agency Use Only</td><td>(Leave Blank)</td></tr>
<tr><td>2. Report Date</td><td>November 2008</td></tr>
<tr><td>3. Report Type and Dates Covered</td><td>Final</td></tr>
<tr><td>4. Title and Subtitle</td><td>CMMI High Maturity Measurement and Analysis Workshop Report: March 2008</td></tr>
<tr><td>5. Funding Numbers</td><td>FA8721-05-C-0003</td></tr>
<tr><td>6. Author(s)</td><td>Robert W. Stoddard II, Dennis R. Goldenson, Dave Zubrow, &amp; Erin Harper</td></tr>
<tr><td>7. Performing Organization Name(s) and Address(es)</td><td>Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213</td></tr>
<tr><td>8. Performing Organization Report Number</td><td></td></tr>
<tr><td>9. Sponsoring/Monitoring Agency Name(s) and Address(es)</td><td>HQ ESC/XPK, 5 Eglin Street, Hanscom AFB, MA 01731-2116</td></tr>
<tr><td>10. Sponsoring/Monitoring Agency Report Number</td><td></td></tr>
<tr><td>11. Supplementary Notes</td><td></td></tr>
<tr><td>12a. Distribution/Availability Statement</td><td></td></tr>
<tr><td>12b. Distribution Code</td><td></td></tr>
<tr><td>13. Abstract (Maximum 200 Words)</td><td>Organizations are increasingly looking for guidance on what it takes to implement Capability Maturity Model® Integration (CMMI®) high maturity practices and how to sustain their momentum for improvement. As high maturity organizations work to improve their use of measurement and analysis, they often look to examples of successful implementations for guidance. In response to the need for clarification and guidance on implementing measurement and analysis in the context of high maturity processes, members of the Software Engineering Measurement and Analysis (SEMA) initiative organized a workshop at the 2008 SEPG North America conference to bring leaders in the field together at a forum on the topic. Other workshops will be held as part of an ongoing series to allow high maturity organizations to share best practices and case studies.</td></tr>
<tr><td>14. Subject Terms</td><td>High maturity measurement and analysis, workshop, CMMI, process performance models</td></tr>
<tr><td>15. Number of Pages</td><td>17</td></tr>
<tr><td>16. Price Code</td><td></td></tr>
<tr><td>17. Security Classification of Report</td><td>Unclassified</td></tr>
<tr><td>18. Security Classification of This Page</td><td>Unclassified</td></tr>
<tr><td>19. Security Classification of Abstract</td><td>Unclassified</td></tr>
<tr><td>20. Limitation of Abstract</td><td>UL</td></tr>
</tbody>
</table>
NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89) Prescribed by ANSI Std. Z39-18 298-102
Operating Experience of Programs and Changing Demand Profile – Consideration of Paths
Wolfgang D. Ehrenberger
Hochschule Fulda, Marquardstraße 35, D - 36039 Fulda
(Tel ++49 661 9640 325; email: Wolfgang.D.Ehrenberger@informatik.hs-fulda.de)
Abstract: Since more and more software exists, it is economically important to estimate whether or not operating experience gained with earlier software applications can be used in new applications. Normally new applications have another demand profile than the earlier applications had. For safety-related applications quantitative relationships are required. This contribution derives formulae that can be used to estimate the failure probability of the software in the new environment. In contrast to the work of other authors the present considerations are not based on software modules, but on execution paths. The related inaccuracies are taken into account. An example is given, as well as a method for getting and storing the path characteristics. The pre-requisites that have to be met in order to make the derived formulae applicable are mentioned.
1. INTRODUCTION
In safety-related software applications the question about the confidence that can be placed in pre-existing software becomes more and more important: Should software that requires licensing be developed for the current project from scratch, or is it better to use software that has been used for a certain time in other applications? It seems clear that certain standard functions, as they are usually provided by compilers, should not be re-written, but rather taken from the related library. Similar views are common on operating systems. Meanwhile a large amount of frequently used application software exists, and therefore some general thoughts seem to be in order. We ask: What data are needed to accept that software in another application? One will usually accept software if the new demand profile is identical or at least very similar to the pre-existing application profile and the number of successful executions or runs is large. If the new profile differs from the old one, a quantitative estimation of the effect of the differences is helpful.
1.1 Characteristic of this paper
This contribution discusses the arising questions. It is based on the theory of stratified sampling, which has been known for a long time, as given e.g. by (Saifuddin, 2009) or (de Vries, 1986), but seems not to have been recognized yet by the software community. In this contribution software is considered as a set of paths. A path is a possible execution of the software from its starting point to its end point. Each path is regarded as a stratum.
If we consider probabilistic software verification, it is problematic to see software as a composition of individual modules, e.g. subroutines, functions, methods or objects. I think it is better to consider software as a composition of paths, because it is the paths that are really executed. The modularised view is probably more suitable for hard-wired equipment. But software has a clear advantage over hardware: it does not necessarily get less reliable if it gets larger. So we are better off if we code complicated functions in software. The disadvantage of software in contrast to hardware is the possibly far-reaching effect of one programming fault along any possibly extended computing path. The considerations of this contribution take care of that, because they focus on execution sequences and not on code parts.
1.2 Literature
Statistical testing and operating experience, and the possible conclusions that can be drawn from them, have fascinated researchers for a long time.
(Littlewood and Strigini, 1993) concur with a view of the British authority responsible for licensing nuclear power plants, which says that it is impossible to verify or validate failure probabilities per demand lower than 10⁻⁴ for software. The application area was reactor protection systems, which have to operate basically on demand, e.g. for shutting the reactor down, and are called only rarely. The limitation was not claimed for frequently called functions¹
¹ “function” is used for something that happens or has to happen, not as it is used by some programming languages, as e.g. C
such as the save function of an editor or the starting of a passenger car.
(Littlewood, 2013) and (Butler and Finelli, 1999) explain that it is impossible to demonstrate high reliabilities of software by probabilistic testing. Their results are based on a pure black-box view. The mathematical foundations are correct and demonstrated carefully; so are the conclusions. The present contribution, however, does not rely on a black-box view, but assumes a certain knowledge of the internals of the software, namely the knowledge of its paths. If the paths are known, more precise and more optimistic statements can be made, and one can conclude from the software behaviour under one demand profile about its behaviour under another. As far as I know, only a few computer scientists have dealt with the inner structure of software that is to be certified probabilistically. Among these are (Kuball, May and Hughes a, b, c 1999) and (Söhnlein et alii, 2010). The reservations quoted earlier against the demonstration of high reliabilities by probabilistic means mainly rest upon the infeasibly high numbers of required test cases or testing times. (Littlewood and Wright, 1997) describe thoroughly how these numbers or times are to be derived. In contrast to the following they also consider the appearance of failures. Of particular interest is their proof of the equivalence of Bayesian and frequentist thinking. A related demonstration is also found in (Ehrenberger et alii 1985).
1.3 Overview
The following chapter 2 discusses the principles of stratified sampling of software. Chapter 3 considers the effect of inaccuracies in the data that form the basis of the calculations. Chapter 4 gives an example, chapter 5 deals with the acquisition of the necessary data, and chapter 6 contains the concluding remarks and indicates limitations of the method. The appendix lists the prerequisites that are necessary to do the mentioned calculations. I believe that these prerequisites are so demanding that the related effort will only pay off if the software has to deal with safety applications.
2. MONOLITHIC AND COMPOSED SOFTWARE
2.1 Prerequisites and basic formula
Ideal assumptions are made, in particular: No failure has occurred in the past. Then we get for the upper limit \( \overline{p} \) of the failure probability per demand \( p \) after \( n \) operational runs, such that \( p < \overline{p} \) with a known probability, i.e. a known degree of significance \( \alpha \):
\[
\overline{p} = -\frac{\ln \alpha}{n} \tag{1}
\]
\( \alpha = 1 - \text{level of confidence}. \) The confidence interval refers to one side. See also (IEC 61508-7, 2010).
2.2 Monolithic System
We start with a system that is taken as a unit, as a black box; it does not have any known substructure like modules or paths. For the failure probability of the total system after \( n_t \) successful runs it holds, to the degree of significance \( \alpha \):
\[
\bar{p}_t = -\frac{\ln \alpha}{n_t};
\]
ideal conditions are assumed. The subscript \( t \) indicates that the whole software and all runs are meant. It is expected that the view on the system does not influence its failure probability; i.e. that the failure probability given by (1) is also received as calculation result, if we consider the software as being composed of paths.
2.3 Composed System, stratified sampling
Each program can be thought of as being composed of a set \( \{N\} \) of paths. See also formulae (7) as an example.
**Definition:** A path consists of the statements that are traversed during a possible run through a program from its start to its end; it ends, when it has no further effects on other code parts; if a path ends, a new path can begin.
**Assumption:** The demand profile of a program or program part is described by the usage of its paths.
We define further:
- \( N \) number of paths,
- \( n_i \) number of runs (or traversals) of path \( i \).
The total number of runs \( n \) of the system equals the sum of the number of runs of all paths
\[
n = \sum_{i=1}^{N} n_i.
\]
The probability of running path \( i \) is \( \pi_i = \frac{n_i}{n} ; \sum_{i=1}^{N} \pi_i = 1 \).
The upper limit of the failure probability of path \( i \) is
\[
\bar{p}_i = -\frac{\ln \alpha}{n_i} \quad \text{(1a)}
\]
If a path is executed without any failures, we get, considering the one-sided confidence interval:
The number of (fictitious) failures of path \( i \) during \( n_i \) runs equals the probability of selecting that path times the probability of failure per run times the number of runs of that path.
So \( \text{number_of_failures_of_path}_i = \pi_i \times \bar{p}_i \times n_i \). See also the theory of stratified sampling, e.g. in (Saifuddin, 2009) or (de Vries, 1986).
The total number of failures of the system is the probability of failure of the individual run \( \bar{p}_t \) times the total number of runs \( n \); it is also the sum of the failures of the individual paths; therefore we get for a software system that consists of \( N \) paths:
\[
n \times \bar{p}_t = \sum_{i=1}^{N} n_i \times \bar{p}_i \times \pi_i \tag{2}
\]
This relation holds for the case of “no failure”. We always consider the upper limits from (1a) for the probabilities. We get:
\[
\bar{p}_t = \sum_{i=1}^{N} n_i \times \bar{p}_i \times \pi_i / n = \sum_{i=1}^{N} \pi_i^2 \times \bar{p}_i \tag{3}
\]
During the derivation of (2) and (3) no assumption has been made about any old or new profile. Both formulae are valid during the phase of gaining operating experience and during any new application of the software. Normally all \( \bar{p}_i \) are larger than \( \bar{p}_t \), because we assume 0 failures and a one-sided confidence interval, and because the number of runs behind \( \bar{p}_t \) is larger than the number of runs behind any of the \( \bar{p}_i \).
2.4 The new profile
What is different between the old and the new operation is just the set of the \( \pi_i \). This set represents the operation profile; it changes between the old and the new operation. Regarding the operating experience, the \( \pi_i \) of the old operational profile have to be taken; for any new application, the \( \pi_i \) of the new application have to be taken. The \( \bar{p}_i \), however, do not change between the old and the new operation. So (3) can also be used for the new profile.
There are some conditions, however: All paths must be known explicitly, as well as their transition numbers in both the old and the future application. If these numbers are not known, a conservative estimate is needed. Therefore the paths as such and the number of their traversals must be recorded during previous operations. If they are not known for the future operation, they must be conservatively estimated.
Also: The data values occurring within one path execution must be sufficiently similar in the old and the intended application. If this does not hold, sub paths must be defined that reflect the differing data values.
3. INACCURACIES
If the individual \( n_i \) are not exactly known, we have to deal with related uncertainties. These can occur in both the old and the new operation profile. We are interested in a conservative estimation. We describe the uncertainty of each count by factors \( \delta_{i\_min} \) and \( \delta_{i\_max} \), such that the true number of runs of path \( i \) lies between \( n_i \cdot \delta_{i\_min} \) and \( n_i \cdot \delta_{i\_max} \); we always assume \( \delta_{min} < 1 \) and \( \delta_{max} > 1 \). From (1a) we get for each path of the experienced profile the conservative bound
\[
\bar{p}_{i\_max} = -\frac{\ln \alpha}{n_i \cdot \delta_{i\_min\_old}}
\]
For the new profile we take the largest possible count for path \( i \) and the smallest possible counts for all paths and estimate:
\[
\pi_{i\_max\_new} = \frac{n_i \cdot \delta_{i\_max\_new}}{\sum_{j=1}^{N} n_j \cdot \delta_{j\_min\_new}} \tag{4}
\]
If
\[
\frac{1}{\delta_{i\_min\_new}} = \delta_{i\_max\_new} = \Delta ,
\]
then \( \pi_{i\_max\_new} = \Delta^2 \cdot \pi_{i\_new\_estimated} \), where \( \pi_{i\_new\_estimated} \) stands for the new selection probability of path \( i \) without consideration of the uncertainties. For the new operation we get from (4):
\[
\bar{p}_{t\_new\_max} = \sum_{i=1}^{N} \pi_{i\_max\_new}^2 \cdot \bar{p}_{i\_max} = \Delta^4 \sum_{i=1}^{N} \pi_{i\_new\_estimated}^2 \cdot \bar{p}_{i\_max} \tag{5}
\]
This makes it possible to consider the influence of inaccuracies of the observation of the past operational runs and of the estimation of the future demands. If also \( 1/\delta_{i\_min\_old} = \Delta \), it holds:
\[
\bar{p}_{t\_new\_max} = \Delta^5 \sum_{i=1}^{N} \pi_{i\_new\_estimated}^2 \cdot \bar{p}_{i\_estimated} \tag{6}
\]
Obviously the inaccuracies of the knowledge of the new demand profile dominate the inaccuracies of the result, as they occur to the 4th power. Table 1 shows examples, based on (5), i.e. without taking into account the influence of the \( \delta_{i\_min\_old} \).
<table>
<thead>
<tr>
<th colspan="2">Table 1. Effect of inaccuracies of the knowledge of the new demand profile (factor on \( \bar{p}_t \))</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( \Delta \)</td>
<td>1.2</td>
</tr>
<tr>
<td>Effect on \( \bar{p}_{t\_new\_max} \)</td>
<td>≈ 2</td>
</tr>
</tbody>
</table>
The table gives the very worst case, as it assumes deviations to the worse by all paths. But in reality an overestimation of the number of runs of one path will result in an underestimation of the number of runs of another path. See also Table 3 versus Table 2.
4. EXAMPLE
Fig. 1 gives an example of a code fragment. The fragment has 4 paths:
[Figure: two successive branches; the first branch executes B11 (\( n_{11} \) runs) or B12 (\( n_{12} \) runs), the second branch B21 (\( n_{21} \) runs) or B22 (\( n_{22} \) runs).]
Fig. 1. Flow Diagram: Code fragment with 2 branches consisting of 4 basic blocks, each traversed \( n_{ij} \) times; 4 paths; \( n_{11} + n_{12} = n_t = n_{21} + n_{22} \).
Path 1 = \{B11, B21\}, Path 2 = \{B11, B22\},
Path 3 = \{B12, B21\}, Path 4 = \{B12, B22\}. \tag{7}
In total 30 000 operational runs are considered; \( \alpha \) is assumed to be 0.05. (1) gives \( \bar{p}_t = 10^{-4} \). We assume the individual paths have the numbers of runs of Table 2. The \( \pi_i \) and \( \bar{p}_i \) are calculated; the latter at a level of significance of 0.05 after (1a); the end result is calculated by (3), leading to the same value of \( \bar{p}_t \).
Table 2. Operating experience of the code fragment of Fig. 1
<table>
<thead>
<tr>
<th></th>
<th>Path 1</th>
<th>Path 2</th>
<th>Path 3</th>
<th>Path 4</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( n_i \)</td>
<td>12 000</td>
<td>6 000</td>
<td>9 000</td>
<td>3 000</td>
<td>30 000</td>
</tr>
<tr>
<td>\( \pi_{\text{old}} \)</td>
<td>0.4</td>
<td>0.2</td>
<td>0.3</td>
<td>0.1</td>
<td>1.0</td>
</tr>
<tr>
<td>\( \bar{p}_i \)</td>
<td>2.5*10^{-4}</td>
<td>5*10^{-4}</td>
<td>3.3*10^{-4}</td>
<td>10^{-3}</td>
<td>\( \bar{p}_t = 10^{-4} \)</td>
</tr>
</tbody>
</table>
If the demand profile of the new application differs from the old one, the \( n_i \) of Table 3 might apply, resulting in the other figures of Table 3. Note that the \( \bar{p}_i \) of the paths do not change; \( \bar{p}_t \), however, increases significantly.
Table 3. New operation profile of the code fragment of Fig. 1
<table>
<thead>
<tr>
<th></th>
<th>Path 1</th>
<th>Path 2</th>
<th>Path 3</th>
<th>Path 4</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( n_i \)</td>
<td>300</td>
<td>3 000</td>
<td>2 700</td>
<td>24 000</td>
<td>30 000</td>
</tr>
<tr>
<td>\( \pi_{\text{new}} \)</td>
<td>0.01</td>
<td>0.1</td>
<td>0.09</td>
<td>0.8</td>
<td>1.0</td>
</tr>
<tr>
<td>\( \bar{p}_i \)</td>
<td>2.5*10^{-4}</td>
<td>5*10^{-4}</td>
<td>3.3*10^{-4}</td>
<td>10^{-3}</td>
<td>\( \bar{p}_t \approx 6.5*10^{-4} \)</td>
</tr>
</tbody>
</table>
If the concerned \( n_{i\_new} \) are not well known, the new \( \bar{p}_t \) might still require a correction according to Table 1. If they were inaccurate by 20 %, \( \bar{p}_t \) would be too optimistic by a factor of 2.
It should be noted that the results of both Table 2 and Table 3 are not gained by a calculation based on the failure probabilities of the individual basic blocks as they can be calculated by using (1a) in connection with their traversal numbers. See Table 4.
Table 4. Failure probabilities of the Basic Blocks, old profile
<table>
<thead>
<tr>
<th>Basic Block</th>
<th>B11</th>
<th>B12</th>
<th>B21</th>
<th>B22</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( n_{ij} \)</td>
<td>18 000</td>
<td>12 000</td>
<td>21 000</td>
<td>9 000</td>
<td>60 000</td>
</tr>
<tr>
<td>\( \bar{p}_{ij} \)</td>
<td>1.67*10^{-4}</td>
<td>2.5*10^{-4}</td>
<td>1.4*10^{-4}</td>
<td>3.3*10^{-4}</td>
<td></td>
</tr>
<tr>
<td>\( \pi_{ij\_old} \)</td>
<td>0.6</td>
<td>0.4</td>
<td>0.7</td>
<td>0.3</td>
<td></td>
</tr>
</tbody>
</table>
Table 4 demonstrates: There is no easy way to derive a failure probability for the total software from the failure probabilities of basic blocks or modules.
We can remark: Since \( n_{11} + n_{12} = n_t = n_{21} + n_{22} \), it holds for the upper pair and the lower pair of the basic blocks of the figure:
\[
\bar{p}_t = -\frac{\ln \alpha}{n_t} = \sum_{j=1}^{2} (\pi_{1j})^2 \cdot \bar{p}_{1j} = \sum_{j=1}^{2} (\pi_{2j})^2 \cdot \bar{p}_{2j} \tag{8}
\]
5. COLLECTION OF DATA
A program usually has thousands, if not millions, of paths. In order to use the theory presented here, the data collection has to be nearly exhaustive; it has at least to be able to consider all paths in principle. Each path has to be characterized as such and the number of its traversals counted. It is suggested to store the characterizations and the traversal counts in a tree.
5.1 Storing
All basic blocks of the code are instrumented with an operation that can characterize the related path. Such an instrumentation can use a floating point number for each basic block that is combined with the result gained so far by one of the primitive operators \( p \in \{+, -, *, /\} \). The path characteristics gained in this way are used to address a node of an AVL tree (Adelson-Velskii, 1962) during the phase of gaining the operating experience. The node of the tree could have the following shape:
```c
struct pathCharacteristic {
    double characterizingNumber;        /* identifies the path          */
    unsigned long numberOfPathRuns;     /* number of traversals so far  */
    struct pathCharacteristic *left;    /* subtree with smaller keys    */
    struct pathCharacteristic *right;   /* subtree with larger keys     */
};
```
As AVL trees are always well balanced, the effort of inserting into the right place in the tree increases only logarithmically with the size of the tree. It would only take about 17 steps for 100 000 paths and only about 20 steps for one million paths. The same applies for finding a path-related node for counting the runs. After each path traversal its numberOfPathRuns is increased by 1.
The new operation profile should then be simulated using the same software. A comparison of the new numbers of runs with the old ones would make it possible to estimate upper limits of the failure probabilities in the new environment.
5.2 Example of an instrumentation
The software of Fig. 1 gets the following code lines in addition to the already existing ones.
Starting point of the program, at the beginning of the main function:
```c
double characterizingNumber = 1.0;
```
And then in
### Basic Block 1:
```c
characterizingNumber += 2.0;
```
### Basic Block 2:
```c
characterizingNumber -= 3.0;
```
### Basic Block 3:
```c
characterizingNumber *= 5.0;
```
### Basic Block 4:
```c
characterizingNumber /= 7.0;
```
The resulting value of the characterizingNumber forms the argument of the subroutine that inserts into the tree at path end. Maybe it is not necessary to use only prime numbers for calculating the characterizing number.
6. CONCLUSIONS
It may well happen that the effort needed to implement the considerations of this contribution comes up to the effort for a “normal” verification procedure on the basis of a software analysis, related tests and formal proofs. Nevertheless, even high reliability claims can be supported by this method. But it is not thought that white-box testing strategies could be completely omitted.
Should failures occur during the operating experience and the related faults not have been removed, a special consideration is needed. Related one-sided intervals can be gained by applying the tables of the Poisson distribution. But a program that is known to contain faults will normally not be allowed for safety applications.
Meeting all the requirements that are connected with the method presented can be costly. If they cannot be met, deterministic reasoning is required to demonstrate that the violation does not have any effect, or only a limited and tolerable one. As far as I know, the number of pre-use runs or test runs required by (1) can never be reduced. Using this method does not guarantee success in licensing at lower cost. Using it, however, always results in a warm feeling supplementing the results of other verification efforts; and sometimes it leads to a quantitative reliability claim. Its main area of application is probably allowing widely used software packages in new environments.
REFERENCES
Littlewood, B. (2013) The Problem of Assessing Software Reliability ... when you really need to depend on it; no date, no written source, from internet 2013
APPENDIX A
COLLECTION OF PRE-REQUISITES AND ASSUMPTIONS
The following rules, requirements and assumptions are not in systematic order, as their criticality is usually project dependent.
R1 The code of the pre-existing version(s) and the code of the version for the future application shall be identical.
R2 No failures must occur during pre-operation.
R3 Sequence and number of runs of any path must not influence any future run.
R4 The distribution of input data that are processed in one path is approximately equal between the experience gathering period and the future operation period.
R5 If the operational experience is simulated by tests, the individual test runs must be independent from each other.
R6 Observation of information gathering is so strict and complete that any possible failure is recognised.
R7 A specification existed that allowed one to decide whether any result was correct or incorrect.
R8 The paths shall be identified for the old and the future demand profile.
R9 For estimating the demand profile for a new application model checking can perhaps help.
R10 Each path shall have at least 2 runs.
R11 The \( \bar{p}_i \) and \( \pi_i \) shall be evaluated or conservatively estimated for each path \( i \).
R12 Concatenation of paths or modules is allowed, if they do not interact; in this case the largest \( \bar{p}_i \) of the chain shall be taken.
R13 If interacting modules are concatenated, the paths from start to end shall be taken.
R14 Paths whose correctness has been proven, may be counted with $p_i = 0$.
R15 It is recommendable to verify the effect of loops with varying repetition number deterministically.
R16 Separate considerations are required for complicated logical expressions or for complicated algorithms.
R17 Some aspects, e.g. events that shall be triggered by the software at a specific future date, need deterministic verification and white box testing.
R18 During the pre-use phase where the experience is gathered, no failure masking is allowed. See e.g. (Bishop 1987)
Package ‘thematic’
October 14, 2022
Title Unified and Automatic 'Theming' of 'ggplot2', 'lattice', and 'base' R Graphics
Version 0.1.2.1
Description Theme 'ggplot2', 'lattice', and 'base' graphics based on a few choices, including foreground color, background color, accent color, and font family. Fonts that aren't available on the system, but are available via download on 'Google Fonts', can be automatically downloaded, cached, and registered for use with the 'showtext' and 'ragg' packages.
URL https://rstudio.github.io/thematic/,
https://github.com/rstudio/thematic#readme
Depends R (>= 3.0.0)
Imports utils, graphics, grDevices, grid, farver, rlang, scales, rstudioapi (>= 0.8), rappdirs, ggplot2 (>= 3.3.0)
Suggests lattice, stats, withr, sysfonts, showtext, Cairo, systemfonts, ragg, knitr, rmarkdown, htmltools, shiny (>= 1.5.0), bslib, testthat, gganimate, vdiffr (>= 1.0.0), svglite, jsonlite, curl
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.1.1
Collate 'auto.R' 'base.R' 'cache.R' 'gfonts.R' 'ggplot.R' 'globals.R'
'hooks.R' 'imports.R' 'knitr.R' 'lattice.R' 'onLoad.R'
'thematic-save-plot.R' 'thematic.R' 'utils.R'
'view-shinytest.R'
NeedsCompilation no
Author Carson Sievert [aut, cre] (<https://orcid.org/0000-0002-4958-2844>), Barret Schloerke [aut] (<https://orcid.org/0000-0001-9986-114X>), Joe Cheng [aut], RStudio [cph]
Maintainer Carson Sievert <carson@rstudio.com>
auto_config Configure auto theming
Description
Auto theming is really only "guaranteed" to work inside of a shiny runtime. In any other context, auto theming is based on a set of heuristics, which won’t fit every use case. As a workaround, this function allows one to configure both a preference for specific auto values (e.g., bg, fg, etc) as well as the priority that certain information should receive.
Usage
```r
auto_config(
bg = NULL,
fg = NULL,
accent = NULL,
font = NULL,
priority = c("shiny", "config", "bslib", "rstudio")
)
```
```r
auto_config_set(config)
```
```r
auto_config_get()
```
Arguments
- `bg` a background color.
- `fg` a foreground color.
- `accent` a color for making certain graphical markers 'stand out' (e.g., the fitted line color for `ggplot2::geom_smooth()`). Can be 2 colors for lattice (stroke vs fill accent).
- `font` a `font_spec()` object. If missing, font defaults are not altered.
- `priority` the order of priority to use when resolving auto values. Possible values include:
  - "shiny": use `shiny::getCurrentOutputInfo()` values (if any) to resolve auto values.
  - "config": use the values provided to this function (if any) to resolve auto values.
  - "bslib": use `bslib::bs_get_variables()` values (if any) to resolve auto values (only relevant when knitr is in progress).
  - "rstudio": use `rstudioapi::getThemeInfo()` values (if any) to resolve auto values.
- `config` an `auto_config()` object.
Details
Configuring auto theming behavior is especially useful for developers of a custom rmarkdown output document that wish to have more sensible auto theming behavior for users of the document. In particular, by having the output document call `auto_config_set()` "pre-knit" with the document's styling preferences (and restoring the old defaults "post-knit"), users of the output document can then simply call `thematic_on()` within their document to use those preferences.
Call this function with no arguments to get the current auto defaults.
Value
a config (list-like) object.
Examples
```r
old_config <- auto_config_set(auto_config("black", "white"))
thematic_with_theme(
  thematic_theme(),
  plot(1:10, 1:10)
)
auto_config_set(old_config)
```
### auto_resolve_theme
**Resolve auto values**
Resolves 'auto' values based on the current execution environment and configuration (i.e., `auto_config_get()`).
**Usage**
```r
auto_resolve_theme(theme)
```
Arguments
theme: a thematic_theme() object.
Value
The theme object with resolved 'auto' values.
See Also
auto_config_set()
Examples
```r
old_config <- auto_config_set(auto_config(bg = "black", fg = "white"))

# Resolving auto values in local theme objects
theme <- thematic_theme()
theme[c("bg", "fg")]
theme <- auto_resolve_theme(theme)
theme[c("bg", "fg")]

# By default, auto values are resolved when accessing
# global theme options
thematic_on()
thematic_get_option("bg", resolve = FALSE)
thematic_get_option("bg")
thematic_off()

auto_config_set(old_config)
```
---
font_cache_set
Control the directory used for font caching
Description
The default directory used for font caching is system dependent; and thus, not very portable from machine to machine. Use this function to move thematic’s cache to a new path. This is primarily useful for making font cache relative to a shiny app directory, so that, when the app is deployed, the cache deploys with it.
Usage
```r
font_cache_set(path, cleanup = FALSE)
```
Arguments
path a filepath for the new cache directory.
cleanup whether or not to remove font files from the previously used caching directory
(after copying to the new location).
Value
Returns the previously used caching directory.
See Also
thematic_on(), font_spec()
Examples
## Not run:
font_cache_set("my_app")
shiny::runApp("my_app")
## End(Not run)
---
font_spec Font specification
Description
Specify a collection of font families. The first font family supported by the relevant device (i.e.,
the device that is open, or will be opened, at plotting time) is used by thematic. If a given font
family is not supported by the default, but is a Google Font and install = TRUE, the font will be
downloaded, cached, and registered for use with the showtext and ragg packages.
Usage
font_spec(
families = "",
scale = 1,
install = is_installed("ragg") || is_installed("showtext"),
update = FALSE,
quiet = TRUE
)
Arguments
families a character vector of font families.
scale numerical constant applied to font sizes.
install whether to download and register font families available via Google Fonts (but unavailable to R). After a successful download, fonts are cached (in a directory which can be managed via `font_cache_set()`), and registered for use with the `showtext` and `ragg` packages. If installation fails with a valid internet connection, you may need to fetch the latest Google Font information prior to installation (i.e., set `update = TRUE`).
update if TRUE, the latest Google Fonts are fetched and any out-dated font cache is updated. Fetching the latest fonts requires a Google Font API key (one is bundled with the package, but you can set your own via an environment variable, `GFONT_KEY`).
quiet whether to suppress download messages.
Value
the input arguments as a list.
See Also
`thematic_save_plot()`, `thematic_on()`, `font_cache_set()`
---
okabe_ito A color-blind safe qualitative colorscale (Okabe-Ito)
Description
This is the default qualitative colorscale in `thematic_on()`
Usage
`okabe_ito(n = NULL)`
Arguments
n number of colors.
Value
a vector of color codes.
References
https://jfly.uni-koeln.de/color/
See Also
`thematic_on()`
sequential_gradient
Control parameters of the sequential colorscale
Description
Controls the default weighting and direction of the color gradient derived from the fg, bg, and accent color (defined in thematic_on()).
Usage
sequential_gradient(fg_weight = 0.9, bg_weight = 0, fg_low = TRUE, n = 30)
Arguments
fg_weight a number (between 0 and 1) defining how much of the fg color should be mixed into the colorscale.
bg_weight a number (between 0 and 1) defining how much of the bg color should be mixed into the colorscale.
fg_low if TRUE (the default), the fg color is used for the low end of the colorscale (rather than the high end).
n number of color codes.
Value
a list of options for passing to the sequential argument of thematic_on().
Examples
# Gradient from fg to accent
fg <- sequential_gradient(1, 0)
thematic_on("black", "white", "salmon", sequential = fg)
ggplot2::qplot(1:10, 1:10, color = 1:10)
# Gradient from accent -> bg
bg <- sequential_gradient(0, 1)
thematic_on("black", "white", "salmon", sequential = bg)
ggplot2::qplot(1:10, 1:10, color = 1:10)
# Gradient from mix(accent, fg, 0.5) -> mix(accent, bg, 0.5)
mix <- sequential_gradient(0.5, 0.5)
thematic_on("black", "white", "salmon", sequential = mix)
ggplot2::qplot(1:10, 1:10, color = 1:10)
# Use fg (instead of bg) for high end of scale
mix_flip <- sequential_gradient(0.5, 0.5, fg_low = FALSE)
thematic_on("black", "white", "salmon", sequential = mix_flip)
ggplot2::qplot(1:10, 1:10, color = 1:10)
thematic_on
Enable (or disable) simplified theming of R graphics.
Description
A unified interface for theming ggplot2, base, and lattice graphics based on a handful of styling options. In some cases (most notably in a shiny runtime), these options can automatically resolve to relevant CSS styles (see the "Auto theming" section below).
Usage
thematic_on(
bg = "auto",
fg = "auto",
accent = "auto",
font = NA,
sequential = sequential_gradient(),
qualitative = okabe_ito(),
inherit = FALSE
)
thematic_off()
thematic_theme(
bg = "auto",
fg = "auto",
accent = "auto",
font = NA,
sequential = sequential_gradient(),
qualitative = okabe_ito(),
inherit = FALSE
)
thematic_shiny(
bg = "auto",
fg = "auto",
accent = "auto",
font = NA,
sequential = sequential_gradient(),
qualitative = okabe_ito(),
inherit = FALSE,
session = shiny::getDefaultReactiveDomain()
)
thematic_rmd(
bg = "auto",
fg = "auto",
accent = "auto",
font = NA,
sequential = sequential_gradient(),
qualitative = okabe_ito(),
inherit = FALSE,
session = shiny::getDefaultReactiveDomain()
)
Arguments
bg a background color.
fg a foreground color.
accent a color for making certain graphical markers 'stand out' (e.g., the fitted line color for `ggplot2::geom_smooth()`). Can be 2 colors for lattice (stroke vs fill accent).
font a `font_spec()` object. If missing, font defaults are not altered.
sequential a color palette for graphical markers that encode numeric values. Can be a vector of color codes or a `sequential_gradient()` object.
qualitative a color palette for graphical markers that encode qualitative values (won’t be used in ggplot2 when the number of data levels exceeds the max allowed colors). Defaults to `okabe_ito()`.
inherit should non-specified values inherit from the previous theme?
session see `shiny::onStop()`.
Value
`thematic_theme()` returns a theme object as a list (which can be activated with `thematic_with_theme()` or `thematic_set_theme()`).
`thematic_on()`, `thematic_off()`, and `thematic_shiny()` all return the previous global theme.
Auto theming
The bg, fg, accent, and font arguments all support a value of 'auto', which is resolved, at plot time, based on the execution environment. In a `shiny` runtime, resolution of auto values should always work as expected; but in other contexts, auto values may lead to wrong or surprising results. In that case, the auto resolution logic can be customized (see `auto_config_set()` for more details).
Global vs. local theming
`thematic_on()` enables thematic in a global fashion (that is, it impacts all future plots, up until `thematic_off()` is called). To use thematic in a local fashion, first create a theme with `thematic_theme()`, then provide it to `thematic_with_theme()` (or similar). To use thematic in a global fashion up until a `shiny` app exits, use `thematic_shiny()` (which cleans up after itself, via `shiny::onStop()`, once the next shiny app exits). To use thematic in a global fashion up until a `rmarkdown` document finishes rendering, use `thematic_rmd()`.
Color values
Colors (e.g., bg, fg, accent) may be any value understood by `col2rgb()` or `htmltools::parseCssColors()` (i.e., may be any valid R or CSS color string).
See Also
`sequential_gradient()`, `thematic_with_theme()`, `thematic_save_plot()`
Examples
# simple dark mode
thematic_on("black", "white")
plot(1:10)
plot(1:10, col = 1:10)
lattice::show.settings()
# use any hex color string
thematic_on("#444444", "#e4e4e4")
plot(1:10)
plot(1:10, col = 1:10)
lattice::show.settings()
# disables thematic (also restores global state)
thematic_off()
plot(1:10)
lattice::show.settings()
thematic_on("darkblue", "skyblue", "orange")
image(volcano)
image(volcano, col = thematic_get_option("sequential"))
lattice::show.settings()
thematic_off()
---
thematic_save_plot
Capture and save a thematic plot
Usage
default_device(type = c("png", "svg", "pdf", "tiff", "jpeg"))
Arguments
- **expr**: an expression that produces a plot.
- **device**: a graphics device to use for capturing the plot.
- **filename**: a filename for the produced plot. The file extension should match the relevant device.
- **type**: the type of output format
Value
thematic_save_plot() returns the filename of the produced plot and default_device() returns a graphics device function.
Examples
```r
library(thematic)
font <- font_spec("Rock Salt", scale = 1.25)
thematic_on("black", "white", font = font)
file <- thematic_save_plot(plot(1:10), res = 144)
if (interactive()) browseURL(file)
```
### thematic_with_theme
**Tools for getting and restoring global state**
**Description**
These functions are helpful for getting and/or temporarily activating a thematic_theme().
**Usage**
```r
thematic_with_theme(theme, expr)
thematic_local_theme(theme, .local_envir = parent.frame())
thematic_set_theme(theme)
thematic_get_theme(resolve = TRUE)
thematic_get_option(name = "", default = NULL, resolve = TRUE)
thematic_get_mixture(amounts = 0.5, default = NULL)
```
Arguments
theme a thematic_theme() object (or a return value of thematic_on/thematic_get_theme()) or NULL (in which case thematic_off() is called).
expr R code that produces a plot.
.local_envir The environment to use for scoping.
resolve whether or not 'auto' values should be resolved before returning
name a theme element name (e.g., fg, bg, etc.)
default a default value to return in the event no thematic theme is active.
amounts value(s) between 0 and 1 specifying how much to mix bg (0) and fg (1).
Value
the result of expr.
Functions
- thematic_with_theme: similar to thematic_on(), but for a single plot.
- thematic_local_theme: similar to thematic_with_theme(), but de-couples the theme from the plot expression.
- thematic_set_theme: set a given theme object as the current theme.
- thematic_get_theme: obtain the current theme.
- thematic_get_option: obtain a particular theme option (and provide a default if no theme is active).
- thematic_get_mixture: obtain a mixture of the current theme's bg and fg.
Examples
```r
# Use thematic_with_theme() for a one-time use of thematic
thematic_with_theme(
thematic_theme("darkblue", "skyblue", accent = "red"),
plot(1:10, col = thematic_get_option("accent"), pch = 19)
)
# Use thematic_set_theme() if doing something more complicated
# like programming on top of thematic (without causing side effects)
my_plot <- function(expr, las = 3, ...) {
old_theme <- thematic_on("black", "white")
on.exit(thematic_set_theme(old_theme), add = TRUE)
opts <- par(las = las)
on.exit(par(opts), add = TRUE)
# Imagine some more customization with ...
force(expr)
}
my_plot(plot(1:10))
thematic_off()
```
```r
thematic_get_option("bg", "white")
thematic_on(bg = "red")
thematic_get_option("bg", "white")
thematic_off()
thematic_with_theme(
thematic_theme("darkblue", "skyblue"),
scales::show_col(thematic_get_mixture(seq(0, 1, by = 0.1)))
)
```
DYNAMIC PROGRAMMING
- introduction
- Fibonacci numbers
- interview problems
- shortest paths in DAGs
- seam carving
https://algs4.cs.princeton.edu
Dynamic programming
Algorithm design paradigm.
- Break up a problem into a series of overlapping subproblems.
- Build up solutions to larger and larger subproblems.
(caching solutions to subproblems for later reuse)
Application areas.
- Operations research: multistage decision processes, control theory, optimization, ...
- Computer science: AI, compilers, systems, graphics, databases, robotics, theory, ....
- Economics.
- Bioinformatics.
- Information theory.
- Tech job interviews.
Bottom line. Powerful technique; broadly applicable.
Dynamic programming algorithms
Some famous examples.
- System R algorithm for optimal join order in relational databases.
- Needleman–Wunsch/Smith–Waterman for sequence alignment.
- Cocke–Kasami–Younger for parsing context-free grammars.
- Bellman–Ford–Moore for shortest path.
- De Boor for evaluating spline curves.
- Viterbi for hidden Markov models.
- Unix diff for comparing two files.
- Avidan–Shamir for seam carving.
- \textbf{NP}-complete graph problems on trees (vertex color, vertex cover, independent set, ...).
- ...
Dynamic programming books
DYNAMIC PROGRAMMING
- introduction
- Fibonacci numbers
- interview problems
- shortest paths in DAGs
- seam carving
Fibonacci numbers
**Fibonacci numbers.** 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
\[
F_i = \begin{cases}
0 & \text{if } i = 0 \\
1 & \text{if } i = 1 \\
F_{i-1} + F_{i-2} & \text{if } i > 1
\end{cases}
\]
Leonardo Fibonacci
Fibonacci numbers: naïve recursive approach
Fibonacci numbers. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
\[
F_i = \begin{cases}
0 & \text{if } i = 0 \\
1 & \text{if } i = 1 \\
F_{i-1} + F_{i-2} & \text{if } i > 1
\end{cases}
\]
Goal. Given \( n \), compute \( F_n \).
Naïve recursive approach:
```java
public static long fib(int i) {
if (i == 0) return 0;
if (i == 1) return 1;
return fib(i-1) + fib(i-2);
}
```
Dynamic programming: quiz 1
How long to compute fib(80) using the naïve recursive algorithm?
A. Less than 1 second.
B. About 1 minute.
C. More than 1 hour.
D. Overflows a 64-bit long integer.
Fibonacci numbers: recursion tree and exponential growth
**Exponential waste.** Same overlapping subproblems are solved repeatedly.
**Ex.** To compute $\text{fib}(6)$:
- $\text{fib}(5)$ is called 1 time.
- $\text{fib}(4)$ is called 2 times.
- $\text{fib}(3)$ is called 3 times.
- $\text{fib}(2)$ is called 5 times.
- $\text{fib}(1)$ is called $F_n = F_6 = 8$ times.
\[ F_n \sim \phi^n, \quad \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618 \]
![Recursion tree diagram]
**running time = # subproblems × cost per subproblem**
Fibonacci numbers: top-down dynamic programming
**Memoization.**
- Maintain an array (or symbol table) to remember all computed values.
- If value to compute is known, just return it;
otherwise, compute it; remember it; and return it.
```java
public static long fib(int i) {
if (i == 0) return 0;
if (i == 1) return 1;
if (f[i] == 0) f[i] = fib(i-1) + fib(i-2);
return f[i];
}
```
assume global long array f[], initialized to 0 (unknown)
**Impact.** Solves each subproblem \( F_i \) only once; \( \Theta(n) \) time to compute \( F_n \).
Fibonacci numbers: bottom-up dynamic programming
Bottom-up dynamic programming.
- Build computation from the “bottom up.”
- Solve small subproblems and save solutions.
- Use those solutions to solve larger subproblems.
```java
public static long fib(int n) {
long[] f = new long[n+1];
f[0] = 0;
f[1] = 1;
for (int i = 2; i <= n; i++)
f[i] = f[i-1] + f[i-2];
return f[n];
}
```
Impact. Solves each subproblem $F_i$ only once; $\Theta(n)$ time to compute $F_n$; no recursion.
Fibonacci numbers: further improvements
Performance improvements.
- Save space by saving only two most recent Fibonacci numbers.
```java
public static long fib(int n) {
int f = 0, g = 1;
for (int i = 0; i < n; i++) {
g = f + g;
f = g - f;
}
return f;
}
```
- Exploit additional properties of problem:
\[
F_n = \left[ \frac{\phi^n}{\sqrt{5}} \right], \quad \phi = \frac{1 + \sqrt{5}}{2}
\]
(where $[\,\cdot\,]$ denotes rounding to the nearest integer)
\[
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}
\]
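As an aside (not on the original slides), the matrix identity above yields a $\Theta(\log n)$ algorithm for $F_n$ via repeated squaring; a minimal Java sketch (class name `FibMatrix` is illustrative):

```java
public class FibMatrix {
    // Compute F(n) via the 2x2 matrix identity
    //   [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    // using repeated squaring: Theta(log n) matrix multiplications.
    public static long fib(int n) {
        long[] result = {1, 0, 0, 1};   // 2x2 identity, row-major
        long[] base   = {1, 1, 1, 0};   // the matrix [[1,1],[1,0]]
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, base);
            base = mul(base, base);     // square the base each round
            n >>= 1;
        }
        return result[1];               // entry (0,1) holds F(n)
    }

    private static long[] mul(long[] x, long[] y) {
        return new long[] {
            x[0]*y[0] + x[1]*y[2], x[0]*y[1] + x[1]*y[3],
            x[2]*y[0] + x[3]*y[2], x[2]*y[1] + x[3]*y[3]
        };
    }
}
```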
Dynamic programming recap
Dynamic programming.
- Divide a complex problem into a number of simpler overlapping subproblems.
[ define $n + 1$ subproblems, where subproblem $i$ is computing the $i^{th}$ Fibonacci number ]
- Define a recurrence relation to solve larger subproblems from smaller subproblems.
[ easy to solve subproblem $i$ if we know solutions to subproblems $i – 1$ and $i – 2$ ]
$$F_i = \begin{cases}
0 & \text{if } i = 0 \\
1 & \text{if } i = 1 \\
F_{i-1} + F_{i-2} & \text{if } i > 1
\end{cases}$$
- Store solutions to each of these subproblems, solving each subproblem only once.
[ use an array, storing subproblem $i$ in $f[i]$ ]
- Use stored solutions to solve the original problem.
[ subproblem $n$ is original problem ]
Dynamic Programming
- introduction
- Fibonacci numbers
- interview problems
- shortest paths in DAGs
- seam carving
Goal. Install WiFi routers in a row of $n$ houses so that:
- Minimize total cost, where $\text{cost}(i) = \text{cost to install a router at house } i$.
- Requirement: no two consecutive houses without a router.
\[
\begin{array}{c|cccccc}
i & 1 & 2 & 3 & 4 & 5 & 6 \\
\text{cost}(i) & 1 & 4 & 12 & 8 & 9 & 11 \\
\end{array}
\]
cost to install router at house $i$
Ex. Install routers at houses 2, 4, and 5 $(4 + 8 + 9 = 21)$.
**Router Installation Problem:** Dynamic Programming Formulation
**Goal.** Install WiFi routers in a row of $n$ houses so that:
- Minimize total cost, where $cost(i) =$ cost to install a router at house $i$.
- Requirement: no two consecutive houses without a router.
**Subproblems.**
- $yes(i) = \min \text{ cost to install router at houses } 1, \ldots, i \text{ with router at } i.$
- $no(i) = \min \text{ cost to install router at houses } 1, \ldots, i \text{ with no router at } i.$
- Optimal cost = $\min \{ yes(n), no(n) \}.$
**Dynamic programming recurrence.**
- $yes(0) = no(0) = 0$
- $yes(i) = cost(i) + \min \{ yes(i - 1), no(i - 1) \}$
- $no(i) = yes(i - 1)$
“optimal substructure”
(optimal solution can be constructed from optimal solutions to smaller subproblems)
A mutually recursive implementation.
```java
private int yes(int i)
{
if (i == 0) return 0;
return cost[i] + Math.min(yes(i-1), no(i-1)); // yes(i) = cost(i) + min { yes(i-1), no(i-1) }
}
private int no(int i)
{
if (i == 0) return 0;
return yes(i-1); // no(i) = yes(i-1)
}
public int minCost()
{
return Math.min(yes(n), no(n));
}
```
Dynamic programming: quiz 2
What is the running time of the naïve recursive algorithm as a function of $n$?
A. \( \Theta(n) \)
B. \( \Theta(n^2) \)
C. \( \Theta(c^n) \) for some \( c > 1 \).
D. \( \Theta(n!) \)
“Those who cannot remember the past are condemned to repeat it.”
— Dynamic Programming
(Jorge Agustín Nicolás Ruiz de Santayana y Borrás)
**Router Installation: Bottom-up Implementation**
Bottom-up DP implementation.
```java
int[] yes = new int[n+1];
int[] no = new int[n+1];
for (int i = 1; i <= n; i++)
{
yes[i] = cost[i] + Math.min(yes[i-1], no[i-1]);
no[i] = yes[i-1];
}
return Math.min(yes[n], no[n]);
```
**Proposition.** Takes $\Theta(n)$ time and uses $\Theta(n)$ extra space.
**Remark.** Could eliminate the `no[]` array by substituting identity `no[k] = yes[k-1]`.
So far: we’ve computed the value of the optimal solution.
Still need: the solution itself (where to install routers).
\[
\begin{array}{cccccccc}
i & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\
yes(i) & 0 & 1 & 4 & 13 & 12 & 21 & 23 \\
no(i) & 0 & 0 & 1 & 4 & 13 & 12 & 21 \\
\end{array}
\]
yes(i) = cost to install routers at houses 1, 2, …, i with router at house i
no(i) = cost to install routers at houses 1, 2, …, i with router not at house i
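To recover the routers themselves, one option (a sketch, not from the slides; the class and method names are illustrative) is to fill the `yes[]`/`no[]` tables bottom-up and then backtrace from the cheaper final state, re-checking which term won each `min`:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RouterInstall {
    // Fill yes[]/no[] bottom-up, then walk backwards from the cheaper of
    // yes[n], no[n] to recover an optimal set of houses (1-indexed).
    public static List<Integer> optimalRouters(int[] cost) {
        int n = cost.length - 1;        // cost[1..n]; cost[0] unused
        int[] yes = new int[n + 1];
        int[] no  = new int[n + 1];
        for (int i = 1; i <= n; i++) {
            yes[i] = cost[i] + Math.min(yes[i - 1], no[i - 1]);
            no[i]  = yes[i - 1];
        }
        List<Integer> houses = new ArrayList<>();
        boolean atYes = yes[n] <= no[n];        // cheaper final state
        for (int i = n; i > 0; i--) {
            if (atYes) {
                houses.add(i);                  // router at house i
                atYes = yes[i - 1] <= no[i - 1]; // which min term was taken
            } else {
                atYes = true;                   // no(i) = yes(i-1)
            }
        }
        Collections.reverse(houses);
        return houses;
    }
}
```

On the table above (costs 1, 4, 12, 8, 9, 11) the backtrace yields houses 2, 4, 5 with total cost 21.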
Coin Changing
Problem. Given $n$ coin denominations $\{d_1, d_2, \ldots, d_n\}$ and a target value $V$, find the fewest coins needed to make change for $V$ (or report impossible).
Ex. Coin denominations = $\{1, 10, 25, 100\}$, $V = 131\text{¢}$.
Greedy (8 coins). $131\text{¢} = 100 + 25 + 1 + 1 + 1 + 1 + 1 + 1$.
Optimal (5 coins). $131\text{¢} = 100 + 10 + 10 + 10 + 1$.
Remark. Greedy algorithm is optimal for U.S. coin denominations $\{1, 5, 10, 25, 100\}$.
vending machine (out of nickels)
**Coin Changing: Dynamic Programming Formulation**
**Problem.** Given $n$ coin denominations $\{d_1, d_2, \ldots, d_n\}$ and a target value $V$, find the fewest coins needed to make change for $V$ (or report impossible).
**Subproblems.** $OPT(v) =$ fewest coins needed to make change for amount $v$.
**Optimal value.** $OPT(V)$.
**Multiway choice.** To compute $OPT(v)$,
- Select a coin of denomination $d_i \leq v$ for some $i$.
- Use fewest coins to make change for $v - d_i$.
**Dynamic programming recurrence.**
$$OPT(v) = \begin{cases} 0 & \text{if } v = 0 \\ \min \limits_{i : d_i \leq v} \{ 1 + OPT(v - d_i) \} & \text{if } v > 0 \end{cases}$$
Dynamic programming: quiz 3
In which order should bottom-up DP compute \( OPT(v) \)?
A. Increasing \( v \): `for (int v = 1; v <= V; v++) opt[v] = ...`
B. Decreasing \( v \): `for (int v = V; v >= 1; v--) opt[v] = ...`
C. Either A or B.
D. Neither A nor B.
\[
OPT(v) = \begin{cases}
0 & \text{if } v = 0 \\
\min_{i: d_i \leq v} \{ 1 + OPT(v - d_i) \} & \text{if } v > 0
\end{cases}
\]
**Coin Changing: Bottom-Up Implementation**
Bottom-up DP implementation.
```java
int[] opt = new int[V+1];
opt[0] = 0;
for (int v = 1; v <= V; v++)
{
// opt[v] = min_i { 1 + opt[v - d[i]] }
opt[v] = INFINITY;
for (int i = 1; i <= n; i++)
{
if (d[i] <= v)
opt[v] = Math.min(opt[v], 1 + opt[v - d[i]]);
}
}
```
\[
OPT(v) = \begin{cases}
0 & \text{if } v = 0 \\
\min_{i : d_i \leq v} \{ 1 + OPT(v - d_i) \} & \text{if } v > 0
\end{cases}
\]
**Proposition.** DP algorithm takes \(\Theta(n V)\) time and uses \(\Theta(V)\) extra space.
**Note.** Not polynomial in input size; underlying problem is \(\textbf{NP}\)-complete.
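Putting the pieces together, a self-contained sketch (illustrative names; not from the slides) that also reconstructs which coins were used, by recording in a `choice[]` array the denomination taken at each amount:

```java
public class CoinChange {
    static final int INFINITY = Integer.MAX_VALUE / 2; // avoid overflow on +1

    // Bottom-up DP over amounts 0..V; choice[v] remembers the denomination
    // used at amount v so an optimal coin set can be read back afterwards.
    public static int[] fewestCoins(int[] d, int V) {
        int[] opt = new int[V + 1];
        int[] choice = new int[V + 1];
        for (int v = 1; v <= V; v++) {
            opt[v] = INFINITY;
            for (int di : d) {
                if (di <= v && 1 + opt[v - di] < opt[v]) {
                    opt[v] = 1 + opt[v - di];
                    choice[v] = di;
                }
            }
        }
        if (opt[V] >= INFINITY) return new int[0];   // impossible
        int[] coins = new int[opt[V]];
        for (int v = V, k = 0; v > 0; v -= choice[v], k++)
            coins[k] = choice[v];                    // backtrace choices
        return coins;
    }
}
```

For the example above (denominations {1, 10, 25, 100}, V = 131¢) this returns 5 coins summing to 131.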
DYNAMIC PROGRAMMING
- introduction
- Fibonacci numbers
- interview problems
- shortest paths in DAGs
- seam carving
Shortest paths in directed acyclic graphs: dynamic programming formulation
**Problem.** Given a DAG with positive edge weights, find shortest path from $s$ to $t$.
**Subproblems.** $\text{distTo}(v) =$ length of shortest $s \rightarrow v$ path.
**Goal.** $\text{distTo}(t)$.
**Multiway choice.** To compute $\text{distTo}(v)$:
- Select an edge $e = u \rightarrow v$ entering $v$.
- Combine with shortest $s \rightarrow u$ path.
**Dynamic programming recurrence.**
$$
\text{distTo}(v) = \begin{cases}
0 & \text{if } v = s \\
\min_{e = u \rightarrow v} \{ \text{distTo}(u) + \text{weight}(e) \} & \text{if } v \neq s
\end{cases}
$$
Shortest paths in directed acyclic graphs: bottom-up solution
**Bottom-up DP algorithm.** Takes $\Theta(E + V)$ time with two tricks:
- Solve subproblems in **topological order** (ensures that "small" subproblems are solved before "large" ones).
- Form the reverse digraph $G^R$ (to support iterating over edges incident to vertex $v$).
**Equivalent (but simpler) computation.** Relax vertices in topological order.
```java
Topological topological = new Topological(G);
for (int v : topological.order())
for (DirectedEdge e : G.adj(v))
relax(e);
```
**Remark.** Can find the shortest paths themselves by maintaining `edgeTo[]` array.
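Without the algs4 `Topological` and `DirectedEdge` classes, the same relaxation can be sketched self-contained; for brevity this sketch assumes vertices are already numbered so that $0, 1, \ldots, n-1$ is a topological order (edges given as `{from, to, weight}` triples):

```java
import java.util.Arrays;

public class DagSP {
    // Relax every edge out of each vertex, visiting vertices in
    // topological order (assumed here to be 0, 1, ..., n-1).
    public static int[] distTo(int n, int[][] edges, int s) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE / 2);  // "infinity", no overflow
        dist[s] = 0;
        for (int v = 0; v < n; v++)                // topological order
            for (int[] e : edges)
                if (e[0] == v && dist[v] + e[2] < dist[e[1]])
                    dist[e[1]] = dist[v] + e[2];   // relax edge v -> e[1]
        return dist;
    }
}
```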
Given a DAG, how to find **longest path** from $s$ to $t$ in $\Theta(E + V)$ time?
A. Negate edge weights; use DP algorithm to find shortest path.
B. Replace $\min$ with $\max$ in DP recurrence.
C. Either A or B.
D. No poly-time algorithm is known (NP-complete).
Shortest paths in DAGs and dynamic programming
DP subproblem dependency digraph.
- Vertex $v$ for each subproblem $v$.
- Edge $v \rightarrow w$, if subproblem $v$ must be solved before subproblem $w$.
- Digraph must be a DAG. Why?
Ex 1. Modeling the coin changing problem as a shortest path problem in a DAG.
V = 10; coin denominations = \{ 1, 5, 8 \}
Shortest paths in DAGs and dynamic programming
**DP subproblem dependency digraph.**
- Vertex $v$ for each subproblem $v$.
- Edge $v \rightarrow w$, if subproblem $v$ must be solved before subproblem $w$.
- Digraph must be a DAG. Why?
**Ex 2.** Modeling the router installation problem as a shortest path problem in a DAG.
- introduction
- Fibonacci numbers
- interview problems
- shortest paths in DAGs
- seam carving
Content-aware resizing
Seam carving. [Avidan–Shamir] Resize an image without distortion for display on cell phones and web browsers.
https://www.youtube.com/watch?v=vlFCV2spKtg
Content-aware resizing
**Seam carving.** [Avidan–Shamir] Resize an image without distortion for display on cell phones and web browsers.
---
**In the wild.** Photoshop, ImageMagick, GIMP, ...
Content-aware resizing
To find vertical seam in a picture:
- Grid graph: vertex = pixel; edge = from pixel to 3 downward neighbors.
- Weight of pixel = “energy function” of 8 neighboring pixels.
- Seam = shortest path (sum of vertex weights) from top to bottom.
To remove vertical seam in a picture:
- Delete pixels on seam (one in each row).
Content-aware resizing: dynamic programming formulation
**Problem.** Find a min energy path from top to bottom.
**Subproblems.** \( \text{distTo}(col, row) = \text{energy of min energy path from any top pixel to pixel } (col, row). \)
**Goal.** \( \min \{ \text{distTo}(col, H-1) \}. \)
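A bottom-up sketch of this recurrence (illustrative, not from the slides; the energy values are assumed precomputed, indexed `energy[row][col]`):

```java
public class SeamCarving {
    // distTo(col, row) = energy(col, row) + min over the three upward
    // neighbors; the answer is the min over the bottom row.
    public static int minSeamEnergy(int[][] energy) {
        int H = energy.length, W = energy[0].length;
        int[][] dist = new int[H][W];
        for (int col = 0; col < W; col++)
            dist[0][col] = energy[0][col];          // top row: base case
        for (int row = 1; row < H; row++) {
            for (int col = 0; col < W; col++) {
                int best = dist[row - 1][col];      // neighbor straight up
                if (col > 0)     best = Math.min(best, dist[row - 1][col - 1]);
                if (col < W - 1) best = Math.min(best, dist[row - 1][col + 1]);
                dist[row][col] = energy[row][col] + best;
            }
        }
        int min = Integer.MAX_VALUE;
        for (int col = 0; col < W; col++)
            min = Math.min(min, dist[H - 1][col]);  // goal: min over bottom row
        return min;
    }
}
```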
Summary
How to design a dynamic programming algorithm.
- Find good subproblems.
- Develop DP recurrence for optimal value.
- optimal substructure
- overlapping subproblems
- Determine order in which to solve subproblems.
- Cache computed results to avoid unnecessary re-computation.
- Reconstruct the solution: backtrace or save extra state.
Are your lights off? Using problem frames to diagnose system failures
Conference or Workshop Item
© 2009 IEEE
Version: Version of Record
Are Your Lights Off?
Using Problem Frames to Diagnose System Failures*
Thein Than Tun\textsuperscript{1,2}, Michael Jackson\textsuperscript{2}, Robin Laney\textsuperscript{2}, Bashar Nuseibeh\textsuperscript{2}, Yijun Yu\textsuperscript{2}
\textsuperscript{1}PReCISE Research Centre, Faculty of Computer Science, University of Namur, Belgium
\textsuperscript{2}Department of Computing, The Open University, UK
ttu@info.fundp.ac.be \{m.jackson, r.c.laney, b.nuseibeh, y.yu\}@open.ac.uk
Abstract
This paper reports on our experience of investigating the role of software systems in the power blackout that affected parts of the United States and Canada on 14 August 2003. Based on a detailed study of the official report on the blackout, our investigation has aimed to bring out requirements engineering lessons that can inform development practices for dependable software systems. Since the causes of failures are typically rooted in the complex structures of software systems and their world contexts, we have deployed and evaluated a framework that looks beyond the scope of software and into its physical context, directing attention to places in the system structures where failures are likely to occur. We report that (i) Problem Frames were effective in diagnosing the causes of failures and documenting the causes in a schematic and accessible way, and (ii) errors in addressing the concerns of biddable domains, model building problems, and monitoring problems had contributed to the blackout.
1 Introduction
In mature branches of engineering, failures and “the role played by reaction to and anticipation of failure” are regarded as essential for achieving design success [11]. Identification of the causes of past system failures, organisation and documentation of them in a way accessible by engineers within an engineering community, and application of knowledge of failures when designing future systems, all play a central role in establishing “normal design” practices [15]. Although there have been several excellent reports on high-profile system failures involving software systems [5, 7, 9], development practices for dependable systems have not exploited input from incident or accident investigations in a systematic way [2]. This work is a small step in the direction to address the gap.
Requirements Engineering (RE) is concerned with defining the behaviour of required systems, and any error introduced or prevented early in the development significantly contributes to the system dependability. In this respect, RE has a valuable role to play in systematising and documenting causes of past failures, and utilising this systematised knowledge in the development of future systems. In the same way that system failures can be attributed to programming, design, and human/operational errors, it is possible to attribute certain failures to RE errors. RE errors may be due to missing requirements, incorrect assumptions about the problem context, weak formulation of requirements and unexpected interactions between requirements.
Although the broader context—such as the organisational settings, regulatory regimes and market forces—often plays an important role in failures, we deliberately focus on the role of the software system in its physical context in order to bring out clear lessons for requirements engineers. Therefore, a framework is needed for investigating failures, which looks beyond the scope of software and into its physical context, and directs attention to places in the system structures where failures are likely to occur.
In this paper, we report on our experience of using Problem Frames [4] to identify, organise and document knowledge about the causes of past system failures. In the Problem Frames framework, potential causes of failures—known as "concerns"—are named and associated with a specific pattern of problem structure, a style of problem composition, a type of problem world domain, the requirement and the specification. An instantiation of a pattern, for instance, will immediately raise the need to address certain concerns in the system structures. This is, in a sense, similar to generating "verification conditions" for a program in order to prove its correctness with respect to the specification [1]. In this case, concerns raised will have to be discharged by requirements engineers, perhaps in collaboration with other stakeholders.
The rest of the paper is organised as follows. Section 2 gives an overview of the power blackout case study, the methodology used in the investigation, and some of the key principles of Problem Frames. The role of the software systems in the blackout is described and analysed in Section 3. Related work is discussed in Section 4. Section 5 summarises the findings.
2 Preliminaries
This section provides an overview of our case study, the research methodology used to investigate the failures, the conceptual framework of Problem Frames, and the expected outcome of our study.
2.1 2003 US-Canada Electricity Blackout
The electricity blackout that occurred on 14 August, 2003 in large parts of the Midwest and Northeast United States and Ontario, Canada, affected around 50 million people, according to the official report by the U.S.–Canada Power System Outage Task Force [14]. The outage began around 16:00 EDT (Eastern Daylight Time), and power was not fully restored for several days in some parts of the United States. The effect of the outage could be seen in satellite images of North America, whilst financial losses reportedly ran into billions of US dollars. The official report concluded that “this blackout could have been prevented”, and software failures leading to the operator’s reliance on outdated information was identified as one of the two “most important causes” of the blackout [14, p. 46].
2.2 Methodology
Investigating real-life system failures is difficult not least because of the size and complexity of these systems and limited availability of verifiable information about the failures and the systems involved [5]. Even when it is possible to master these difficulties, it is still a challenge to locate exactly when in the development an error was introduced [10]. The official report makes clear that factors such as the sagging of power lines, overgrown trees, poor communication, and lack of personnel training all contributed to the blackout.
Since our interest was to learn RE lessons, our methodology for investigating failures examined the chain of events leading up to the failure, and isolated the role of software systems in the failure. We ascertained what the components of the system did, what they should have done, and how it would have been possible to identify the causes at the RE stage. Therefore, a framework was needed that allowed us to structure the potential causes of failures in a schematic way.
2.3 Problem Frames
The Problem Frames framework [4] is based on certain principles, four of which are relevant to the discussion. First, the framework encourages a systematic separation of descriptions into requirements, problem world context and specifications. For example, Figure 1 shows a high-level description of a type of software problem known as Commanded Behaviour Frame. In this problem, a software system, Control Machine, is required to apply control on a domain in the physical world, the Controlled Domain, according to the commands of a human agent, the Operator. Exactly how the Controlled Domain should behave, or what property it must have, when the Operator issues commands is described by the Commanded Behaviour Requirement. Therefore the requirement states the relationship between the operator command OCommand at the interface a,O, and the behaviour and property of the controlled domain CDBehaviour and CDProperty at the interface a,CD.
Description of the operator behaviour is concerned with the relationship between OInput at the interface b,O and OCommand at the interface a,O, namely what input the operator produces when a command is issued. Similarly, description of the Controlled Domain is concerned with the relationship between CMAction at the interface a,CM and CDBehaviour and CDProperty at the interface a,CD, namely what behaviour or property the controlled domain produces when machine actions are performed. The Operator and the Controlled Domain constitute the problem world context of the Control Machine. The specification, description of the Control Machine, is concerned with the relationship between OInput at the interface b,O and CMAction at the interface a,CM, namely what actions the machine must perform when operator input is observed.
The operator may be a lift user and the controlled domain, a lift. The requirement will state how the lift should behave when the lift user issues commands. The specification will state what operations the lift controller will perform when the operator input is received.
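The composition of the three descriptions can be sketched in code. This is a hypothetical illustration of the lift instantiation of the frame; all names (LiftController, the command and action strings) are our own, not part of the framework.

```python
# Illustrative sketch of the Commanded Behaviour frame for the lift example.
# All names (LiftController, commands, actions) are hypothetical.

# Problem world description: what input the operator produces per command
# (the OCommand/OInput relationship at interfaces a,O and b,O).
OPERATOR = {"go_up": "press_up_button", "go_down": "press_down_button"}

# Specification: what action the machine performs for each observed input
# (the OInput/CMAction relationship).
class LiftController:
    def action_for(self, operator_input):
        mapping = {"press_up_button": "motor_up", "press_down_button": "motor_down"}
        return mapping.get(operator_input, "motor_stop")

# Problem world description: what behaviour the controlled domain exhibits
# for each machine action (the CMAction/CDBehaviour relationship).
LIFT = {"motor_up": "ascending", "motor_down": "descending", "motor_stop": "stationary"}

def behaviour_for_command(command):
    """Compose the three descriptions: command -> input -> action -> behaviour."""
    machine = LiftController()
    return LIFT[machine.action_for(OPERATOR[command])]
```

Composing the three descriptions in this way lets one check the requirement end to end: the behaviour obtained for each operator command should match the commanded behaviour.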
Second, this framework emphasises the need to understand the physical structure of the problem world context, and the behaviour of the domains involved. Third, the framework is based on recurring patterns of software problems, called frames. Each frame captures “concerns” of a certain type of software problems. For instance, the main concern of the “Commanded Behaviour” frame is to ensure that the system obeys the operator commands in imposing control on the behaviour of the system. An instantiation of a frame implies generation of certain conditions that need to be discharged.
Fourth, the framework provides a rich scheme for categorising and recording causes of failures. For instance, there are concerns specific to problem world domains, such as reliability, identity and breakage; there are frame concerns such as that of the required behaviour frame; and there are composition concerns such as conflict, consistency and synchronisation.
Therefore, we hypothesised that the Problem Frames framework provides an appropriate foundation for diagnosing failures involving software systems.
2.4 Expected Outcomes
There were two expected outcomes of this study. First, to establish whether Problem Frames are appropriate for investigating systems failures in terms of (i) locating causes of failure in the system structures, and (ii) recording them in a schematic way accessible by engineers within a community. Second, to identify causes of the blackout and either confirm them as known concerns or expand the repertoire of existing concerns by recording them schematically.
3 The Case Study
We now discuss two software-related failures that contributed significantly to the blackout. We briefly recount the chain of events leading to the blackout before discussing how Problem Frames were applied to diagnose the causes of failures and record the causes of failures.
3.1 Problem #1: State Estimator and Real Time Contingency Analysis
The infrastructure of the electric system is large and complex, comprising many power generation stations, transformers, transmission lines, and individual and industrial customers. Providing reliable electricity through "real-time assessment, control and coordination of electricity production at thousands of generators, moving electricity across an interconnected network of transmission lines, and ultimately delivering the electricity to millions of customers" is a major technical challenge [14].
Reliability coordinators and control operators use complex monitoring systems to collect data about the status of the power network. In addition, they use a system called State Estimator (SE) to improve the accuracy of the collected data against the mathematical model of the power production and usage. When the divergence between the actual and predicted model of power production and usage is large, State Estimator will “produce a solution with a high mismatch”. Information from the improved model is then used by various software tools, including Real Time Contingency Analysis (RTCA), to evaluate the reliability of the power system, and alert operators when necessary, for instance when the power production is critically low. This evaluation can be done periodically or on demand of the operator.
“On August 14 at about 12:15 EDT, MISO’s [Midwest Independent System Operator] state estimator produced a solution with a high mismatch […] To troubleshoot this problem the analyst had turned off the automatic trigger that runs the state estimator every five minutes. After fixing the problem he forgot to re-enable it […] Thinking the system had been successfully restored, the analyst went to lunch. The fact that the state estimator was not running automatically on its regular 5-minute schedule was discovered about 14:40 EDT.”
When the automatic trigger was subsequently re-enabled, the state estimator produced a solution with a high mismatch due to further developments on the network. The official report assesses the situation as follows.
“In summary, the MISO state estimator and real time contingency analysis tools were effectively out of service between 12:15 EDT and 16:04 EDT. This prevented MISO from promptly performing precontingency “early warning” assessments of power system reliability over the afternoon of August 14.”
3.1.1 Problem Analysis
Based on this information, we constructed several problem diagrams to analyse relationships between the problem world domains mentioned in the description. Figure 2 shows a composite of two problem diagrams.
The problem of State Estimator is to produce Revised-Data for the Improved Electrical System Model of the grid, based on StatusData, and Estimates produced by the Mathematical Model. In Problem Frames, this type of problem is known as a “model building problem”. The problem of RTCA System is to examine Revised-Data and raise appropriate alerts on the Display Screen used by the Operator. This type of problem is known as an “information display problem”.
3.1.2 A Requirements Engineering Error?
On August 14, when the SE could not produce a consistent model, the operator turned off the automatic trigger of the SE in order to carry out maintenance work. Figure 3 shows the problem diagram, where the Maintenance Engineer uses the machine SE Trigger to turn on or turn off the State Estimator. This problem fits the Commanded Behaviour frame shown in Figure 1. Part of the requirement here is to ensure that when the engineer issues the command OffNow, the SE should cease running.
When the maintenance work was done, the engineer forgot to re-enable the SE, leaving the electrical system model which the operators rely on, outdated. The resulting reliance by the operator on the outdated information was a significant contributing factor.
Clearly, the maintenance engineer should not have forgotten to re-engage the monitoring systems, and as a result, the problem would not have arisen. However, there is more to the problem than this being a “human error”. Perhaps the fallibility of human operators should have been better recognised in the system’s model of the world context.
3.1.3 Naming and Categorising Concerns
A key part of the problem is the requirement that says that the operator commands always have precedence over the system actions. This requirement relies on the world assumption that the biddable domain—i.e., a human agent such as the maintenance engineer—always gives the correct commands. However, the Commanded Behaviour frame recognises that the operator is a biddable domain, whose behaviour is non-causal and may not be reliable. Therefore, the operator always giving the correct command may be too strong a condition to discharge. This gives rise to two concerns: one related to the biddable domain and the other, related to the Commanded Behaviour frame.
We will call the concern related to the biddable domain the reminder concern, which raises the following conditions to discharge: (i) Whenever the biddable domain overrides the system operations, which system domain(s) should be reminded about the override? (ii) How long should the override last? (iii) What happens when the length of time expires? In the case of the blackout, this may be translated into a requirement that says (i) whenever the SE has stopped, the system should remind the operator of the SE status and how long it has had that status, and (ii) at the end of a maintenance procedure, the system should remind the engineer of the SE status. Such a reminder could make the engineer’s behaviour more reliable and perhaps could have helped prevent the failure.
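As a hypothetical illustration of how the reminder concern might be discharged (the function name, the 5-minute reminder interval, the 30-minute override limit, and the message texts are our own assumptions, not taken from the report), the conditions above could translate into logic along these lines:

```python
# Hypothetical sketch of the "reminder concern": while a biddable domain has
# overridden the system (e.g. the SE automatic trigger is off), periodically
# remind the operator, and escalate once the allowed override duration expires.
# All names, durations and messages are illustrative assumptions.

def reminders_due(override_started_at, now, interval=300, max_override=1800):
    """Return the messages due at time `now` (seconds) for an override begun
    at `override_started_at`: a periodic reminder every `interval` seconds,
    and an escalation once `max_override` seconds have elapsed."""
    elapsed = now - override_started_at
    messages = []
    if elapsed >= max_override:
        messages.append("override expired: re-enable the state estimator trigger")
    elif elapsed > 0 and elapsed % interval == 0:
        messages.append(f"state estimator disabled for {elapsed // 60} minutes")
    return messages
```

The sketch answers the three conditions directly: the operator is the domain reminded, `max_override` bounds how long the override may last, and the escalation message is what happens when that time expires.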
A concern related to the Commanded Behaviour frame is whether the system should ignore the operator commands and take control of the system under certain circumstances. We will call this the system precedence concern. This may mean that the system should monitor the actions by the biddable domain, and intervene when the domain does not seem to be reliable. In that case, the requirement should be formulated as follows: Whenever maintenance work is thought to have been completed, the automatic trigger should be enabled.
Another key part of the problem is related to the issue of fault-tolerance in information display: What happens when the input the system receives from the analogous model is unexpected? This may be due to an incorrect data type or an untimely input from the analogous model. We will call this the outdated information concern. Pertinent questions in this case are: 1) Can RTCA know that the Improved Electrical System Model is outdated? 2) What should it do about it? Had requirements engineers asked such questions, it could have led to requirements such as "The Improved Electrical System Model must have a timestamp of when it was last updated successfully" and "If the Improved Electrical System Model is older than 30 minutes, the RTCA system should alert the operator that the electrical system model is now outdated". This would at least warn the operator not to rely on the information provided by the improved electrical system model.
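The example requirement above amounts to a staleness check on the model's timestamp. A minimal sketch, assuming a hypothetical function name and timestamps measured in minutes (the 30-minute threshold follows the example requirement in the text):

```python
# Sketch of the "outdated information" requirement: alert the operator when
# the improved electrical system model has not been updated within a
# freshness window. The function and its time representation are illustrative
# assumptions; only the 30-minute threshold comes from the example requirement.

def model_status(last_updated, now, max_age_minutes=30):
    """Return 'fresh', or an operator alert if the model is older than
    `max_age_minutes` (timestamps given in minutes)."""
    if now - last_updated > max_age_minutes:
        return "alert: electrical system model is outdated"
    return "fresh"
```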
3.2 Problem #2: Alarm and Event Processing Routine (AEPR) System
Another significant cause of the blackout was due, in part, to the Alarm and Event Processing Routine (AEPR) system, “a key software program that gives grid operators visual and audible indications of events occurring on their portion of the grid” [14].
“Alarms are a critical function of an EMS [Energy Management System], and EMS-generated alarms are the fundamental means by which system operators identify events on the power system that need their attention. If an EMS’s alarms are absent, but operators are aware of the situation and the remainder of the EMS’s functions are intact, the operators can potentially continue to use the EMS to monitor and exercise control of their power system. In the same way that an alarm system can inform operators about the failure of key grid facilities, it can also be set up to warn them if the alarm system itself fails to perform properly. FE’s EMS did not have such a notification system.”
The problem of alerting the Grid Operator of the grid status, ascertained from the Grid & Sensors is shown in Figure 4. This problem fits a type of problem known as the Information Display Frame. The requirement is to raise a separate alarm to the operator (GOAlertedGrid) if and only if there are events on the grid that threaten the system reliability (GridOK): ¬GridOK ↔ GOAlertedGrid. The specification of AEPR could be to raise an alert (RaiseAlert) if and only if danger is detected on the grid (DangerDetected): DangerDetected ↔ RaiseAlert. In the case study, the AEPR system failed silently, leading the operators to continue to rely on outdated information, and was one of “the most important causes” of the blackout.
3.2.1 A Requirements Engineering Error?
The official report is very clear about the fact that there was a missing requirement “to monitor the status of EMS and report it to the system operators.” The British Standard 5839 on fire detection and fire alarm systems [12] is also concerned with monitoring systems, and anticipates such a requirement. Since fire alarms may fail when electricity is disconnected, the standard requires that alarms are fitted with a secondary independent source of power. In addition, when the source of power is switched from the primary to secondary source, the system should raise an alarm.
3.2.2 Naming and Categorising Concerns
The cause of this failure can be called a silent failure of alarm systems. Addressing this concern could raise questions such as: What happens if AEPR fails silently? Is it possible to detect such failures? What should be done when such failures are detected? This could have led the designers to the requirement that the system should monitor the behaviour of AEPR and raise an additional alarm when AEPR is thought to have failed. Figure 5 shows a problem diagram in which a wrapper intercepts the input to and output from the AEPR and, when AEPR fails to respond as expected, raises a separate alarm (GOAlertedAEPR). The wrapper AEPR Monitor can pass on danger detection from the grid to AEPR (DangerDetected@bGS ↔ DangerDetected@bA'AM) and pass on the alert trigger from AEPR to the grid operator (RaiseAlert@aA ↔ RaiseAlert@aA'AM). Then the requirement to alert on silent failure of AEPR is ¬GridOK ∧ ¬GOAlertedGrid ↔ GOAlertedAEPR. The specification for AEPR Monitor is DangerDetected@bGS ∧ ¬RaiseAlert@aA'AM → RaiseSecondaryAlert@aA'AM. An implementation of such a specification could have prevented the failure.
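The wrapper's specification can be sketched directly in code. This is a hypothetical illustration (function names and the representation of the AEPR as a callable are our own): the monitor passes danger detection through to the AEPR and raises a secondary alert exactly when danger was detected but the AEPR failed to raise an alert.

```python
# Hypothetical sketch of the AEPR Monitor wrapper (silent-failure concern):
# pass danger detection to the AEPR; if the AEPR fails to raise an alert
# despite danger being detected, raise a secondary alert.

def monitor_step(danger_detected, aepr):
    """`aepr` is a callable mapping danger_detected -> raise_alert (a failed
    AEPR may silently return False). Returns the pair
    (raise_alert, raise_secondary_alert)."""
    raise_alert = aepr(danger_detected)
    # Specification: DangerDetected AND NOT RaiseAlert -> RaiseSecondaryAlert
    raise_secondary = danger_detected and not raise_alert
    return raise_alert, raise_secondary

healthy_aepr = lambda danger: danger   # alerts whenever danger is detected
failed_aepr = lambda danger: False     # fails silently, never alerts
```

With a healthy AEPR the secondary alert never fires; with a silently failed AEPR, any detected danger still reaches the operator through the secondary alert.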
4 Related Work
There are many studies of software-related failures. Leveson, for instance, carried out several studies of software-related accidents, including those involving Therac-25 [7]. Johnson has also contributed an extensive literature on system accidents and incidents [5, 6, 2]. However, those studies of system failure of which we are aware have not been based on a clear conceptual structure for identifying, classifying, and recording the lessons learned at the level of detail appropriate for use by software engineers. For instance, the software engineering lessons Leveson and Turner [7] draw from the Therac-25 accidents include: “Documentation should not be an afterthought”, and “Designs should be kept simple”. Johnson investigated this power blackout in order to “sketch arguments for and against deregulation as a cause of the black-out” [6]. In this paper, we have applied a systematic approach to learning software engineering lessons, structured and described in ways that software engineers can relate to specifically.
Several variants of the Failure Modes and Effect Analysis (FMEA) method have been developed and applied in the development of dependable systems. Lutz and Woodhouse [8], for instance, applied an FMEA-based method to identify critical errors in the requirements documents of two spacecraft systems. Our work is complementary to such methods, in the sense that we are concerned with identifying, structuring and documenting past software failures, which can then be used to narrow the search space in failure analysis.
5 Summary
Our experience of using Problem Frames to investigate system failures involving software systems showed that the framework of Problem Frames was appropriate for identifying causes of system failures and documenting the causes in a schematic and accessible way. The suggestion by the framework that requirements engineers should “look out” into the physical world, rather than “look into” the software was useful in directing and focusing the attention, because many of the causes of failures originated in the physical world context.
The separation of descriptions into requirements, problem world context and the specification enabled us to locate sources of failures in specific descriptions. Some failures were related to the requirements (such as missing requirements) and others to the problem world context (such as mismatch between the assumed and actual behaviour of the problem world domains). Furthermore, associating concerns to the requirement, problem world context, frame, domain type, style of composition, and the specifications provides a good basis for recording concerns in a schematic way.
In summary, specific lessons learnt from the blackout case study are: (i) a further specialisation of the reliability of the biddable domain, called the reminder concern, (ii) a further specialisation of the concern of the Commanded Behaviour frame where the system may have to take precedence over the operator action, called the system precedence concern, (iii) a further specialisation of the Information Display frame called the outdated information concern, and (iv) the silent failure concern related to the monitoring systems.
References
A Review on Recent Web Pre-Fetching and Caching Technique
Varun Kumar¹ Nidhi Seth²
¹Research Scholar ²Assistant Professor
¹,²Department of Computer Science & Engineering
¹,²JMIT, Radaur, Haryana, India
Abstract—Due to the fast development of Internet services and the huge amount of network traffic, web caching and web prefetching are two popular techniques that play a key role in improving Web performance by keeping web objects that are likely to be visited in the near future closer to the client. Web caching techniques can work either integrated with or independently of web prefetching. The two techniques complement each other: web caching exploits temporal locality to predict revisits to requested URLs, while web prefetching exploits spatial locality to predict which web objects related to the requested URL will be accessed next. This paper surveys the principles of existing web caching and prefetching approaches. Both intelligent and conventional web caching techniques are investigated and discussed. Furthermore, web prefetching techniques are summarized and classified, with a comparison of the limitations of these approaches. The paper also discusses studies that consider the impact of integrating web caching and web prefetching jointly. It explains the various prefetching and caching techniques, how these techniques predict the web objects to be pre-fetched, and the issues and challenges involved when these techniques are applied.
Key words: Web caching, Web prefetching, URL, Prediction
I. INTRODUCTION
Web caching is a well-known strategy for improving the performance of Web-based systems by keeping Web objects that are likely to be used in the near future in a location close to the user. Web caching mechanisms are applied at three levels: the client level, the proxy level and the original server level [5, 6]. Significantly, proxy servers play a key role between users and web sites in reducing the response time of user requests and saving network bandwidth. Thus, for achieving better response times, an efficient caching technique can be built into a proxy server.
Cache replacement is the core of web caching; consequently, the design of efficient cache replacement algorithms is crucial to the success of a caching mechanism, and cache replacement algorithms are therefore also called web caching algorithms [7]. Because of the limited space of the cache, a sound mechanism is required to manage the Web cache content properly. Conventional caching policies are not efficient for Web caching since they consider just one factor and ignore other factors that influence caching efficiency. Under these policies, the most popular objects get the most requests, while a large segment of the objects stored in the cache are never requested again; this is called the cache pollution problem. Therefore, many Web cache replacement policies have been proposed in an attempt to achieve good performance. Still, determining which web objects will be re-accessed remains a big challenge for existing Web caching techniques: in other words, which Web objects should be cached and which should be replaced to make the best use of the available cache space, achieve better hit rates, decrease network traffic, and reduce the load on the original server [3, 4].
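As an illustration of a conventional single-factor replacement policy, the following is a minimal sketch of LRU (least recently used) eviction, which considers only recency and ignores factors such as popularity and object size; the class and its interface are our own, not from any cited work.

```python
from collections import OrderedDict

# Minimal sketch of a conventional single-factor replacement policy (LRU):
# only recency decides eviction, which illustrates why one-factor policies
# can suffer cache pollution when popularity and size are ignored.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def get(self, url):
        if url not in self.store:
            return None                      # cache miss
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url]

    def put(self, url, obj):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = obj
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
```

A richer policy would combine recency with factors such as access frequency, object size and retrieval cost, which is precisely what the intelligent caching techniques surveyed later attempt to do.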
Unfortunately, caching alone does not improve the hit ratio much: even with a cache of infinite size, the hit ratio is limited to roughly 40% to 50%, regardless of the caching scheme [8,9,10], because most users keep browsing new Web pages in search of new information. To improve the cache hit ratio, Web prefetching is therefore integrated with Web caching. Web prefetching fetches Web pages in advance, by the proxy server or the client, before a request is actually issued by the client or proxy server; its major advantage is reduced latency, since a requested Web object may be served from a prefetch area instead of being requested from the Web server. The main criterion for selecting a Web prefetching algorithm is its ability to predict the Web objects to be prefetched so as to reduce latency. Web prefetching exploits the spatial locality of Web pages: pages linked from the current page will be accessed with higher probability than other pages. Web prefetching can be deployed at three places in a Web environment: between clients and the Web server, between the proxy server and the Web server, or between clients and the proxy server [11]. Between the Web server and the client it helps decrease user-perceived latency, but it increases network traffic. Between the Web server and the proxy server it can reduce bandwidth usage by prefetching only a limited number of hyperlinks. Between clients and the proxy server, the proxy feeds prefetched Web objects from its cache to the clients, so no extra Internet traffic is generated. This paper describes the different prefetching techniques, how they predict the Web objects to be prefetched, and the issues involved in these techniques.
The remainder of this paper is organized as follows. Section II presents the principles of Web caching, and Section III discusses existing conventional and intelligent Web caching algorithms. Sections IV and V describe the types and approaches of Web prefetching and survey representative techniques for each approach, including studies that integrate Web caching and Web prefetching. Finally, Section VI concludes the paper.
II. WEB CACHING
Web caching is one of the most successful solutions for improving the performance of Web-based systems. In Web caching, popular Web objects that are likely to be visited in the near future are stored in positions closer to the user, such as the client machine or a proxy server. Web caching thus helps to reduce Web service bottlenecks, alleviate traffic over the Internet and improve the scalability of the Web system [2].
A. Basic Types of Web Cache:
1) Browser Cache:
It is located at the client. The user can inspect the cache settings of any modern Web browser, such as Internet Explorer, Safari, Mozilla Firefox, Netscape, or Google Chrome. This cache is especially useful when users hit the “back” button or click a link to see a page they have just looked at.
2) Proxy Server Cache:
It resides in the proxy server, which is located between client machines and origin servers. It works on the same principle as the browser cache, but on a much larger scale: unlike the browser cache, which serves only a single user, a proxy serves hundreds or thousands of users in the same way. When a request is received, the proxy server checks its cache. If the object is available, it sends the object to the client. If the object is not available, or it has expired, the proxy server requests the object from the origin server, sends it to the client, and stores it in its local cache for future requests.
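The proxy lookup logic described above can be sketched as follows. This is a minimal sketch: `fetch_from_origin` stands in for a real HTTP request to the origin server, and the fixed time-to-live is an illustrative assumption (real proxies honour HTTP cache-control headers instead).

```python
import time

class ProxyCache:
    """Minimal sketch of the proxy-server cache logic described above."""

    def __init__(self, ttl_seconds=300):
        self.store = {}            # url -> (object, expiry timestamp)
        self.ttl = ttl_seconds

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None:
            obj, expires = entry
            if time.time() < expires:      # fresh copy: cache hit
                return obj, True
        obj = fetch_from_origin(url)       # miss or expired: go to origin
        self.store[url] = (obj, time.time() + self.ttl)
        return obj, False
```

The first request for a URL misses and populates the cache; subsequent requests within the time-to-live are served locally without contacting the origin server.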
3) Origin Server Cache:
Even at the origin server, Web pages can be stored in a server-side cache, reducing the need for redundant computations or database retrievals. The server load is thus reduced when an origin server cache is employed.
III. WEB CACHING ALGORITHMS
The cache replacement policy plays an extremely important role in Web caching; hence the design of efficient cache replacement algorithms is required to achieve a sophisticated caching mechanism. In general, cache replacement algorithms are also called Web caching algorithms. Table 1 compares the most common policies.
Table 1: Common cache replacement policies
<table>
<thead>
<tr>
<th>Policy</th>
<th>Brief description</th>
<th>Advantage</th>
<th>Disadvantage</th>
</tr>
</thead>
<tbody>
<tr>
<td>LRU</td>
<td>The least recently used objects are removed first.</td>
<td>Simple and efficient with uniformly sized objects, such as in a memory cache.</td>
<td>Ignores the download latency and the size of Web objects.</td>
</tr>
<tr>
<td>LFU</td>
<td>The least frequently used objects are removed first.</td>
<td>Simplicity.</td>
<td>Ignores the download latency and size of objects and may store obsolete Web objects indefinitely.</td>
</tr>
<tr>
<td>SIZE</td>
<td>Large objects are removed first.</td>
<td>Prefers keeping small Web objects in the cache, yielding a high cache hit ratio.</td>
<td>Stores small Web objects even if they are never accessed again; low byte hit ratio.</td>
</tr>
<tr>
<td>GD-Size</td>
<td>Assigns a key value to each object in the cache; the object with the lowest key value is replaced when the cache space is exhausted.</td>
<td>Overcomes the weakness of the SIZE policy by removing objects that are no longer requested by users.</td>
<td>Does not take into account the past frequency of Web objects.</td>
</tr>
<tr>
<td>GDSF</td>
<td>Extends the GD-Size algorithm by integrating the frequency factor into the key value: K(p) = L + F(p) × C(p)/S(p).</td>
<td>Overcomes the drawback of GD-Size.</td>
<td>Does not take into account predicted future accesses.</td>
</tr>
</tbody>
</table>
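As an illustration of the GDSF policy from Table 1, the following sketch maintains the key K(p) = L + F(p) × C(p)/S(p) and inflates L on every eviction so that long-resident objects gradually age out. The byte-counted capacity and the cost units are illustrative assumptions, not part of the original policy definition.

```python
class GDSFCache:
    """Sketch of the GDSF replacement policy, with C(p) the retrieval
    cost and S(p) the object size. Capacity is counted in bytes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0                 # inflation value used for aging
        self.entries = {}            # url -> dict(freq, cost, size, key)

    def _key(self, e):
        return self.L + e["freq"] * e["cost"] / e["size"]

    def access(self, url, cost, size):
        e = self.entries.get(url)
        if e:                        # hit: bump frequency, refresh key
            e["freq"] += 1
            e["key"] = self._key(e)
            return True
        # miss: evict lowest-key objects until the new object fits
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda u: self.entries[u]["key"])
            self.L = self.entries[victim]["key"]   # age remaining objects
            self.used -= self.entries[victim]["size"]
            del self.entries[victim]
        if size <= self.capacity:
            self.entries[url] = {"freq": 1, "cost": cost, "size": size}
            self.entries[url]["key"] = self._key(self.entries[url])
            self.used += size
        return False
```

Because L is set to the evicted object's key, newly inserted objects always start with a key at least as large, which is what prevents the stale-object problem of plain LFU.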
A. Replacement Policy Categories:
Table 2: Replacement policy categories
<table>
<thead>
<tr>
<th>Category</th>
<th>Brief description</th>
<th>Available replacement policies</th>
<th>Representative policy</th>
<th>Disadvantage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Recency-based policies</td>
<td>These policies use recency as the primary factor for removing Web objects.</td>
<td>LRU, LRU-threshold, LRU*, LRU-hot, LRU-LSC, SB-LRU, SLRU, HLRU, Pitkow/Recker, EXP1, value-aging, generational replacement.</td>
<td>LRU</td>
<td>Do not consider the frequency, size or download latency of Web objects.</td>
</tr>
<tr>
<td>Frequency-based policies</td>
<td>These policies use object popularity (frequency count) as the primary factor for removing Web objects.</td>
<td>LFU, LFU-Aging, LFU-DA, Window-LFU, swLFU, Aged swLFU, α-Aging, HYPER-G.</td>
<td>LFU</td>
<td>Do not consider the recency, size or download latency of Web objects.</td>
</tr>
<tr>
<td>Size-based policies</td>
<td>These policies use object size as the primary factor for removing Web objects.</td>
<td>SIZE, LRU-MIN, partitioned caching, PSS, CSS, LRU-SP.</td>
<td>SIZE</td>
<td>Do not consider the recency, frequency or download latency of Web objects.</td>
</tr>
</tbody>
</table>
A Review on Recent Web Pre-Fetching and Caching Technique
IV. PREFETCHING
Web prefetching is another very effective technique, used to complement the Web caching mechanism. Web prefetching predicts the Web objects expected to be requested in the near future, before these objects are actually requested by users. The predicted objects are then fetched from the origin server and stored in a cache. Web prefetching thus helps to increase the cache hits and to reduce the user-perceived latency.
Table 2 (continued): Replacement policy categories
<table>
<thead>
<tr>
<th>Category</th>
<th>Brief description</th>
<th>Available replacement policies</th>
<th>Disadvantage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Function-based policies</td>
<td>These policies associate each object in the cache with a utility value; the object with the lowest value is replaced.</td>
<td>GD-Size, GDSF, GD*, PGDS, server-assisted cache replacement.</td>
<td>Choosing appropriate weights for the factors is a difficult task.</td>
</tr>
</tbody>
</table>
A. Types of Prefetching Based On Location:
Prefetching techniques can be implemented at the server, proxy or client side. Client-based prefetching concentrates on the navigation patterns of a single user across many Web servers. Server-based prefetching, on the other hand, concentrates on the navigation patterns of all users accessing a single Web site, while proxy-based prefetching concentrates on the navigation patterns of a group of users across many Web servers.
Table 3: Types of prefetching based on location
<table>
<thead>
<tr>
<th>Prefetching Location</th>
<th>Data for Prediction Model</th>
<th>Advantages</th>
<th>Disadvantages</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client</td>
<td>Historical and current user requests</td>
<td>Easy to partition user sessions and to realize personalized prefetching.</td>
<td>– Does not share prefetched content among users. – Needs a lot of network bandwidth.</td>
</tr>
<tr>
<td>Proxy</td>
<td>Proxy log and current user requests</td>
<td>– Reflects the common interests of a group of users. – Shares prefetched content from different servers among users.</td>
<td>– Does not reflect the common interests of all users in a single Web site.</td>
</tr>
<tr>
<td>Server</td>
<td>Server log and current user requests</td>
<td>– Records the access information of a single Web site from all users, and so better reflects all users’ common interests.</td>
<td>– Does not reflect users’ real browsing behavior. – Difficult to partition user sessions. – Needs additional communication between clients and servers to decide the prefetching content.</td>
</tr>
</tbody>
</table>
V. WEB PRE-FETCHING TECHNIQUES
A. Domain Top Approach:
Seung Won Shin et al. propose a domain top approach to Web prefetching, which combines the proxy’s active knowledge of the most popular domains and documents [11], [13]. In this approach the proxy is responsible for calculating the most popular domains and the most popular documents within those domains, and then prepares a rank list for prefetching that spans these domains (Domain1 ∪ Domain2 ∪ ... ∪ DomainN).
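A minimal sketch of building such a rank list from a proxy access log follows. It assumes the log is simply a list of requested URLs, one per request, and the cut-offs for the number of domains and documents are illustrative.

```python
from collections import Counter
from urllib.parse import urlparse

def domain_top_rank_list(access_log, n_domains=10, n_docs_per_domain=10):
    """Sketch of the Domain Top approach: rank the most popular domains
    in a proxy access log, then the most popular documents within each,
    and concatenate them into a single prefetch rank list."""
    domain_hits = Counter(urlparse(u).netloc for u in access_log)
    doc_hits = Counter(access_log)
    rank_list = []
    for domain, _ in domain_hits.most_common(n_domains):
        docs = [u for u in doc_hits if urlparse(u).netloc == domain]
        docs.sort(key=lambda u: -doc_hits[u])   # most popular docs first
        rank_list.extend(docs[:n_docs_per_domain])
    return rank_list
```

The resulting list is ordered by domain popularity first and document popularity second, matching the rank-list structure of Fig. 1.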
B. Dynamic Web Prefetching:
In the dynamic Web prefetching technique [12], each user keeps a list of sites to be accessed immediately, called the user’s preference list, which is stored in the proxy server’s database. Intelligent agents are used to parse Web pages, monitor bandwidth usage, and maintain the hash table, the preference list and cache consistency. The technique controls Web traffic by reducing prefetching under heavy traffic and increasing prefetching under light traffic; it thereby reduces the idle time of the network and keeps the traffic almost constant. A hash table is maintained to store the list of accessed URLs and their weight information [12], [13].
Depending on the bandwidth usage and the weights in the hash table, the prediction engine decides how many URLs to prefetch and hands the list to the prefetch engine, which fetches the predicted Web pages. After prefetching, the proxy server keeps the prefetched pages in a separate area called the prefetch area.
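The prediction-engine decision can be sketched as follows; `hash_table` maps each URL to its weight as described above, while the bandwidth thresholds and the prefetch budget are illustrative assumptions rather than values from the original technique.

```python
def prefetch_count(hash_table, bandwidth_usage, max_prefetch=10):
    """Sketch of the dynamic-prefetching decision: the heavier the current
    bandwidth usage (0.0 to 1.0), the fewer URLs are handed to the
    prefetch engine."""
    if bandwidth_usage >= 0.9:        # heavy traffic: stop prefetching
        budget = 0
    elif bandwidth_usage >= 0.5:      # moderate traffic: throttle
        budget = max_prefetch // 2
    else:                             # light traffic: prefetch aggressively
        budget = max_prefetch
    ranked = sorted(hash_table, key=lambda url: -hash_table[url])
    return ranked[:budget]
```

Throttling by bandwidth is what keeps the overall traffic almost constant: prefetch requests fill the idle capacity that normal requests leave unused.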
Fig. 1: Structure of the rank list for the Domain Top approach (for each top domain, a ranked list of its most popular Web pages).
C. Link Pre-Fetching:
A Web page provides a set of prefetching hints to the browser; after the browser finishes loading the page, it starts prefetching the specified documents and stores them in its cache. When the user later visits one of the prefetched documents, it can be served quickly out of the browser’s cache. Fisher et al. proposed a server-driven approach to link prefetching [14], in which the browser follows special directives from the Web server or proxy server instructing it to prefetch specific documents. This mechanism allows servers to control which contents the browser prefetches. The browser looks for either an HTML <link> tag or an HTTP Link: header to prefetch the subsequent links; the Link: header can also be specified within the HTML document itself by using an HTML <meta> tag [16]. When the browser is idle, it processes these hints and queues up each unique request to be prefetched.
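On the browser side, collecting the hints from `<link rel="prefetch">` tags can be sketched with Python's standard HTML parser; the idle detection and the actual fetch queue are not shown, and handling of the HTTP Link: header is omitted.

```python
from html.parser import HTMLParser

class PrefetchHintParser(HTMLParser):
    """Sketch of hint collection for link prefetching: gather the URLs
    named by <link rel="prefetch" href="..."> tags so they can be queued
    once the browser goes idle."""

    def __init__(self):
        super().__init__()
        self.hints = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "prefetch" and "href" in a:
            if a["href"] not in self.hints:   # queue each unique URL once
                self.hints.append(a["href"])

def extract_prefetch_hints(html):
    parser = PrefetchHintParser()
    parser.feed(html)
    return parser.hints
```

Deduplicating the hints matches the behaviour described above, where the browser queues up each unique request only once.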
D. Top-10 Approach:
Evangelos P. Markatos et al. propose a Top-10 approach to prefetching on the Web, in which the server calculates a list of its most popular documents [19]. This approach is easy to implement on existing server architectures. It considers only the access history for predicting Web objects, not the characteristics of the clients on the Web.
E. Model Based Predictive Pre-Fetching:
Yang et al. proposed model-based predictive prefetching, in which an integrated Web caching and Web prefetching model is used [18,19]. Its prediction model is based on the statistical correlation between Web objects and is time based: the prediction window represents a specific time period rather than a number of requests. The algorithm constructs a logical graph called the correlation graph, which captures the correlation between Web objects, and prefetches the Web objects that are most highly correlated with a currently requested object. The authors developed an integrated caching and prefetching algorithm, Pre-GDF, based on the GD-Size algorithm [13] and its enhancement GDSF [7]. The key components of the algorithm are the replacement manager, the prefetching agent, the prediction queue and the cache.
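A minimal sketch of such a time-based correlation graph follows. It assumes each session is a timestamp-sorted list of `(time, url)` pairs and uses an illustrative prediction window; the full Pre-GDF algorithm around it is not shown.

```python
from collections import defaultdict

def build_correlation_graph(sessions, window_seconds=60):
    """Sketch of a time-based correlation graph: two objects are
    correlated when they are requested within the same time window of
    one session; edge weights count co-occurrences."""
    graph = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for i, (t1, u1) in enumerate(session):
            for t2, u2 in session[i + 1:]:
                if t2 - t1 > window_seconds:
                    break              # outside the prediction window
                graph[u1][u2] += 1
    return graph

def predict(graph, url, top_k=2):
    """Prefetch candidates: objects most strongly correlated with url."""
    return sorted(graph.get(url, {}), key=lambda v: -graph[url][v])[:top_k]
```

When a client requests `url`, the objects returned by `predict` are handed to the prefetching agent and queued for retrieval.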
F. Adaptive Pre-Fetching Scheme:
The adaptive prefetch scheme is designed to adapt to a user’s browsing history and habits [15]. Jia et al. proposed an adaptive prefetch scheme in which the number of files to prefetch depends on the user’s access history and the network conditions. The scheme consists of two modules: a prediction module and a threshold module. The prediction module predicts the access probability of each file, and only files whose access probability is greater than or equal to the threshold are prefetched. Chen et al. [7] proposed an adaptive prefetch scheme that dynamically adjusts the prefetch aggressiveness in Web servers, using a threshold to tune the aggressiveness of prefetching. Fagni et al. [7] proposed an approach for boosting search engine performance by exploiting the spatial and temporal locality present in the stream of processed queries.
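The threshold module of such a scheme reduces to a simple filter; in this sketch the per-file probabilities are assumed to come from a prediction module that is not shown.

```python
def select_prefetch_files(access_probabilities, threshold):
    """Sketch of the threshold module of an adaptive prefetch scheme:
    keep only files whose predicted access probability meets the
    threshold. Raising the threshold under poor network conditions
    makes prefetching less aggressive."""
    return [f for f, p in access_probabilities.items() if p >= threshold]
```

Adaptivity comes from varying the threshold at run time: a congested network gets a high threshold (few, very likely files), an idle one a low threshold.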
G. Semantic Prefetching:
"Semantics", hidden in web documents. From certain point of view, the semantics of web document is already considered in history-based prediction. In that case, this semantics is derived from user interest assuming that users passing the same URL-graph are interested in the same thing semantically.
They do not consider real semantics of document, however. As semantic prefetching we understand prefetching based on preferences of past retrieved documents in semantics, rather than on the chronological relationships between URL accesses. Semantically based prefetching tries to extract a semantic description of a document and asks server to provide pages with similar semantics, with the same so called "semantic locality". Based on the document semantics, this approach is capable of prefetching documents whose URLs have never been accessed [15].
H. Measuring the Performance of Pre-Fetching:
To judge the success of a prefetch system and to tune its parameters, the performance of the system must be measured [17]. The following criteria can be used:
1) Usefulness of predictions/pre-fetches: the percentage of fetched pages that had previously been predicted or pre-fetched.
2) Accuracy of predictions/pre-fetches: the percentage of predicted or pre-fetched pages that were later actually requested by the user.
3) Practical accuracy of predictions: the probability that one of the received predictions was correct.
4) Coverage: the percentage of actual fetches that were preceded by predictions.
5) Network traffic increase: the volume of network traffic with pre-fetching enabled divided by the volume of traffic without pre-fetching.
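Criteria 2), 4) and 5) can be computed directly from the sets of prefetched and requested pages and the measured traffic volumes, as in this sketch:

```python
def prefetch_metrics(prefetched, requested, traffic_with, traffic_without):
    """Sketch of the evaluation criteria above: `prefetched` and
    `requested` are sets of URLs; traffic volumes are in bytes."""
    hits = prefetched & requested
    return {
        # accuracy: prefetched pages that were actually requested later
        "accuracy": len(hits) / len(prefetched) if prefetched else 0.0,
        # coverage: actual fetches that had been predicted in advance
        "coverage": len(hits) / len(requested) if requested else 0.0,
        # traffic increase caused by prefetching
        "traffic_increase": traffic_with / traffic_without,
    }
```

Accuracy and coverage pull in opposite directions: prefetching more pages raises coverage but usually lowers accuracy and inflates the traffic ratio, which is why these metrics are used together to tune a prefetch system.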
VI. CONCLUSION
Research on Web prefetching is being conducted on a large scale and in various directions. This paper has surveyed the principles of Web caching and prefetching and some of the existing approaches. Web caching and prefetching are two effective approaches to relieving Web service bottlenecks, lessening traffic over the Internet and improving the scalability of the Web system, and their combination improves performance compared to caching alone. We first reviewed the principles and existing work on Web caching, including conventional and intelligent Web caching. We then presented and briefly discussed the types and categories of prefetching; Web prefetching schemes focus on the spatial locality of Web objects. Finally, this review presented studies that discuss the integration of Web caching and Web prefetching. Among the surveyed techniques, the Top-10 domain technique performs better than the others.
VII. ACKNOWLEDGEMENT
I would like to express my deep sense of regard to our Head of Department, Mr. Vivek Sharma, and to Mrs. Nidhi Seth, AP, CSE Department, for their regular inputs and constant guidance. I am obliged to the staff members of the CSE Department, JMIT, for the valuable information they provided as and when necessary.
REFERENCES
[12] Sandhaya Gawade and Hitesh Gupta, “Review of Algorithms for Prefetching and Caching,” IJARCCSE, 2012.
Compiler and optimization level recognition using graph neural networks
Sébastien Bardin, Tristan Benoit, Jean-Yves Marion
To cite this version:
Sébastien Bardin, Tristan Benoit, Jean-Yves Marion. Compiler and optimization level recognition using graph neural networks. MLPA 2020 - Machine Learning for Program Analysis, Jan 2021, Yokohama / Virtual, Japan. hal-03270335
HAL Id: hal-03270335
https://hal.science/hal-03270335
Submitted on 24 Jun 2021
Compiler and optimization level recognition using graph neural networks
Sébastien Bardin\textsuperscript{1}, Tristan Benoit\textsuperscript{2} and Jean-Yves Marion\textsuperscript{2}
\textsuperscript{1}CEA, LIST, Paris-Saclay, France, \\
\textsuperscript{2}Université de Lorraine, CNRS, LORIA Nancy, France
sebastien.bardin@cea.fr, \{tristan.benoit,jean-yves.marion\}@loria.fr
Abstract
We consider the problem of recovering the compiling chain used to generate a given bare binary code. We present a first attempt to devise a Graph Neural Network framework to solve this problem, in order to take into account the shallow semantics provided by the binary code’s \textit{structured} control flow graph (CFG). We introduce a Graph Neural Network, called Site Neural Network (SNN), dedicated to this problem. Feature extraction is simplified by forgetting almost everything in a CFG except transfer control instructions. While at an early stage, our experiments show that our method already recovers the compiler and the optimization level provenance with very high accuracy. We believe these are promising results that may offer new, more robust leads for compiling tool chain identification.
1 Introduction
The problem. Identifying the \textit{compiling chain}, i.e. both the compiler (e.g. Visual Studio) and its optimization options (e.g. \texttt{-O1}, \texttt{-O2}), that have been used to produce a given bare binary code is an important problem in at least two scenarios: to determine security flaws inside binary codes and to identify known functions.
- Applications are built by linking together libraries that are quite often commercial off-the-shelf (COTS)\textsuperscript{1}. As a result, applications are developed more easily and quickly, and are usually more robust since COTS are normally already well-tested components. On the other hand, developers do not have the source code of COTS, and in fact they do not even know the compiling chain used to generate these COTS. This is an important issue in software maintenance and long-term support. Indeed, compilers may inject vulnerabilities that are discovered only after the COTS are released and after the deployment of the applications that use them. For example, CVE-2018-12886 describes a vulnerability allowing an attacker to bypass stack protection, and [Hohnka \textit{et al.}, 2019] is a recent comprehensive study on vulnerabilities produced by compilers. Hence, there is a need to be able to retrieve the compiling chain in order to assess whether an application may present a certain vulnerability.
- The library function identification in a binary code is another primary issue for software maintenance and security, such as cloning detection (see for example [White \textit{et al.}, 2016] using Deep Learning) or malware reverse engineering (see for example [Calvet \textit{et al.}, 2012] using I/O relationship). The function name identification problem is readily solved when the binary code analyzed is a well-behaved binary code, that is when it contains enough information to disassemble it. For example, IDA disassembler proposes the F.L.I.R.T algorithm [Guilfanov, 2012] based on signature-patterns that recognizes functions and assigns a name to them, while ByteWeight [Bao \textit{et al.}, 2014] constructs a weight prefix tree of the function prologue by machine learning to identify functions. These methods work well for regular binary codes, yet identification fails in many situations: when the library is unknown, when the code is slightly modified in such a way that its pattern does not match anymore, or when it is stripped or obfuscated. And in this context, the identification of the compiler together with the compiling options used to generate a binary code may help [Shirani \textit{et al.}, 2017].
Actually, we tested IDA Freeware edition to determine the compiler used to generate a binary code. For this, we performed the test manually on 15 unstripped and stripped binary codes. On unstripped binary codes, IDA correctly distinguishes MinGW binary codes from Visual Studio ones. In the case of stripped binary codes, IDA is not able to find the correct compiler and assigns each binary code to Visual Studio, probably as a default setting. Moreover, in all cases, IDA was unable to retrieve the compiling options.
Goal and approach. The \textit{goal of the paper is to devise a machine learning based solution to the compiling chain identification problem}.
Rosenblum et al.’s pioneering work [Rosenblum \textit{et al.}, 2011] introduced the problem and a first solution based on Support Vector Machines (SVM). Most of the recent works on this topic [Yang \textit{et al.}, 2019; Rahimian \textit{et al.}, 2015; Chen \textit{et al.}, 2019; Massarelli \textit{et al.}, 2019] rely on machine-learning based approaches rooted in Convolutional Neural Networks (CNN). Hence, the extracted features of a binary code are encoded as text or as an image in order to be fed to a CNN. But a program is not like a text or an image that can be projected into a regular Euclidean space.
**Claim.** We claim that binary code semantics should be taken into account, and that a first model of it is given by the Control Flow Graph (CFG). That is why we suggest using Graph Neural Networks (GNN) [Zhou et al., 2018]. As a result, relationships between basic blocks (nodes of the CFG) are taken into account in the graph embedding, and the weight of a CFG node depends transitively on its neighbors.
**Contribution.** Our paper makes the following contributions:
1. We develop a Graph Neural Network based framework, called Site Neural Network (SNN), to determine the compiler family and the optimization level that generated a given binary code. The overall architecture is displayed in Figure 1 and Figure 2. The preprocessing works as follows: a CFG is extracted from the binary code and then abstracted by preserving just the skeletal control flow. Then, the abstracted flow graph is chopped into fixed-size subgraphs that we call sites. This preprocessing step is fully automatic and unsupervised. Next, the Site Neural Network takes the graph of all sites as input in order to classify the binary code. The overall architecture of our framework follows the approach of adapting Residual Neural Networks (ResNet) to graphs [Zhao et al., 2018], refined with adaptive max pooling layers [Shervashidze et al., 2011].
Our approach has at least two advantages.
(a) Compared to prior works [Rosenblum et al., 2011; Yang et al., 2019; Rahimian et al., 2015; Chen et al., 2019; Massarelli et al., 2019], the model is quite simple because it is reduced to sites. There are no instruction-specific features and no focus on the prologue/epilogue of binary functions. We therefore expect Site Neural Networks to be more robust and generic.
(b) From a methodological point of view, Site Neural Networks provide end-to-end graph-based classifiers. As a result, decisions and classifications should be more easily grounded in the binary code semantics.
2. Most of the works in compiler provenance based on Machine Learning use datasets comprising binary functions generated by different compilers and optimization levels. From our point of view, this approach creates a bias, as identifying function boundaries can be difficult (stripped binary code, obfuscated COTS) – this bias might be acceptable depending on the context.
Thus, in our dataset we use full binary codes without any knowledge about functions and their localization. Our dataset consists of about 30,000 binary codes compiled from 30,000 different source codes with two compilers and three optimization levels. Thus, we can train and test our approach with six distinct and well-balanced sets of binary codes, each compiled from a different source code. For this reason, in the remainder of the paper we use macro-average measures. The fact that our datasets are well-balanced is evidenced by respective macro-average F1-Scores of 0.973 and 0.996.
3. The identification covers two compilers, Visual Studio and MinGW, and three optimization levels, O0, O1, and O2. We evaluated our system in terms of detection accuracy on a broad dataset composed of about 30,000 binary codes. It makes accurate predictions with an overall F1-score of 0.978 and an estimated standard deviation of 0.0049. Also, we demonstrate the ability of our SNN-based approach to successfully recover the compiling chain on both unstripped and stripped binary codes from our datasets.
Overall, we believe these are promising results that may offer new, more robust leads for compiling tool chain identification.
2 Related works
As said previously, Rosenblum et al., in two seminal papers [Rosenblum et al., 2010; Rosenblum et al., 2011], were the first to attempt to recover the compiler and compiler options, using SVMs whose features are composed of regular expressions (idioms) over the assembly program together with 3-vertex graphlets. Rahimian et al. [Rahimian et al., 2015] developed BinComp, which uses a complex model based on three layers, the last one being Annotated Control Flow Graphs. All these features are embedded into a vector by applying a neighborhood hash graph kernel.
More recently, three papers were published on compiler provenance. Yang et al. [Yang et al., 2019] extract 1024 bits from the object file and process them with a one-dimensional CNN. Chen et al. [Chen et al., 2019] describe the Himalia approach based on a two-tier classifier. Features are extracted from binary functions and consist of a fixed-size sequence of instruction types, completed by padding where necessary. Thus, Himalia focuses on the prologue and epilogue of functions and, as a result, is able to explain its classification. That said, the authors make the strong hypothesis of being able to determine the function prologue and epilogue.
Lastly, the closest related work is by Massarelli et al. [Massarelli et al., 2019]. They propose a graph embedding neural network based on methods developed in the field of natural language processing (NLP). The preprocessing is composed of two stages. The first stage transforms a sequence of instructions into a vector by measuring different instruction parameters. Then, the overall CFG is embedded into a graph, which is aggregated by a 2-round process using a Recurrent Neural Network.
3 Background
Recent work used convolutional neural networks to predict the compiler toolchain. Two difficulties here are that (i) binary code semantics, captured by a control flow graph, has to be taken into account and (ii) the input to the classifier, a binary code or CFG, can be of arbitrary size. The second difficulty is inherent to most machine learning pipelines, which can only handle inputs of a fixed size. In this paper, we replace convolutional neural networks (CNN) with graph neural networks (GNN) [Micheli, 2009; Zhou et al., 2018]. The GNN architecture is mostly a transformation of the classical CNN architecture; for example, [Zhao et al., 2018] generalizes ResNet [He et al., 2015] to graphs. The input of a graph neural network is a representation of a graph, and the survey [Hamilton et al., 2017] discusses the different graph representations. Compared to unstructured data like text (one-dimensional data) or images (two-dimensional data), the graph encoding must preserve certain properties like the shape or the connectivity.
A key feature is the pooling method. A pooling layer of a GNN must not depend on the input size. For this, the pooling can simply take the sum of node values. There are other methods, such as sort pooling, which selects a fixed-size set of maximum node values [Zhang et al., 2018], or adaptive max pooling, which divides the matrix at each convolution into a fixed number of parts [Yan et al., 2019], as illustrated in Figure 4.
Note that there is still a fundamental limitation in GNN approaches. Indeed, subgraph isomorphism is an important problem in graph classification. In [Xu et al., 2018], it is demonstrated that GNNs cannot do better than the Weisfeiler-Leman (WL) test of isomorphism [Weisfeiler and Leman, 1968], even if naming each node with an identifier increases the predictive capacity of the model, as shown in [Seidel et al., 2015].
4 Our method for compiler identification
4.1 Binary code preprocessing
The architecture of the preprocessing is shown in Figure 1. Inputs are binary codes. Each binary code is disassembled and its Control Flow Graph is built. Then, there are two more steps, which we call the forgetful phase and the chopping phase, explained below.
4.2 The forgetful phase
The forgetful phase consists of simplifying a CFG by removing sequential instructions and keeping only control flow instruction types. Figure 3 illustrates this reduction. The phase runs in two stages:
1. All consecutive nodes labeled with a sequential instruction (like mov or add) are merged into a single node, which is then removed.
2. All remaining nodes are relabeled based on the instruction type following Table 1.
The forgetful phase respects the underlying structure of the input CFG and maps it to a reduced graph that, for convenience, we call the forgetting CFG in the remainder.
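The two stages above can be sketched as follows. The CFG layout (a dict mapping each node to its instruction kind and successor list) and the set of control-transfer kinds are illustrative assumptions, not the authors' data structures:

```python
# Sketch of the forgetful phase. The CFG layout (node -> (instruction kind,
# successor list)) and the control-transfer kinds below are assumptions.
CONTROL_KINDS = {"jmp", "jcc", "call", "ret"}  # stand-in for Table 1

def forget(cfg):
    """Remove sequential-instruction nodes, rewiring edges through them."""
    reduced = dict(cfg)
    changed = True
    while changed:
        changed = False
        for node, (kind, succs) in list(reduced.items()):
            if kind in CONTROL_KINDS:
                continue
            # Splice the sequential node out: predecessors inherit its successors.
            for other, (k2, s2) in reduced.items():
                if node in s2:
                    reduced[other] = (k2, [v for v in s2 if v != node] + list(succs))
            del reduced[node]
            changed = True
            break
    return reduced
```

The remaining nodes keep their control-transfer type, which corresponds to the relabeling of stage 2.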
4.3 The chopping phase
The chopping phase cuts a forgetting graph into a set of small disconnected subgraphs. These subgraphs are called sites and their size is at most 24 nodes. Sites are obtained using a breadth-first search algorithm on the forgetting graph – the algorithm is presented in the Appendix (Algorithm 1). We associate with each node of a site two features: the instruction type (see Table 1) and a unique identifier, in order to solve the problem of anonymity.
Clearly, the forgetful and chopping phases reduce the input dimension drastically. Moreover, since each site has a small diameter (at most 24), a small number of convolutions allows information to pass through all nodes.
4.4 Site Neural Networks
The inputs are graphs that are built by the two previous phases from a binary code. Take a graph composed of a set of nodes $V$ and represented by the adjacency matrix $A$. We define $X_0$ as the matrix containing the node attributes; with $n = |V|$ nodes it has dimension $n \times 2$.
**Mini-batches.** A single input is a set of sites which are collectively regrouped in a graph. In the training phase, the input graphs are partitioned into mini-batches. This gives us the opportunity to normalize the data [Ioffe and Szegedy, 2015]. Each mini-batch $B$ is normalized by calculating $\text{batchNorm}_B(x) = \frac{x - \mu_B}{\sigma_B}$, where $\mu_B$ is the observed mean and $\sigma_B$ is the observed standard deviation. Notice that the observed means and standard deviations are memorized, because they are reused at test time. This process has been shown to be successful, but is not yet understood in theory. The activation function is the rectified linear unit $\text{relu}(x) = \max(0, x)$.
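The normalisation and activation above can be sketched as follows in NumPy. The small ε added to the denominator is a standard numerical-stability guard that the formula above omits:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Feature-wise mini-batch normalisation: (x - mean) / std.

    The observed statistics would be memorised for test time, as the text
    notes; this sketch shows only the training-time transformation.
    """
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

def relu(x):
    """Rectified linear unit."""
    return np.maximum(0.0, x)
```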
**Deep Convolution.** Now, the vector sequence of node values $(Y_{k+1})_{k \geq 0}$ obtained after $k + 1$ convolution(s) is defined as follows:
$$Y_1 = \text{relu}(\text{batchNorm}_B((A + I)X_0W_0 + b_0))$$
$$Y_{k+1} = \text{relu}(\text{batchNorm}_B((A + I)Y_kW_k + b_k)) \;|\; Y_k$$
The notation $|$ denotes matrix augmentation (column-wise concatenation). As usual with dense deep convolutions, the output of one step is fed into every future step.
**Dimensions.** Let $d_t$ be the hyperparameter corresponding to the second dimension of the matrix $W_t$ at convolution $t$. The first dimension of the matrix $W_k$, for $k > 0$, is $\sum_{t=0}^{k-1} d_t$.
**Output.** The dimension of the matrix $Y_k$, for $k > 0$, is $n \times \sum_{t=0}^{k-1} d_t$, where $n$ is the number of nodes. We then perform a pooling step, which reduces the matrix of the last convolution to a smaller, fixed-size matrix.
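The recurrence above, with batch normalisation elided for brevity, can be sketched in NumPy; the weight matrices in the example are illustrative, and the shapes follow the Dimensions paragraph:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def deep_convolve(A, X0, weights, biases):
    """Dense-concatenation graph convolutions:
        Y_1     = relu((A + I) X_0 W_0 + b_0)
        Y_{k+1} = relu((A + I) Y_k W_k + b_k) | Y_k   (| = column concatenation)

    Batch normalisation is elided for brevity. weights[k] needs
    sum(d_0..d_{k-1}) rows (2 rows for k = 0, the node-attribute width).
    """
    A_hat = A + np.eye(A.shape[0])
    Y = relu(A_hat @ X0 @ weights[0] + biases[0])
    for W, b in zip(weights[1:], biases[1:]):
        Y = np.concatenate([relu(A_hat @ Y @ W + b), Y], axis=1)
    return Y

# Example: 5 nodes, d = (3, 4) -> the output has 3 + 4 = 7 columns.
_rng = np.random.default_rng(0)
_A = np.zeros((5, 5))
_A[0, 1] = 1.0
demo_shape = deep_convolve(
    _A,
    _rng.normal(size=(5, 2)),
    [_rng.normal(size=(2, 3)), _rng.normal(size=(3, 4))],
    [np.zeros(3), np.zeros(4)],
).shape
```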
**Adaptive Max Pooling Layer(s).** Following [Shervashidze et al., 2011], a crucial point in our approach is the extraction of features based on the Weisfeiler-Lehman test of graph isomorphism. For this, we apply an Adaptive Max Pooling (AMP) step to the result of the convolutional layers. This pooling operation is defined by an operator $\text{amp}_{n,m}$ that reduces a matrix to a smaller matrix of dimension $n \times m$ as follows: take a matrix $M$ of dimension $a \times v$; $M$ is cut into $n \times m$ blocks of kernel size $\lceil \frac{a}{n} \rceil \times \lceil \frac{v}{m} \rceil$, and we take the maximum in each block. Figure 4 illustrates the adaptive max pooling computation. We iterate the adaptive max pooling operation four times to extract a fixed-size representation of the graph.
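The $\text{amp}_{n,m}$ operator can be sketched as follows, as a literal reading of the block-max definition above; boundary handling in the authors' implementation may differ:

```python
import math
import numpy as np

def amp(M, n, m):
    """amp_{n,m}: reduce matrix M (a x v) to n x m by taking the maximum
    over blocks of kernel size ceil(a/n) x ceil(v/m)."""
    a, v = M.shape
    kr, kc = math.ceil(a / n), math.ceil(v / m)
    out = np.full((n, m), -np.inf)
    for i in range(n):
        for j in range(m):
            block = M[i * kr:(i + 1) * kr, j * kc:(j + 1) * kc]
            if block.size:  # trailing blocks may be partial or empty
                out[i, j] = block.max()
    return out
```

On the $8 \times 22$ matrix of Figure 4, $\text{amp}_{2,2}$ uses $4 \times 11$ blocks and $\text{amp}_{4,4}$ uses $\lceil 8/4 \rceil \times \lceil 22/4 \rceil = 2 \times 6$ blocks.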
**Readout layer.** At this point we have obtained from the adaptive max pooling layers a fixed size representation of our graph. The output of the adaptive max pooling layers is fed into a multilayer perceptron to predict the probability distribution of the class that the input graph should belong to.
Figure 4: An example of two adaptive max poolings on a matrix of dimension $8 \times 22$. The first adaptive max pooling is of dimension $2 \times 2$ and has a kernel size of $4 \times 11$. The second adaptive max pooling is of dimension $4 \times 4$ and has a kernel size of $2 \times 6$.
5 Evaluation
5.1 Implementation details
We perform four convolutions; the hyperparameter $d_t$ of the matrix $W_t$ is 8 for each convolution $t$. We use four pooling layers with the $\text{amp}_{2,2}$, $\text{amp}_{4,4}$, $\text{amp}_{8,8}$ and $\text{amp}_{16,16}$ operators. The multilayer perceptron has four layers of sizes $384$, $256$, $128$ and $2$, respectively. We implemented our framework in Python along with the machine learning library PyTorch. Preprocessing is performed in C++.
5.2 Datasets
We evaluated the performance of our framework against two datasets coming from two separate domains:
**The Codeforces Dataset.** This dataset is made from 19,986 source codes that solve 91 problems from Codeforces. We compiled them using Visual Studio 2019 and MinGW on Windows 10 with the optimization options O0, O1, O2 and O3. As a result, we get 7 distinct classes of approximately 3,000 binary codes each;
**The CSmith Dataset.** This dataset is produced using CSmith [Yang et al., 2011], a random program generator created to find bugs in compilers. We compiled 10,562 binaries, half of them with Visual Studio 2019 and the other half with MinGW, using an optimization level among O0, O1, O2 and O3. The average size of each binary code is approximately 30 KB and there are at most 10 nested loops;
**Similarities.** In the Codeforces Dataset, there are huge similarities between programs compiled with MinGW O2 and MinGW O3. Over a random sample of 2920 binary codes, 561 binary codes compiled with O2 are identical when compiled with O3. This has already been reported in [Egele et al., 2014] and has been an issue for Chen et al. [Chen et al., 2019] in their compiler optimization detection framework. This is, however, not the case for the Visual Studio compiler, which has only one advanced optimization level.
5.3 Research questions
We investigate the following research questions in order to validate our framework and to see its current limits:
---
\textsuperscript{3}https://codeforces.com/
RQ1 Does our framework have the capacity to predict the compiler and optimization level of binary codes coming from our datasets?
RQ2 Using our framework, can a model learned on one dataset be applied to another dataset? The question is whether what our framework has learned from one dataset can be exploited on another, distinct dataset.
RQ3 Does performance decrease when binary codes are stripped? That is, given an SNN trained on an unstripped dataset, are we still able to detect the compiler and the compiling options from a stripped binary code?
5.4 RQ1: Compiler and optimization option identification
Methodology. To answer the first question, we consider both datasets separately. Due to the similarity between programs produced by MinGW -O2 and by MinGW -O3, we consider MinGW -O2/O3 as a single merged class. Figure 5 presents our hierarchical classifier. Each site neural network is specialized to separate two choices (e.g. between MinGW -O0/O1 and MinGW -O2/O3).
To run this experiment, each dataset is split into a train set, a validation set and a test set with 80%, 10%, and 10% of the starting dataset, respectively. Next, for both datasets, each specialized site neural network is trained with a learning rate of 0.0005 for 30 epochs. The loss function is . We select, for each specialized site neural network, the model with the best accuracy on the validation split over all epochs.
After training, each site neural network is evaluated on the validation set of the corresponding dataset. We performed this operation twice to mitigate randomness. Considering every class as important as any other, we report the macro average of precision, recall and F1-Score using three significant digits.
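Macro-averaging, as used above, computes the per-class scores first and then takes their unweighted mean, so every class counts equally regardless of its support. A minimal illustrative sketch, not the authors' implementation:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 first, then the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

Macro precision and recall are computed analogously from the per-class `prec` and `rec` values.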
Results. Table 2 reports the performance of a classifier trained on Codeforces Dataset. It achieves an overall F1-score of 0.973 with an estimated standard deviation of 0.0017.
In Table 3 we report, due to lack of space, only the overall performance of a classifier trained on the CSmith Dataset. It achieves a slightly better overall F1-score of 0.996 with an estimated standard deviation of 0.0023.
In Table 4, we report the performance of a classifier trained on the full Dataset, composed of the union of both datasets. It achieves an overall F1-Score of 0.978 with a standard deviation of 0.0049. Taking into account that the full Dataset contains one third of its instances from the CSmith Dataset and two thirds from the Codeforces Dataset, we could have expected an overall F1-Score of 0.996 × \( \frac{10562}{30548} \) + 0.973 × \( \frac{19986}{30548} \) ≈ 0.981. Thus, there is a small loss of 0.003.
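The expected mixed score above is a support-weighted average of the two per-dataset scores; as a quick check (dataset sizes as given in the text):

```python
# Support-weighted expectation for the mixed dataset.
n_csmith, n_codeforces = 10562, 19986
total = n_csmith + n_codeforces
expected_f1 = 0.996 * n_csmith / total + 0.973 * n_codeforces / total
# ~0.981, versus the observed 0.978: a loss of about 0.003
```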
Conclusion. We conclude that our framework has a strong capacity to predict the compiler and the optimization level of a binary from our datasets. Our framework exhibits a potential loss when the diversity of the input data increases. Further work is needed for a more robust statistical analysis.
5.5 RQ2: Applying a model learned on a dataset to a new dataset
We applied our previous models to the dataset they had not been trained on.
Methodology. We conduct two symmetric experiments. First, we train specialized site neural networks as explained previously, following the decision tree in Figure 5, on the Codeforces dataset. Then, we evaluate these networks on the CSmith dataset in order to see whether the starting dataset is sufficient.
Second, we reverse the roles of the Codeforces dataset and the CSmith dataset. That is, we evaluate specialized site neural networks trained on the CSmith dataset on the Codeforces dataset.
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
<th>Support</th>
</tr>
</thead>
<tbody>
<tr>
<td>MinGW O0</td>
<td>0.960</td>
<td>0.961</td>
<td>0.961</td>
<td>285</td>
</tr>
<tr>
<td>MinGW O1</td>
<td>0.954</td>
<td>0.947</td>
<td>0.951</td>
<td>285</td>
</tr>
<tr>
<td>MinGW O2/3</td>
<td>0.994</td>
<td>0.997</td>
<td>0.996</td>
<td>570</td>
</tr>
<tr>
<td>VS O0</td>
<td>0.981</td>
<td>0.974</td>
<td>0.977</td>
<td>285</td>
</tr>
<tr>
<td>VS O1</td>
<td>0.975</td>
<td>0.975</td>
<td>0.975</td>
<td>285</td>
</tr>
<tr>
<td>VS O2</td>
<td>0.974</td>
<td>0.979</td>
<td>0.976</td>
<td>285</td>
</tr>
<tr>
<td>Macro Avg</td>
<td>0.973</td>
<td>0.972</td>
<td>0.973</td>
<td>1995</td>
</tr>
</tbody>
</table>
Table 2: Prediction on Codeforces Dataset.
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
<th>Support</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.996</td>
<td>0.996</td>
<td>0.996</td>
<td>1050</td>
</tr>
</tbody>
</table>
Table 3: Prediction on CSmith Dataset
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
<th>Support</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.979</td>
<td>0.978</td>
<td>0.978</td>
<td>3052</td>
</tr>
</tbody>
</table>
Table 4: Prediction on the full Dataset
Table 5: Prediction on CSmith using Codeforces models
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.531</td>
<td>0.505</td>
<td>0.477</td>
</tr>
</tbody>
</table>
Table 6: Prediction on Codeforces using CSmith models
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.225</td>
<td>0.254</td>
<td>0.201</td>
</tr>
</tbody>
</table>
Table 7: Prediction on Codeforces stripped binary codes
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.993</td>
<td>0.993</td>
<td>0.993</td>
</tr>
</tbody>
</table>
Table 8: Prediction on CSmith stripped binary codes
<table>
<thead>
<tr>
<th>Class</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Macro Avg</td>
<td>0.999</td>
<td>0.999</td>
<td>0.999</td>
</tr>
</tbody>
</table>
Results. In Table 5, we report the performance of a classifier trained on the Codeforces dataset when applied to the CSmith Dataset. It achieves an overall F1-Score of 0.477 with an estimated standard deviation of 0.0191. In Table 6, we report the performance of a classifier trained on the CSmith dataset when applied to the Codeforces Dataset, which achieves an overall F1-Score of 0.201 with a standard deviation of 0.0537. We are intrigued by the fact that no instance was classified as MinGW -O0.
Conclusion. We speculate that, due to the purpose of the CSmith-generated programs, the CSmith Dataset is more specific than the Codeforces Dataset. While predicting the CSmith dataset shows a clear loss of 0.519, we still achieve a moderate F1-score. We conclude that what has been learned from one dataset by our framework may be partly exploited on a distinct dataset.
6 Limitations
- Our dataset is composed for one part of small programs (most file sizes are around 30 KB) coming from the Codeforces dataset and for the other part of synthetic programs generated by CSmith. It would be interesting to use a more diverse dataset. Nevertheless, our evaluation on both datasets at least demonstrates the accuracy of SNN.
- The experiments should be pursued to incorporate other compilers with more optimizations. We leave this for future work.
- Our approach was tested and validated on unstripped and stripped binary codes. In adversarial contexts where binaries are obfuscated, or when we are dealing with malware, the situation is quite different. We believe this is a challenge worth working on.
7 Conclusion
Our starting hypothesis is that binary code is not unstructured data and that semantics is important. In this ongoing work, we begin to explore the possibility of (i) extracting semantic features in the form of graphs and (ii) processing these graphs with neural networks in order to propagate information according to the topology of the graph. The aim is to improve binary code analysis, in particular in the context of obfuscated code and malware.
This work opens several immediate questions. The combination of the forgetful phase followed by the chopping phase provides a simple and realistic feature graph model. That said, one could think of a first phase that forgets less information, and of various ways of cutting out a graph, for example with sites of different sizes. Also, the pooling layers play an important role; it would be worth looking at which features are useful and how they are intertwined in order to improve pooling. This question ties back to the question of semantics. Finally, the CFG provides only a very shallow semantic view of a given program. An interesting question would be to automatically extract and take advantage of richer semantic features.
As a conclusion, we believe our preliminary results are promising and may offer new, more robust leads for compiling tool chain identification.
Acknowledgments
This work is supported by a public grant overseen by the French National Research Agency (ANR) as part of the "Investissements d’Avenir" French PIA project "Lorraine Université d’Excellence", reference ANR-15-IDEX-04-LUE.
Experiments presented in this paper were carried out using the Grid5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
References
A Additional material
The breadth first search algorithm of the chopping phase performs a limited exploration of the graph. During that exploration, it disconnects from the graph multiple small subgraphs.
Algorithm 1 Graph chopping algorithm
Input: A forgetting graph \( G = (V, E) \), a root vertex \( r \) in \( V \)
Parameters: \( n \) the number of sites to extract, \( s \) the maximum number of nodes in a site
Output: A graph containing a maximum of \( n \) sites
Let \( G_r = (V_r, E_r) \) be a graph.
while \( |V| > 0 \) and \( n > 0 \) do
Let \( q_1 \) be a queue.
Let \( q_2 \) be a queue.
Let \( q = (N, A) \) be a graph.
push \( q_1, r \)
while \( |N| < s \) and \( |q_1| > 0 \) do
empty \( q_2 \)
for all \( x \in q_1 \) do
if \( |N| > s \) then
break
end if
for all \( y \) such that \( (x, y) \in E \) do
if \( |N| > s \) then
break
end if
if \( y \in N \) then
\( A \leftarrow A \cup \{(x, y)\} \)
else
\( N \leftarrow N \cup \{y\} \)
\( A \leftarrow A \cup \{(x, y)\} \)
push \( y \) in \( q_2 \)
end if
end for
end for
\( q_1 \leftarrow q_2 \)
end while
\( V \leftarrow V \setminus N \)
\( E \leftarrow E \setminus \{(u, v) | u, v \in N\} \)
\( r \leftarrow \) first vertex left in \( V \)
\( V_r \leftarrow V_r \cup N \)
\( E_r \leftarrow E_r \cup A \)
\( n \leftarrow n - 1 \)
end while
return \( G_r \)
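A runnable sketch of Algorithm 1 follows. The adjacency-dict graph layout is an assumption, and the handling of already-visited successors follows a reading where exploration continues within the site rather than stopping:

```python
def chop(graph, root, n_sites, max_nodes):
    """Bounded-BFS chopping (sketch of Algorithm 1): extract at most n_sites
    sites of at most max_nodes nodes each from `graph` (vertex -> successors).
    Returns the node set and edge set of the resulting site graph.
    """
    remaining = dict(graph)            # vertices not yet assigned to a site
    site_nodes, site_edges = set(), set()
    r = root
    while remaining and n_sites > 0:
        N, A = {r}, set()
        frontier = [r]
        while len(N) < max_nodes and frontier:
            nxt = []
            for x in frontier:
                for y in remaining.get(x, []):
                    if len(N) >= max_nodes:
                        break
                    if y in N:
                        A.add((x, y))  # intra-site edge to an already-seen node
                    elif y in remaining:
                        N.add(y)
                        A.add((x, y))
                        nxt.append(y)
            frontier = nxt
        for v in N:                    # detach the site from the graph
            remaining.pop(v, None)
        site_nodes |= N
        site_edges |= A
        n_sites -= 1
        if remaining:
            r = next(iter(remaining))
    return site_nodes, site_edges
```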
A Formative Analysis of Mobile Devices and Gestures to Control a Multimedia Application from the Distance
Andreas Lorenz
RWTH Aachen University
Aachen, Germany
lorenz@dbis.rwth-aachen.de
Cyril Concolato
Telecom ParisTech
Paris, France
cyril.concolato@telecom-paristech.fr
Marc Jentsch
Fraunhofer Institute for Applied Information Technology
St. Augustin, Germany
marc.jentsch@fit.fraunhofer.de
Enrico Rukzio
Lancaster University
Lancaster, UK
rukzio@comp.lancs.ac.uk
Abstract—The use of mobile and handheld devices is a desirable option for implementing user interaction with remote services from a distance. Another prominent option for operating a remote application is the use of gestures performed in the air. This paper describes the design and realization of a system that enables mobile devices and gesture recognition tools to control a remote movie-player application. A small qualitative user study assessed the use of mobile phones with three input modalities, as well as three further methods of performing gestures in the air.
I. INTRODUCTION
The design and implementation of interaction in pervasive computing environments cannot rely on traditional input devices like mouse and keyboard. For remote interaction from a distance, Lorenz et al. [1] revealed a dramatic increase in the error rate when using a wireless mouse and keyboard compared to a handheld device. Whether a control device is suitable for an intended interaction depends on the capabilities, personal preferences, situation, and task of the user. If the physical shape of the equipment causes complaints or errors in operation, then the interaction could be improved either by a revised design of the input hardware or by switching to another input device better aligned with the task and the personal attributes of the user.
The information flow for controlling remote services is directed from the user towards the system. In the scope of this research, the user requires the opportunity to provide input to control the behavior of the system. In recent research, two approaches have shown high potential for interacting with services in the environment of the user: the use of a mobile or handheld device, or performing gestures in the air (with the head, a finger, or another part of the body, or simply by moving around).
The use of mobile devices is a desirable option for enhancing interaction with remote services, in particular if the user is experienced in operating them.
The system realized in this work addresses the use of mobile and gesture-based input methods to wirelessly control multimedia applications in the current environment of the user. It demonstrates the use of different input devices across modalities.
II. RELATED WORK
Already in 1999, Eustice et al. [2] concluded that any wearable device with a minimum of functionality could act as a remote control for all appliances. Using, for instance, the Personal Universal Controller [3], a user can speak the name of a command, which is then executed by the system. Myers [4] illustrated how users are able to select from different interaction styles and devices. Rukzio [5] identified four main remote interaction styles to interact with objects in the real world: Touching, Pointing, Scanning, and User-mediated Object Selection.
A. Mobile and Handheld Interaction Devices
Many studies in smart environments have affirmed that users can easily interact with their context using handheld devices. Nichols and Myers [6], [7] presented positive results after performing an exhaustive study of the efficiency of users employing handheld devices to remotely control a stereo and a telephone/digital answering machine. Some authors introduce the mobile phone as the user's favorite device for remote control (like [4], [8]). Others have already presented software solutions for Personal Digital Assistants (PDAs) that simulate a remote control, certifying that from the user's point of view handheld interfaces are easier and clearer to use than remote controls or complex button panels [9].
This research was supported by the European Commission within the Network of Excellence “Interactive Media with Personal Networked Devices (InterMedia)”, Project No. 038419.
Lumsden and Brewster [11] criticized that “the interfaces and associated interaction techniques of most mobile and wearable computers are based on those of desktop GUIs”. They request a paradigm shift in interaction techniques beyond mouse and keyboard as mechanisms of interaction. Ballagas et al. [10] structured an analysis of mobile input techniques with five dimensions: graphical subtask (position, orient and select), dimensionality, relative vs. absolute movement, interaction style (direct vs. indirect) and feedback (continuous vs. discrete). The review of the relationships between input techniques gives insight to the key design factors of each technique. The design space helps designers of ubiquitous computing applications to select the most appropriate input mechanism to use in a given application.
B. Gesture-Based Interaction
Head-tracker solutions (like [12]) are designed to work with gestures for replacing traditional pointing devices. Using a Web-cam, it allows users to point and click by simply aiming their face. A combination of pointer position and keystroke input device is described in [13], using miniature video cameras that track finger position where the user can type or point in the air.
Sweep [14] lets users move a camera-phone along the desired direction of the cursor motion. By comparing consecutive frames of the camera, it offers indirect control of the cursor position. Direct Pointer [15] allows direct manipulation of the cursor with continuous visual feedback, closely resembling a laser pointer. It enables the use of cameras built into handheld devices, such as mobile phones and PDAs. The primary advantage of this technique is that it only requires equipment that is readily available: an electronic display, a handheld digital camera, and a connection between the two. Comparable systems use a pre-calibrated, fixed display instead. All these systems have the advantages of natural interaction and immediate feedback. Depending on the depth of objects in the camera images, short-distance motions may generate different distances for the cursor to move, making control difficult. Additional effort is required for the implementation of key strokes and text input.
III. SYSTEM DESIGN
A. Infrastructure
Figure 1 illustrates the technical components and their places in the overall image. The components of the architecture are distributed on two different hardware devices:
The controlled device is a computing device available in the current environment of the user. It is the host of the service, which is remotely controlled by the user. The controlled device usually is a Windows-PC, Mac, or Linux machine connected with a display for graphical output.
The other device denotes the input device employed by the user to control the remote service. It hosts an interactive application receiving the input from the user. The intended input device is a mobile phone or personal digital assistant (PDA), which is compared with the use of a gesture recognition tool.
The input device and the controlled device are connected with each other using a TCP/IP socket connection, preferably using wireless LAN (WLAN).
The architecture is based on a client/server pattern. A component residing on the input device realizes the user interface to express input to the service, including the modality used with the input device. The physical device hosting the user interface components acts as the client device for the remote service.
The server component is the counterpart for the communication with the input device residing on the controlled device. This component receives the stream of input sent by the client, and distributes the information to the local interactive services. Examples for the server realization are networked applications, application-server, and web-server.
B. Software Architecture
The software architecture instantiates the framework introduced by Lorenz [18]. Figure 2 illustrates the architecture integrating the software modules. It supports three modes for the realization of the user interface on the mobile device, and three modes for using gestures observed by an infrared tracking camera.
1) Movie Control UI Implementation: The user controls on the mobile device are separated from the controlled service. The user interface sends control commands to the service, communicating via XML-RPC with a server listening on a specific port.
2) GPAC Client Player: In order to display a graphical user interface for the control of the remote service, the GPAC player is used on the mobile device. Using UPnP, it detects the presence of controllable services. These services are advertised by the GPAC Server Player. A widget is then downloaded and displayed on the mobile phone.
3) Widget: The widget is the visual part of the service. It is transferred to the mobile phone using UPnP and HTTP. The widget is implemented using the Scalable Vector Graphics (SVG) rich media language, with JavaScript and the XMLHttpRequest API for communication with the XMLRPC Adapter Server.
4) XMLRPC-Adapter_Client: The XML-RPC adapter for the delivery of input events. The adapter is responsible for connection establishment, data encoding, and implementation of the network protocol on behalf of the controls UI module. Each event delivery is translated into a single invocation of the corresponding adapter method.
5) **XMLRPC-Adapter_Server**: The XML-RPC adapter for receiving and decoding the transmitted data. The adapter is responsible for connection management, data decoding, and mapping of input events to shortcuts posted to the local event queue. The adapter acts like a web server waiting at a specific network port for incoming events. The server component integrates the incoming events as mouse and keyboard events into the event queue of the local operating system. It can therefore work together with any application on the server: if the GPAC player has gained focus on the server, this application receives the shortcuts from the event queue.
6) **GPAC Server and Player**: The server part of this module is in charge of advertising the service and providing the widget to control the service. The player part of this module displays multimedia content and is interfaced with the system event queue. It can be controlled by XMLRPC messages translated by the XMLRPC_Adapter_Server.
The user initiates the process of event delivery by interacting with the system. The user interface on the input device recognizes the interaction and triggers the creation of an input event. To trigger event consumption, the user interface on the input device remotely invokes a corresponding method of an instance of the client's XML-RPC adapter. The adapter internally creates an input event, which is transformed into a corresponding representation for network transmission. The network adapters transmit the data over the network, potentially using handshake techniques to improve quality of service.
When using the widget approach, the user arrives with the mobile phone in the vicinity of the server. Since the GPAC Server advertises the controllable service, the mobile phone, running the GPAC Client Player, automatically receives the widget. The user selects the widget and the GPAC Client Player presents the control buttons to the user. When the user activates one of the buttons, messages are created and transmitted to the server.
In both cases, on the controlled device, the message is unpacked from the network representation and handed to the consumers. The process ends when the GPAC Server Player module has received a copy of the input event.
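As an illustration of this event delivery, the following sketch (class and method names such as `postEvent` are assumptions, not taken from the paper) shows how a client-side adapter could turn a UI event into the XML-RPC wire representation, here using Python's standard library rather than the paper's Java ME implementation:

```python
import xmlrpc.client

# Hypothetical client-side adapter (names are illustrative): it turns a
# UI event into the XML request body that would be sent over the
# TCP/IP connection to the server-side adapter.
class XmlRpcAdapterClient:
    def encode_event(self, command, magnitude=1):
        # dumps() produces the XML-RPC request body for a method call
        return xmlrpc.client.dumps((command, magnitude), methodname="postEvent")

adapter = XmlRpcAdapterClient()
wire = adapter.encode_event("volume_up", 2)

# The server-side adapter decodes the same representation:
params, method = xmlrpc.client.loads(wire)
print(method, params)
```

The round trip through `dumps`/`loads` mirrors the encode-transmit-decode path described above without requiring a live network connection.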
IV. REALISATION
The software has been realized using Java Mobile Edition (mobile device), C# (gestures in the air), Java Standard Edition (server), and C (GPAC tools).
This work implements the use of a remote graphical user interface on the mobile phone, the use of gestures performed on the touch sensitive display of the phone, and the transfer of a copy of the user interface from the service to the mobile device (widget-approach). The user can select from the three options to control the media player application:
1) **Hardware Buttons**: Use the hardware buttons of the mobile phone (see Figure 3, left).
2) **Software Buttons (Widget)**: Activate software buttons. The software buttons mirror the controls of the application. The controls are downloaded from the remote application as a widget. For the user tests, the widgets were not downloaded on the fly but represented by software buttons on the mobile device (see Figure 3, central image).
3) **Touchscreen/Gestures**: Touching the display and performing gestures dragged on the screen of the mobile phone (see Figure 3, right).
The user additionally has the opportunity of performing gestures in the air. The movement of a pen in the air was tracked by an infrared camera. The changes of the position were bound to the movement of the mouse pointer on the screen to provide visual feedback. Three techniques were implemented:
1) **Perform-Return**: Performing an effectuating gesture starting at the central area of the screen combined with a return movement to this central area (see Figure 4, left)
2) **Single-Move**: Performing an effectuating gesture from any place on the screen with the opportunity of slowly moving to any point on the screen (Figure 4, central image).
3) **Static-Areas**: Moving the pen to a direction lets the mouse cursor move to the same direction on the screen. If the pen is held still for a short time, an event associated to the area the cursor is currently pointing to, is executed (see Figure 4, right).
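As a rough sketch of how the Static-Areas technique could work (the screen layout, dwell time, and command bindings below are illustrative assumptions, not values from the paper):

```python
# Rough sketch of Static-Areas (illustrative values): border regions of
# the screen are bound to commands, and a command fires once the
# tracked cursor rests inside one region for a dwell time.
DWELL_SECONDS = 0.8
AREAS = {"top": "volume_up", "right": "forward", "bottom": "volume_down", "left": "rewind"}

def area_of(x, y, w=800, h=600):
    # Classify a cursor position into one of four border areas, else None.
    if y < h * 0.2:
        return "top"
    if y > h * 0.8:
        return "bottom"
    if x < w * 0.2:
        return "left"
    if x > w * 0.8:
        return "right"
    return None

def detect(samples):
    """samples: list of (t_seconds, x, y) cursor positions; returns fired commands."""
    fired, rest_area, rest_since = [], None, None
    for t, x, y in samples:
        a = area_of(x, y)
        if a != rest_area:
            rest_area, rest_since = a, t          # cursor moved to a new area
        elif a is not None and t - rest_since >= DWELL_SECONDS:
            fired.append(AREAS[a])                # held still long enough
            rest_area, rest_since = None, None    # require leaving the area
    return fired

# The cursor moves into the top area and is held still for about a second:
commands = detect([(0.0, 400, 300), (0.2, 400, 80), (0.6, 402, 82), (1.3, 401, 81)])
print(commands)
```

The "require leaving the area" reset prevents one long rest from firing the same command repeatedly, matching the observation that users want to rest the hand between commands.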


For all three techniques, it is taken into consideration how far the actions are performed. For example, a smaller gesture to the right initiates a small forward event of the video, while a larger gesture to the right initiates a larger forward event. For the up movement, this difference indicates whether a volume-up or a play/pause event is triggered.
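The magnitude-dependent mapping described above can be sketched as follows (the direction and size thresholds are illustrative assumptions, not values from the paper):

```python
# Illustrative mapping from the extent of a gesture to a command; the
# thresholds (in pixels) are assumptions, not values from the paper.
def classify_gesture(dx, dy, large=150):
    if abs(dx) >= abs(dy):                       # horizontal movement
        direction = "forward" if dx > 0 else "rewind"
        size = "large" if abs(dx) >= large else "small"
        return f"{direction}_{size}"
    if dy < 0:                                   # upward movement
        return "play_pause" if -dy >= large else "volume_up"
    return "volume_down"

print(classify_gesture(60, 5))     # a small move to the right
print(classify_gesture(200, 10))   # a large move to the right
print(classify_gesture(0, -40))    # a small move up
```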
The implementation of the user interface maps the user input to control commands for the server, which are transmitted using XML-RPC. The server maps remote procedure calls to the creation of local shortcuts. The application that gained focus on the server receives the shortcuts from the local event queue as if typed by the user on the local keyboard.
When the widget approach is used to interact with the service, the widget is represented using the SVG language with additional JavaScript code which uses the XMLHttpRequest and MPEG-U APIs. The MPEG-U standard is currently being developed within the Moving Picture Experts Group. This new standard makes it possible to bridge networking technologies, such as UPnP, and presentation technologies, such as HTML or SVG, to enable widget mobility and widget communication.
According to the received command, the server implementation injects the key strokes into the local event queue. It uses procedures that can be called remotely using any appropriate XML-RPC library. The last column of the table defines the shortcut that is posted to the GPAC player by remotely invoking the method at the server.
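A minimal sketch of such a server, using Python's standard `xmlrpc` modules in place of the paper's Java implementation (the method name `postEvent` and the shortcut table are assumptions; the real server posts OS key events rather than appending to a list):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Hypothetical shortcut table: command name -> key shortcut.  The real
# server posts these to the OS event queue; here they are recorded in a
# list so the sketch stays self-contained.
SHORTCUTS = {"play_pause": "space", "volume_up": "up", "forward_small": "right"}
posted = []

def post_event(command):
    shortcut = SHORTCUTS[command]
    posted.append(shortcut)        # stand-in for the local event queue
    return shortcut

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(post_event, "postEvent")
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

# The client-side adapter then appears as an ordinary proxy object:
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.postEvent("volume_up")
server.server_close()
print(result, posted)
```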
V. QUALITATIVE USER STUDY
A qualitative evaluation was conducted to compare the three interaction techniques for interaction on the mobile device and the three interaction techniques of performing gestures in the air. The aim of the study was to find major anomalies in interaction design and implementation. A larger study will be conducted to evaluate the usefulness of a subset of the six interaction methods tested here.
A. Task
The users of the system had to remotely control a video on a large display from a distance of about 2 meters. The order of the interaction techniques to be used was randomized for each user. The users had the opportunity to try and learn each interaction technique in a five-minute training phase. The users were asked to perform a task of five steps: starting the video, increasing the volume to a certain level, scrolling forward to a given position, muting, and scrolling forward to another position. Each sequence was executed two times with each of the six interaction techniques.
B. Participants
A small group of four persons worked with the six interaction methods to operate the movie-player application. Three persons were male, one female. The persons were between 25 and 35 years old. All had experience in using mobile devices; one had previous experience in using gestures in the air to operate a computer system. All are right-handed.
C. Task Completion Time
The time needed to complete the task was measured for each of the six interaction methods. The average of the task completion times of both task executions was calculated. Figure 5 illustrates the average time needed by the four test persons. The overall fastest time, and also the slowest performance, were both achieved using the software buttons on the mobile device. The best average was reached by the users using the hardware buttons of the mobile device. Notably, the fastest way of performing gestures in the air, using static areas, was faster than the averages of both other techniques using the mobile device.
D. Results
After finishing the task, each user was asked to fill in the 19 items of the Post-Study System Usability Questionnaire (PSSUQ [19]). Figure 6 illustrates the average ratings for all 19 items (OVERALL, left bar) and the average regarding system usefulness (SYSUSE, right bar). Lower numbers indicate a higher satisfaction of the user. The hardware buttons of the mobile phone reached a remarkably high level of satisfaction, for OVERALL and in particular for SYSUSE. The software buttons were on a similarly high level of satisfaction, though the SYSUSE average was higher (indicating lower satisfaction). Though the technique of having static areas for the gesture was fast, the users' satisfaction dropped in comparison to the hardware or software buttons of the mobile device.
The hardware buttons had the advantage that users did not have to shift focus between the control device and the application to control: the users operated the hardware buttons blind, without looking at the mobile device. Therefore, the task was completed faster than with all other techniques. The users had no problems distinguishing between pressing a button and holding a button to trigger different commands.
The software buttons had the disadvantage that users had to change the focus between the large display and the display of the mobile device to activate the command. The users looked at the application on the display and, if asked to activate a specific command, the users moved the focus to the mobile phone to precisely activate the right software button. Some users increased the speed for a series of the same commands by first moving the hand to the right position on the screen of the mobile phone, fixing the position of the hand, and then looking at the large display and activating the same software button several times, e.g. to adjust the volume stepwise.
Performing the gestures on the display of the mobile device needed more training than the other approaches using the mobile phone. People who tried the hardware buttons before the gestures on the screen learned faster because of the similar directions. People who started with gestures in the air also quickly adapted to using the gestures on the mobile device, but had serious problems activating the play action. They were confused by the different implementations of the gesture in the air (a large move to the top) and on the screen of the mobile phone (a tap on the screen).
The use of different distances in horizontal movements on the screen of the mobile phone was problematic because of the limited space in portrait mode. Six commands for different speeds of scrolling the video were placed on only 45 mm of horizontal display space. In addition, the tap on the screen to activate the play action was often not recognized well by the recognition engine on the mobile phone. Users had problems finding the right time span of pressing on the screen without moving the hand.
For all gestures, no matter whether performed on the screen of the mobile phone or in the air, users invested time in training the operation. All users went through the video performing each command several times. The users did not invest time in training the hardware buttons and software buttons of the mobile phone. In particular, users who had tried another method beforehand used the hardware buttons right away.
Performing the gestures in the air, all users tended to unintentionally perform returning gestures to the starting point of the movement; nevertheless, the Perform-Return gesture was slow and had low satisfaction. The resting area for Perform-Return and Static-Areas should be perceivable by the users. For Static-Areas, the areas, their exact borders, and the assigned commands should be visible to the user. At the same time, users accepted that such overlays would disturb the quality of the application, which depends strongly on its visual appearance.
For all gestures performed in the air, users asked for an inactive mode in which the tracking of the pen is disabled in order to rest the hand. In particular, the approach of Static-Areas would benefit if the user could activate a command and then rest the hand anywhere else. Some users proposed to apply the method of a laser pointer and have a small button on the pen to activate the light explicitly. The technical setting used in this work did not support this approach, because the pen is always passive, only reflecting the light from the camera. The user would need to turn the camera on or off, which is only possible at the computer the camera is connected to.
The users required more feedback for all gestures performed in the air. For the Perform-Return gesture and the Static-Areas gesture they required visibility of the central area. They also wanted to know the last command recognized by the gesture engine. This would support error prevention, recovery from errors, and learning to perform the intended gesture. The users would also like to know the predicted command before it is consumed by the system. One user asked for a kind of eraser to delete a misinterpretation by the recognition engine before the wrong action is activated. In particular for Perform-Return, it would be possible to display the recognized command after the Perform move, and to erase it by simply postponing the Return move for a second.
One user got extremely frustrated with the gestures in the air because the engine did not understand the intention of the user, who in turn tried harder and became less precise.
VI. CONCLUSIONS AND FUTURE WORK
This work compared six interaction methods to control a multimedia application from a distance:
1) Hardware Buttons of the mobile phone: Because people were familiar with the use of the buttons, they quickly adapted their usage to the task. The use of the hardware buttons is limited to simple controls with a low number of commands that have a comprehensible mapping to the four directions.
2) Software buttons / Widgets: The users quickly anticipated the behavior of the application, enabling intuitive use of this approach. In general, the software buttons can mirror all interaction components of the application to control, leading to the same functionality of the user interface. The speed of using them is limited by the user's changes of focus when operating a remote application.
3) Gestures on the touch screen: Because only rather simple gestures were used, this approach was intuitive, but limited in the provided functionality. If it is possible to map the controls to hardware or software buttons, then the use of the other approaches is faster and seems more convincing.
4) Perform-Return in the air: Users agreed that they returned to the starting point of the move to rest the hand. The method can be used for any closed gesture, i.e., if the gesture logically ends close to the starting point. If divided into two phases, discarding a movement before activating a command becomes possible.
5) Single-Move in the air: Generally, this approach allows recognizing any gesture performed in the air. For the small set of simple commands used in this application, it was the slowest of all approaches and received the lowest level of satisfaction. The threshold speed below which the hand can move unobserved by the recognition engine needs precise adjustment.
6) Static-Areas: This approach was the fastest of all gestures performed in the air. Because it only depends on the size of the movement and not on its speed and acceleration, it can be applied quickly. The set of possible commands is limited by the available space on the display. It requires visual feedback of the current position, in this case implemented by the mouse pointer controlled by the pen movements.
The future work will fine-tune the interaction methods and perform a qualitative user study to select two or three of the current six interaction methods. A larger user study will then compare the selected interaction methods with the currently available approach of using a wireless mouse and keyboard in a home environment.
ACKNOWLEDGMENTS
We thank all partners of the InterMedia project for intensively discussing the demonstrator “Controlling Remote Displays” that has been a primary source of feedback. We thank all participants in the user studies for their attendance and valuable feedback.
Near-Neighbor Search: Finding similar sets
Announcements
- **Tuesday 1/11 5-7pm Gates B12:**
Hadoop Q&A session
- **Association Rules Gradiance assignment is out!**
- Due: 2011-01-17 23:59
Scene Completion Problem
[Hays and Efros, SIGGRAPH 2007]
The Bare Data Approach
Simple algorithms with access to large datasets
The Web
Many real-world problems
- Web Search and Text Mining
- Billions of documents, millions of terms
- Product Recommendations
- Millions of customers, millions of products
- Scene Completion, other graphics problems
- Image features
- Online Advertising, Behavioral Analysis
- Customer actions e.g., websites visited, searches
Many problems can be expressed as finding “similar” sets:
- Find near-neighbors in high-D space
Examples:
- Pages with similar words
- For duplicate detection, classification by topic
- Customers who purchased similar products
- NetFlix users with similar tastes in movies
- Products with similar customer sets
- Images with similar features
- Users who visited the same websites
In some cases, result is set of nearest neighbors
In other cases, extrapolate the result from attributes of near-neighbors
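For set-valued data like the examples above, "similar" is commonly quantified by Jaccard similarity, sketched here on the word sets of two toy documents:

```python
# Jaccard similarity of two sets: |A intersect B| / |A union B|,
# applied here to the word sets of two toy documents.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

doc1 = set("the cat sat on the mat".split())
doc2 = set("the cat sat on the hat".split())
print(jaccard(doc1, doc2))
```

Here the two documents share four of six distinct words, so their similarity is 4/6.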
Example: Question Answering
- Question Answering
- Who killed Abraham Lincoln?
- What is the height of Mount Everest?
- Naïve algorithm:
- Find all web pages containing the terms “killed” and “Abraham Lincoln” in close proximity
- Extract k-grams from a small window around the terms
- Find the most commonly occurring k-grams
Naïve algorithm works fairly well!
Some improvements
- Use sentence structure, e.g., restrict to noun phrases only
- Rewrite questions before matching
- “What is the height of Mt Everest” becomes “The height of Mt Everest is <blank>”
The number of pages analyzed is more important than the sophistication of the NLP
- For simple questions
Reference: Dumais et al.
The Curse of Dimensionality
(Figure: the same set of points shown in 1-d and in 2-d space)
- Let’s take a data set with a fixed number \( N \) of points.
- As we increase the number of dimensions in which these points are embedded, the average distance between points keeps increasing.
- Fewer “neighbors” on average within a certain radius of any given point.
The Sparsity Problem
- Most customers have not purchased most products
- Most scenes don’t have most features
- Most documents don’t contain most terms
- **Easy solution**: Add more data!
- More customers, longer purchase histories
- More images
- More documents
- And there’s more of it available every day!
Example: Scene Completion
10 nearest neighbors from a collection of 20,000 images
[Hays and Efros, SIGGRAPH 2007]
Example: Scene Completion
10 nearest neighbors from a collection of 2 million images
[Hays and Efros, SIGGRAPH 2007]
Distance Measures
- We formally define “near neighbors” as points that are a “small distance” apart.
- For each use case, we need to define what “distance” means.
- Two major classes of distance measures:
- Euclidean
- Non-Euclidean
Euclidean Vs. Non-Euclidean
- A *Euclidean space* has some number of real-valued dimensions and “dense” points
- There is a notion of “average” of two points
- A *Euclidean distance* is based on the locations of points in such a space
- A *Non-Euclidean distance* is based on properties of points, but not their “location” in a space
d is a *distance measure (i.e., metric)* if it is a function that maps from pairs of points to real numbers such that:
1. \( d(p,q) \geq 0 \)
2. \( d(p,q) = 0 \) iff \( p = q \)
3. \( d(p,q) = d(q,p) \)
4. \( d(p,q) \leq d(p,r) + d(r,q) \) (*triangle inequality*)
Some Euclidean Distances
- \( L_2 \) norm: \( d(p,q) = \sqrt{\sum (q_i - p_i)^2} \)
\[ d(p, q) = d(q, p) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + \cdots + (q_n - p_n)^2} = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}. \]
- The most common notion of “distance”
- \( L_1 \) norm: sum of the absolute differences in each dimension
- Manhattan distance = distance if you had to travel along coordinates only
\[ d_1(p, q) = \|p - q\|_1 = \sum_{i=1}^{n} |p_i - q_i|, \]
Another Euclidean Distance
- **$L_\infty$ norm**: $d(p,q) = $ the maximum of the differences between $p$ and $q$ in any dimension:
$D_{\text{Chebyshev}}(p,q) := \max_i (|p_i - q_i|)$.
- **Note**: the maximum is the limit as $p$ goes to $\infty$ of the $L_p$ norm:
$\|x\|_p = (|x_1|^p + |x_2|^p + \cdots + |x_n|^p)^{1/p}$
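These distances are straightforward to compute; a minimal Python sketch (the function names are ours, not from the slides):

```python
import math

def l1(p, q):
    # L1 / Manhattan distance: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))

def l2(p, q):
    # L2 / Euclidean distance: square root of the sum of squared differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def linf(p, q):
    # L-infinity / Chebyshev distance: largest difference in any one dimension
    return max(abs(a - b) for a, b in zip(p, q))

p, q = (0, 0), (3, 4)
# l1(p, q) == 7, l2(p, q) == 5.0, linf(p, q) == 4
```

Note that the three norms give different values for the same pair of points, which is why the choice of distance measure matters.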
Non-Euclidean Distances
- **Cosine distance** = angle between vectors from the origin to the points in question
- **Edit distance** = number of inserts and deletes to change one string into another
- **Hamming Distance** = number of positions in which bit vectors differ
Think of a point as a vector from the origin \((0,0,...,0)\) to its location.
Two points’ vectors make an angle, whose cosine is the normalized dot-product of the vectors:
\[
\text{similarity} = \cos(\theta) = \frac{A \cdot B}{\|A\| \|B\|} = \frac{\sum_{i=1}^{n} A_i \times B_i}{\sqrt{\sum_{i=1}^{n} (A_i)^2} \times \sqrt{\sum_{i=1}^{n} (B_i)^2}}
\]
- **Example**: \(A = 00111; B = 10011\)
- \(A \cdot B = 2; \|A\| = \|B\| = \sqrt{3}\)
- \(\cos(\theta) = 2/3; \theta\) is about 48 degrees
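The 48-degree figure is easy to verify; a Python sketch using the normalized-dot-product formula above:

```python
import math

def cosine_similarity(a, b):
    # Normalized dot product of the two vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

A = [0, 0, 1, 1, 1]   # bit-vector 00111
B = [1, 0, 0, 1, 1]   # bit-vector 10011
cos = cosine_similarity(A, B)         # 2/3
theta = math.degrees(math.acos(cos))  # about 48.19 degrees
```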
Cosine Distance: Diagram
\[ d(A, B) = \theta = \arccos \left( \frac{A \cdot B}{\|A\| \|B\|} \right) \]
Cosine Distance is a Metric
- \( d(x,x) = 0 \) because \( \arccos(1) = 0 \)
- \( d(x,y) = d(y,x) \) by symmetry
- \( d(x,y) \geq 0 \) because angles are chosen to be in the range 0 to 180 degrees
- **Triangle inequality:** If we rotate an angle from \( x \) to \( z \) and then from \( z \) to \( y \), we can’t rotate less than from \( x \) to \( y \)
The edit distance of two strings is the number of inserts and deletes of characters needed to turn one into the other.
- Equivalently:
\[ d(x,y) = |x| + |y| - 2|\text{LCS}(x,y)| \]
- LCS = *Longest Common Subsequence* = any longest string obtained both by deleting from \( x \) and deleting from \( y \)
Example: LCS
- $x = abcde$ ; $y = bcduve$
- Turn $x$ into $y$ by deleting $a$, then inserting $u$ and $v$ after $d$
- Edit distance = 3
- Or, LCS($x,y$) = $bcde$
- Note that $d(x,y) = |x| + |y| - 2|\text{LCS}(x,y)|$
= $5 + 6 - 2*4 = 3$
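The identity $d(x,y) = |x| + |y| - 2|\text{LCS}(x,y)|$ can be evaluated with the standard LCS dynamic program; a Python sketch:

```python
def lcs_length(x, y):
    # dp[i][j] = length of the LCS of x[:i] and y[:j]
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if x[i] == y[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def edit_distance(x, y):
    # Insert/delete-only edit distance via the LCS identity
    return len(x) + len(y) - 2 * lcs_length(x, y)

# edit_distance("abcde", "bcduve") == 3, since LCS = "bcde" has length 4
```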
Edit Distance is a Metric
- $d(x,x) = 0$ because 0 edits suffice
- $d(x,y) = d(y,x)$ because insert/delete are inverses of each other
- $d(x,y) \geq 0$: no notion of negative edits
- **Triangle inequality**: changing $x$ to $z$ and then to $y$ is one way to change $x$ to $y$
Variants of the Edit Distance
- Allow insert, delete, and *mutate*
- Change one character into another
- Minimum number of inserts, deletes, and mutates also forms a distance measure
- Ditto for any set of operations on strings
- Example: substring reversal OK for DNA sequences
Hamming Distance
- **Hamming distance** is the number of positions in which bit-vectors differ.
- **Example:** \( x = 10101; \ y = 10011 \)
\[
d(x, y) = 2 \text{ because the bit-vectors } x \text{ and } y \text{ differ at } 3^{rd} \text{ and } 4^{th} \text{ position}
\]
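A few lines of Python suffice for Hamming distance; a sketch:

```python
def hamming(x, y):
    # Number of positions in which two equal-length bit strings differ
    assert len(x) == len(y)
    return sum(1 for a, b in zip(x, y) if a != b)

# hamming("10101", "10011") == 2 (positions 3 and 4 differ)
```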
Jaccard Similarity
- The Jaccard Similarity of two sets is the size of their intersection divided by the size of their union:
\[ \text{Sim}(C_1, C_2) = \frac{|C_1 \cap C_2|}{|C_1 \cup C_2|} \]
- The Jaccard Distance between sets is 1 minus their Jaccard similarity:
\[ d(C_1, C_2) = 1 - \frac{|C_1 \cap C_2|}{|C_1 \cup C_2|} \]
Example: Jaccard Distance
3 in intersection
8 in union
Jaccard similarity = 3/8
Jaccard distance = 5/8
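Both quantities follow directly from Python's set operations; a sketch (the concrete sets are our own illustration of "3 in intersection, 8 in union"):

```python
def jaccard_similarity(s1, s2):
    # |intersection| / |union|
    return len(s1 & s2) / len(s1 | s2)

def jaccard_distance(s1, s2):
    return 1 - jaccard_similarity(s1, s2)

s1 = {1, 2, 3, 4, 5}
s2 = {3, 4, 5, 6, 7, 8}   # 3 elements in common, 8 in the union
# jaccard_similarity(s1, s2) == 3/8, jaccard_distance(s1, s2) == 5/8
```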
Finding Similar Items
The best techniques depend on whether you are looking for items that are very similar or only somewhat similar.
We’ll cover the “somewhat” case first, then talk about “very”
Goal: Common text, not common topic
Special cases are easy:
- Identical documents
- Pairs where one document is completely contained in another
General case is hard:
- Many small pieces of one doc can appear out of order in another
Goal: Given a large number (N in the millions or even billions) of text documents, find pairs that are “near duplicates”
Applications:
- Mirror websites, or approximate mirrors
- Don’t want to show both in a search
- Plagiarism, including large quotations
- Web spam detection
- Similar news articles at many news sites
- Cluster articles by “same story”
3 Essential Steps for Similar Docs
1. **Shingling**: convert documents, emails, etc., to sets
2. **Minhashing**: convert large sets to short signatures, while preserving similarity.
3. **Locality-sensitive hashing**: focus on pairs of signatures likely to be similar
The set of strings of length $k$ that appear in the document
**Signatures**: short integer vectors that represent the sets, and reflect their similarity
**Candidate pairs**: those pairs of signatures that we need to test for similarity.
The Big Picture
Simple approaches:
- Document = set of words appearing in doc
- Document = set of “important” words
- Don’t work well for this application. Why?
- Need to account for ordering of words
- A different way: Shingles
A $k$-shingle (or $k$-gram) for a document is a sequence of $k$ tokens that appears in the document
- Tokens can be characters, words or something else, depending on application
- Assume tokens = characters for examples
**Example:** $k=2$; $D_1= \text{abcab}$
Set of 2-shingles: $S(D_1) = \{\text{ab, bc, ca}\}$
- **Option:** Shingles as a bag, count $\text{ab}$ twice
**Represent a doc by a set of its $k$-shingles**
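Character shingling is a one-liner in Python; a sketch reproducing the $D_1 = \text{abcab}$ example:

```python
def shingles(doc, k):
    # The set of all length-k substrings (character k-shingles)
    return {doc[i:i + k] for i in range(len(doc) - k + 1)}

# shingles("abcab", 2) == {"ab", "bc", "ca"}
```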
Working Assumption
- Documents that have lots of shingles in common have similar text, even if the text appears in different order.
- **Careful:** You must pick $k$ large enough, or most documents will have most shingles.
- $k = 5$ is OK for short documents.
- $k = 10$ is better for long documents.
Compressing Shingles
- To **compress long shingles**, we can **hash** them to (say) 4 bytes
- **Represent a doc by the set of hash values of its** $k$-shingles
- **Caveat**: Two documents could (rarely) appear to have shingles in common, when in fact only the hash-values were shared
Why is it better to hash 9-shingles (say) to 4 bytes than to use 4-shingles?
**Hint:** How random are the 32-bit sequences that result from 4-shingling?
Document → Shingling → MinHashing → Locality-sensitive Hashing
MinHashing
Document $D_1$ $\to$ set of $k$-shingles $C_1 = S(D_1)$
Equivalently, each document is a 0/1 vector in the space of k-shingles
- Each unique shingle is a dimension
- Vectors are very sparse
A natural similarity measure is the Jaccard similarity:
$$\text{Sim}(D_1, D_2) = \frac{|C_1 \cap C_2|}{|C_1 \cup C_2|}$$
Suppose we need to find near-duplicate documents among $N=1$ million documents.
Naïvely, we’d have to compute pairwise Jaccard similarities for every pair of docs:
- i.e., $N(N-1)/2 \approx 5 \times 10^{11}$ comparisons
- At $10^5$ secs/day and $10^6$ comparisons/sec, it would take 5 days
For $N = 10$ million, it takes more than a year...
Many similarity problems can be formalized as finding subsets of some universal set that have significant intersection.
We can encode sets using 0/1 (bit, boolean) vectors:
- One dimension per element in the universal set.
- Interpret set intersection as bitwise AND, and set union as bitwise OR.
Example: $C_1 = 10111$; $C_2 = 10011$
- Size of intersection = 3; size of union = 4, Jaccard similarity (not distance) = $3/4$
- $d(C_1, C_2) = 1 - \text{(Jaccard similarity)} = 1/4$
Rows = elements of the universal set
Columns = sets
1 in row $e$ and column $s$ if and only if $e$ is a member of $s$
Column similarity is the Jaccard similarity of the sets of their rows with 1
Typical matrix is sparse
### Example: Jaccard of Columns
- **Each document is a column:**
<table>
<thead>
<tr><th>$C_1$</th><th>$C_2$</th><th>$C_3$</th><th>$C_4$</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>1</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>0</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>0</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td><td>0</td></tr>
</tbody>
</table>
Note:
- We might not really represent the data by a boolean matrix.
- Sparse matrices are usually better represented by the list of places where there is a non-zero value.
\[
\text{Sim}(C_1, C_2) = \frac{2}{5} = 0.4
\]
When Is Similarity Interesting?
1. When the sets are so large or so many that they cannot fit in main memory
2. Or, when there are so many sets that comparing all pairs of sets takes too much time
3. Or both
1. **Compute signatures of columns** = small summaries of columns
2. Examine pairs of signatures to find similar signatures
- **Essential**: Similarities of signatures and columns are related
3. **Optional**: Check that columns with similar signatures are really similar.
1. Comparing all pairs of signatures may take too much time, even if not too much space
- A job for Locality-Sensitive Hashing
2. These methods can produce false negatives, and even false positives (if the optional check is not made)
Hashing Columns (Signatures)
- **Key idea**: “hash” each column \( C \) to a small signature \( h(C) \), such that:
1. \( h(C) \) is small enough that we can fit a signature in main memory for each column
2. \( Sim(C_1, C_2) \) is the same as the “similarity” of \( h(C_1) \) and \( h(C_2) \)
- **Goal**: Find a hash function \( h \) such that:
- if \( Sim(C_1, C_2) \) is high, then with high prob. \( h(C_1) = h(C_2) \)
- if \( Sim(C_1, C_2) \) is low, then with high prob. \( h(C_1) \neq h(C_2) \)
- Hash docs into buckets, and expect that “most” pairs of near duplicate docs hash into the same bucket
Min-hashing
- Clearly, the hash function depends on the similarity metric
- Not all similarity metrics have a suitable hash function
- There is a suitable hash function for Jaccard similarity
- Min-hashing
Imagine the rows of the boolean matrix permuted under random permutation $\pi$
Define “hash” function $h_\pi(C) = \text{the number of the first (in the permuted order $\pi$) row in which column } C \text{ has 1}$:
$$h_\pi(C) = \min \pi(C)$$
Use several (e.g., 100) independent hash functions to create a signature
Minhashing Example

Permutation $\pi = (1, 4, 3, 2, 6, 7, 5)$, i.e., row $i$ of the matrix is sent to position $\pi(i)$ (one of the three permutations used for the signature below)

Input matrix
<table>
<thead>
<tr><th>$C_1$</th><th>$C_2$</th><th>$C_3$</th><th>$C_4$</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>1</td><td>0</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td><td>0</td></tr>
</tbody>
</table>
Signature matrix $M$ (one row per permutation)
<table>
<tbody>
<tr><td>2</td><td>1</td><td>2</td><td>1</td></tr>
<tr><td>2</td><td>1</td><td>4</td><td>1</td></tr>
<tr><td>1</td><td>2</td><td>1</td><td>2</td></tr>
</tbody>
</table>
Under $\pi$ above, the first 1 (in permuted order) appears at position 1 for $C_1$ and $C_3$ and at position 2 for $C_2$ and $C_4$, giving the third row of $M$.
1/10/2011
Choose a random permutation $\pi$
Prob. that $h_{\pi}(C_1) = h_{\pi}(C_2)$ is the same as $Sim(C_1, C_2)$:
$$\Pr[h_{\pi}(C_1) = h_{\pi}(C_2)] = Sim(C_1, C_2)$$
Why?
- Let $X$ be a set of shingles, $X \subseteq [2^{64}]$, $x \in X$
- Then: $\Pr[\pi(x) = \min(\pi(X))] = 1/|X|$
- It is equally likely that any $x \in X$ is mapped to the min element
- Let $x$ be s.t. $\pi(x) = \min(\pi(C_1 \cup C_2))$
- Then either: $\pi(x) = \min(\pi(C_1))$ if $x \in C_1$, or $\pi(x) = \min(\pi(C_2))$ if $x \in C_2$
- So the prob. that both are true is the prob. $x \in C_1 \cap C_2$
- $\Pr[\min(\pi(C_1))=\min(\pi(C_2))] = |C_1 \cap C_2|/|C_1 \cup C_2| = Sim(C_1, C_2)$
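This equality can be checked empirically by averaging over many random permutations; a Python sketch (the sets, seed, and number of permutations are our own choices):

```python
import random

def minhash_signature(sets, universe, num_perms, seed=0):
    # One signature row per random permutation of the universe's rows
    rng = random.Random(seed)
    rows = sorted(universe)
    sig = []
    for _ in range(num_perms):
        perm = rows[:]
        rng.shuffle(perm)
        rank = {row: pos for pos, row in enumerate(perm)}
        # h_pi(C) = position of the first (in permuted order) row of C
        sig.append([min(rank[r] for r in s) for s in sets])
    return sig

def estimate_similarity(sig, i, j):
    # Fraction of permutations on which columns i and j agree
    return sum(1 for row in sig if row[i] == row[j]) / len(sig)

s1, s2 = {0, 1, 2, 3}, {1, 2, 3, 4}   # true Jaccard similarity = 3/5
sig = minhash_signature([s1, s2], set(range(5)), 1000)
est = estimate_similarity(sig, 0, 1)  # close to 0.6
```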
Given cols C₁ and C₂, rows may be classified as:
<table>
<thead>
<tr>
<th></th>
<th>C₁</th>
<th>C₂</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>b</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>c</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>d</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Also, a = # rows of type a, etc.
Note Sim(C₁, C₂) = a/(a + b + c)
Then: \( \Pr[h(C₁) = h(C₂)] = Sim(C₁, C₂) \)
- Look down the cols C₁ and C₂ until we see a 1
- If it’s a type-a row, then \( h(C₁) = h(C₂) \)
- If a type-b or type-c row, then not
The similarity of two signatures is the fraction of the hash functions in which they agree.
Min Hashing – Example
Input matrix (rows 1–7, columns $C_1$–$C_4$)
<table>
<thead>
<tr><th>Row</th><th>$C_1$</th><th>$C_2$</th><th>$C_3$</th><th>$C_4$</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>1</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>2</td><td>1</td><td>0</td><td>0</td><td>1</td></tr>
<tr><td>3</td><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>4</td><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>5</td><td>1</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>6</td><td>0</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>7</td><td>1</td><td>0</td><td>1</td><td>0</td></tr>
</tbody>
</table>
Signature matrix $M$ (one row per permutation)
<table>
<tbody>
<tr><td>2</td><td>1</td><td>2</td><td>1</td></tr>
<tr><td>2</td><td>1</td><td>4</td><td>1</td></tr>
<tr><td>1</td><td>2</td><td>1</td><td>2</td></tr>
</tbody>
</table>
Similarities:
<table>
<thead>
<tr>
<th></th>
<th>1-3</th>
<th>2-4</th>
<th>1-2</th>
<th>3-4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Col/Col</td>
<td>0.75</td>
<td>0.75</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Sig/Sig</td>
<td>0.67</td>
<td>1.00</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Minhash Signatures
- Pick (say) 100 random permutations of the rows
- Think of $\text{Sig}(C)$ as a column vector
- Let $\text{Sig}(C)[i] =$ the index of the first row that has a 1 in column $C$, in the order given by the $i$-th permutation
- Note: We store the sketch of document $C$ in ~100 bytes:
$$\text{Sig}(C)[i] = \min(\pi_i(C))$$
Suppose 1 billion rows
Hard to pick a random permutation from 1...billion
Representing a random permutation requires 1 billion entries
Accessing rows in permuted order leads to thrashing
A good approximation to permuting rows: pick 100 (?) hash functions
For each column $c$ and each hash function $h_i$, keep a “slot” $M(i, c)$
**Intent:** $M(i, c)$ will become the smallest value of $h_i(r)$ for which column $c$ has 1 in row $r$
i.e., $h_i(r)$ gives order of rows for $i$-th permutation
Implementation – (3)
for each row $r$
for each column $c$
if $c$ has 1 in row $r$
for each hash function $h_i$ do
if $h_i(r)$ is a smaller value than $M(i, c)$ then
$M(i, c) := h_i(r);$
Example
\[ h(x) = x \bmod 5 \]
\[ g(x) = (2x + 1) \bmod 5 \]
<table>
<thead>
<tr>
<th>Row</th>
<th>C1</th>
<th>C2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th></th>
<th>Sig1</th>
<th>Sig2</th>
</tr>
</thead>
<tbody>
<tr>
<td>h(1) = 1</td>
<td>1</td>
<td>-</td>
</tr>
<tr>
<td>g(1) = 3</td>
<td>3</td>
<td>-</td>
</tr>
<tr>
<td>h(2) = 2</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>g(2) = 0</td>
<td>3</td>
<td>0</td>
</tr>
<tr>
<td>h(3) = 3</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>g(3) = 2</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>h(4) = 4</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>g(4) = 4</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>h(5) = 0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>g(5) = 1</td>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
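The slot-update pseudocode from the implementation slides runs directly on this example; a Python sketch with $h(x) = x \bmod 5$ and $g(x) = (2x+1) \bmod 5$:

```python
INF = float("inf")

def minhash_slots(columns, num_rows, hash_funcs):
    # columns[c] = set of rows in which column c has a 1 (rows numbered 1..n)
    # M[i][c] ends up as the smallest h_i(r) over rows r with a 1 in column c
    M = [[INF] * len(columns) for _ in hash_funcs]
    for r in range(1, num_rows + 1):
        for c, ones in enumerate(columns):
            if r in ones:
                for i, h in enumerate(hash_funcs):
                    M[i][c] = min(M[i][c], h(r))
    return M

h = lambda x: x % 5
g = lambda x: (2 * x + 1) % 5
M = minhash_slots([{1, 3, 4}, {2, 3, 5}], 5, [h, g])
# M == [[1, 0], [2, 0]] -- the final signatures in the table above
```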
Often, data is given by column, not row
- E.g., columns = documents, rows = shingles
If so, sort matrix once so it is by row
And *always* compute $h_i(r)$ only once for each row.
Algorithm efficiency can be measured in terms of:
- Time
- Space
- Other resources such as processors, network packets, etc.
Algorithms analysis tends to focus on time:
- Techniques for measuring all resources are basically the same
- Historically, time dominated
The efficiency of a program or system is affected by:
- Programming Language
- Physical characteristics of the computer
- Room temperature
- Amount of data to process (or actual parameters)
- Algorithms used
In the real world all factors are considered, and any one can dominate in any particular situation.
When developing algorithms in the abstract, most such factors are ignored because they are:
- Out of our control, as programmers and algorithm developers.
- Difficult to predict.
- For that reason we say they are arbitrary.
Consequently, we are interested in algorithm efficiency, not program or system efficiency.
The efficiency of an algorithm is specified as a **running time**, sometimes also called **complexity**.
In our abstract world, the **input length** or parameters have the most influence on running time of an algorithm.
The running time of an algorithm measures:
- Total number of *operations* executed by an algorithm, or
- Total number of *statements* executed by an algorithm
Running times can be based on:
- Worst case (most frequently used)
- Average case (most difficult)
- Best case
Example – Inputting $n$ integers:
```java
public static void inputInts(int n) {
    java.util.Scanner kb = new java.util.Scanner(System.in);
    int x;
    int i;
    i = 1;
    while (i <= n) {
        x = kb.nextInt();
        i = i + 1;
    }
}
```
Total number of statements executed is approximately $T(n) = 3n+2$
Running time is therefore $O(n)$
How many operations are executed by the above algorithm?
Another Example:
```java
public static void inputInts(int n) {
for (int i = 0; i < n; i++) {
System.out.println(i);
}
for (int i = 0; i < 2*n; i++) {
System.out.println(2*n-i+1);
}
}
```
Total number of statements executed is approximately $T(n) = 9n+4$
Running time is therefore $O(n)$
Another Example:
```java
public static void inputInts(int n) {
    int j = 0; // declared here so it is still in scope after the inner loop
    for (int i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            System.out.println(i);
            System.out.println(i+j);
        }
        System.out.println(j);
    }
    System.out.println("All done!");
}
```
Total number of statements executed is approximately $T(n) = 4n^2 + 5n + 2$
Running time is therefore $O(n^2)$
Another Example:
```java
public static void inputInts(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                System.out.println(i);
                System.out.println(j);
                System.out.println(k);
            }
            System.out.println("All done inner loop!");
        }
        System.out.println("All done middle loop!");
    }
    System.out.println("All done outer loop!");
}
```
Total number of statements executed is approximately … *good luck with that!!!*
Running time is therefore $O(n^3)$
Another Example:
```java
public static void inputInts(int n) {
int x, y;
x = n;
y = 10 * n;
x = x + y;
System.out.println(x);
}
```
Total number of statements executed is $T(n) = 4$
Running time is therefore $O(1)$
“Big Oh” Notation
- Note that it is tempting to think that $O(f(n))$ means “approximately $f(n)$.”
- Although used in this manner frequently, this interpretation or use of “big Oh” is absolutely incorrect.
- Most frequently, $O(f(n))$ is pronounced “on the order of $f(n)$” or “order $f(n)$,” but keep in mind that this also does not mean “approximately $f(n)$.”
Definitions:
Let $f(n)$ and $g(n)$ be functions:
- $f(n)$ is said to be less than $g(n)$ if $f(n) \leq g(n)$ for all $n$.
For example, $n^2$ is less than $n^4 + 1$.
To within a constant factor, \( f(n) \) is said to be less than \( g(n) \) if there exists a positive constant \( c \) such that \( f(n) \leq cg(n) \) for all \( n \).
Example:
\[ g(n) = 3n^2 + 2 \]
\[ f(n) = 6n^2 + 3 \]
Note:
6\(n^2 + 3\) is not less than 3\(n^2 + 2\)
However:
To within a constant factor 6\(n^2 + 3\) is less than 3\(n^2 + 2\)
Proof:
Let \(c=9\), then we see that:
\[ 6n^2 + 3 \leq 9(3n^2 + 2) = 27n^2 + 18 \]
By the way, for these two functions it is also the case that:
To within a constant factor $3n^2 + 2$ is less than $6n^2 + 3$
**Proof:**
Let $c=1$, then we see that:
$$3n^2 + 2 \leq 1(6n^2 + 3)$$
$$= 6n^2 + 3$$
In fact $3n^2 + 2$ is actually less than $6n^2 + 3$
In other words, *asymptotically*, there ain’t much difference in the growth of the two functions, as $n$ goes to infinity.
(always enjoy using that word in class)
Question:
\[ g(n) = n^2 \]
\[ f(n) = 2^n \]
Is \( f(n) \), to within a constant factor, less than \( g(n) \)?
In other words, is there a constant \( c \) such that:
\[ f(n) \leq cg(n) \]
No! Not even if we let \( c = 1,000,000, \) or larger!
Definition: $f(n)$ is said to be $O(g(n))$ if there exists two positive constants $c$ and $n_0$ such that $f(n) \leq cg(n)$ for all $n \geq n_0$.
Intuitively, we say that, as $n$ goes to infinity, $f(n)$ is, to within a constant factor, less than $g(n)$.
Stated another way, as $n$ goes to infinity, $f(n)$ is, to within a constant factor, bounded from above by $g(n)$.
- From the definition of big-O, it should now be clear that saying $T(n)$ is $O(g(n))$ means that $g(n)$ is an *upper bound* on $T(n)$, and does not mean approximately.
- An analogy:
- Consider two integers $x$ and $y$, where $x \leq y$.
- Is $y$ approximately equal to $x$?
Of course not!
- Similarly for the definition of big-O.
Example:
\[ g(n) = n^2 \]
\[ f(n) = n^2 + 1 \]
Claim:
\[ n^2 + 1 \text{ is } O(n^2) \]
Proof:
Let \( c = 2 \) and \( n_0 = 1 \), then we see that:
\[ n^2 + 1 \leq 2n^2 \text{ for all } n \geq n_0 = 1. \]
*Note that in this case \( f(n) \) is not less than \( g(n) \), even to within a constant.*
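The chosen witnesses can be spot-checked numerically; a Python sketch (a finite check over a range of $n$, not a proof, but enough to catch a bad choice of witnesses):

```python
def dominated(f, g, c, n0, n_max=1000):
    # Check f(n) <= c * g(n) for every n in [n0, n_max]
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: n ** 2 + 1
g = lambda n: n ** 2

# dominated(f, g, c=2, n0=1) is True: n^2 + 1 <= 2n^2 whenever n >= 1
# dominated(f, g, c=1, n0=1) is False: n^2 + 1 > n^2 for every n
```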
The original procedure is still correct (choose the highest order term, drop the constant).
More specifically, this procedure is consistent with the formal definition of big-O.
\[ f(n) = n^2 + 1 \quad f(n) \text{ is } O(n^2) \]
\[ g(n) = n + 6 \quad g(n) \text{ is } O(n) \]
\[ h(n) = n^{35} + n^{16} + n^4 + 3n^2 + 1025 \quad h(n) \text{ is } O(n^{35}) \]
\[ r(n) = (1/2)n + 300 \quad r(n) \text{ is } O(n) \]
\[ s(n) = 4n^2 + 2n + 25 \quad s(n) \text{ is } O(n^2) \]
And yes, the depth of loop nesting, at least on simple programs, does indicate the running time.
More generally:
If: \[ f(n) = a_{m-1}n^{m-1} + a_{m-2}n^{m-2} + \ldots + a_1n^1 + a_0 \]
Then: \[ f(n) \text{ is } O(n^{m-1}). \]
Explanation:
Let: \[ c = |a_{m-1}| + |a_{m-2}| + \ldots + |a_1| + |a_0| \]
\[ n_0 = 1 \]
Then it will be the case that:
\[ f(n) \leq cn^{m-1} \]
\[ \text{for all } n \geq n_0 \]
Why is $f(n) \leq cn^{m-1}$ for all $n \geq n_0 = 1$?
$$cn^{m-1} = |a_{m-1}|n^{m-1} + |a_{m-2}|n^{m-1} + \ldots + |a_1|n^{m-1} + |a_0|n^{m-1}$$
$$f(n) = a_{m-1}n^{m-1} + a_{m-2}n^{m-2} + \ldots + a_1n + a_0$$
Comparing the two term by term: for $n \geq 1$, each $a_in^i \leq |a_i|n^{m-1}$, so $f(n) \leq cn^{m-1}$.
Another thing worth noting:
If, for example, \( f(n) = 5n^3 + 35n^2 + 3n + 1025 \)
Then \( f(n) \) is \( O(n^3) \), but also \( O(n^4) \), \( O(n^5) \), etc.
This may seem counter-intuitive, but it is exactly what we want big-O to be, i.e., an upper bound.
### Common Running Times:
- **Polynomial**
- $O(1)$: constant
- $O(\log n)$: logarithmic
- $O(n)$: linear
- $O(n \log n)$
- $O(n^2)$: quadratic
- $O(n^2 \log n)$
- $O(n^3)$
- $O(n^4)$
- **Exponential**
- $O(2^n)$
- $O(3^n)$
Let $n$ be the length of the array $A$.
What is the (worst case) running time of sequential/linear search?
⇒ What is the worst case scenario?
⇒ We will focus on the "==" operation (not that it's special, or anything...)
```java
public boolean inList(int x, int[] A)
{
int i;
i=0; // notice for -> while
while (i<=A.length-1) {
if (x == A[i])
return true;
i++;
}
return false;
}
```
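The answer the slides are building toward: the worst case is when `x` is not in the array, so the "==" operation executes once per element, i.e. $n$ times, which is $O(n)$. A small sketch (not from the slides; the counter is ours) that makes the count visible:

```java
// Sketch: instrument linear search to count "==" evaluations.
public class LinearSearchCount {
    static int comparisons; // number of "==" evaluations performed

    static boolean inList(int x, int[] a) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;           // one "==" per loop iteration
            if (x == a[i]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] a = {3, 1, 4, 1, 5, 9, 2, 6};
        comparisons = 0;
        inList(7, a);                    // worst case: 7 is absent
        System.out.println(comparisons); // n = 8 comparisons, hence O(n)
    }
}
```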
What is the (worst case) running time of selection sort?
⇒ What is the worst case scenario?
⇒ Again, we will focus on the "<" operator.
```java
// Selection sort
public static void sort(int[] a) {
    int i, j, minPos, temp;
    for (i=0; i<a.length-1; i++) {
        // Find the position of the value that belongs in position i
        minPos = i;
        for (j=i+1; j<=a.length-1; j++)
            if (a[j] < a[minPos])
                minPos = j;
        // Swap the values in positions i and min
        temp = a[i];
        a[i] = a[minPos];
        a[minPos] = temp;
    }
}
```
Consider the number of comparisons made in the *worst case*:
- $n - 1$ iterations performed by the outer loop
- First iteration: how many comparisons?
- Second iteration: how many comparisons?
\[
(N - 1) + (N - 2) + \ldots + 2 + 1 = N(N-1)/2 = (N^2 - N)/2
\]
Which is $O(n^2)$
*Question:*
- What about the other operations – shouldn’t they be counted also?
- We could count those also, but the end result would be the same, at least *asymptotically* it would.
Other Questions:
- Should we distinguish between different types of operations, such as comparisons vs. assignments?
- In some cases, yes, but here the distinction is not important; all operations are in main memory, and operate on a small, fixed number of bytes.
- File I/O or network message operations would be more time consuming.
- What about best and average cases?
Best case:
- The assignment to `minPos` never takes place, which would appear more efficient, but the `<` operator is still executed the same number of times.
- Consequently, the end result is the same - $O(n^2)$
Average case:
- Same thing - $O(n^2)$
What is the (worst case) running time of insertion sort?
```java
public static void insertionSort(int[] a) {
    int j, v;
for (int i=1; i<=a.length-1; i++) {
j=i;
v = a[j];
while ((j>0) && (v<a[j-1])) {
a[j]=a[j-1];
j=j-1;
}
a[j] = v;
}
}
```
Consider the number of times the condition in the while-loop is evaluated in the worst case:
- \( N - 1 \) iterations performed by the outer loop.
- First iteration: how many evaluations?
- Second iteration: how many evaluations?
- ...
- \( 2 + \ldots + (N - 2) + (N - 1) + N = \)
- \( (1 + 2 + \ldots + N) - 1 = \)
- \( N(N+1)/2 - 1 = \)
- \( (1/2)N^2 + (1/2)N - 1 \)
Which is \( O(n^2) \)
- How about best case?
- How about average case?
What is the worst-case running time of bubble sort?
```java
public static void bubbleSort1(int[] a)
{
int temp;
for (int i=1; i<a.length; i++) {
for (int j=0; j<a.length-i; j++) {
if (a[j] > a[j+1]) {
temp = a[j];
a[j] = a[j+1];
a[j+1] = temp;
}
}
}
}
```
Question: Is there a distinction between worst, best and average cases?
Consider the number of comparisons made by the if-statement in the *worst case*:
- $n - 1$ iterations performed by the outer loop
- First iteration: how many comparisons?
- Second iteration: how many comparisons?
\[
(N - 1) + (N - 2) + \ldots + 2 + 1 = \frac{N(N-1)}{2} = \frac{N^2 - N}{2}
\]
Which is $O(n^2)$
What about the other versions of bubble sort?
Second version: (fewer bubbles)
```java
// This version stops when a pass occurs with no swaps.
public static void bubbleSort2(int[] a) {
    int i, temp;
    boolean doMore;
    i = 1;
    doMore = true;
    while ((i<a.length) && (doMore)) {
        doMore = false;
        for (int j=0; j<a.length-i; j++)
            if (a[j] > a[j+1]) {
                temp = a[j];
                a[j] = a[j+1];
                a[j+1] = temp;
                doMore = true;
            }
        i = i + 1;
    }
}
```
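The early-exit flag changes the best case: on an already sorted array the first pass makes $n - 1$ comparisons, finds no swaps, and stops, giving $O(n)$; the worst case (reverse order) is still $O(n^2)$. A sketch (not from the slides) that counts the comparisons for both inputs:

```java
// Sketch: comparison counts for the early-exit bubble sort.
public class BubbleCount {
    static int countComparisons(int[] a) {
        int comparisons = 0, i = 1, temp;
        boolean doMore = true;
        while ((i < a.length) && doMore) {
            doMore = false;
            for (int j = 0; j < a.length - i; j++) {
                comparisons++;           // one ">" comparison per inner step
                if (a[j] > a[j + 1]) {
                    temp = a[j]; a[j] = a[j + 1]; a[j + 1] = temp;
                    doMore = true;
                }
            }
            i++;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // sorted: one pass, n-1 = 5 comparisons
        System.out.println(countComparisons(new int[]{1, 2, 3, 4, 5, 6}));
        // reversed: 5+4+3+2+1 = 15 comparisons
        System.out.println(countComparisons(new int[]{6, 5, 4, 3, 2, 1}));
    }
}
```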
Analysis of Binary Search
- Recall binary search:
- Suppose we are searching for:
- 45
- 23
- 29
- 31
Summary of binary search:
- At each step there is a segment of the array being searched, which is indicated by a “low” position and a “high” position.
- Look at the middle element between the “low” and “high.”
- If the middle element is the one you are looking for, stop.
- If not, repeat the search on the upper or lower half of the current segment of the array being searched.
See the program at:
- [http://cs.fit.edu/~pbernhar/teaching/cse1001/binarySearch.txt](http://cs.fit.edu/~pbernhar/teaching/cse1001/binarySearch.txt)
Logarithms
“Fear not the logarithm, for it is your friend!”
- The *logarithm* of a number to a given base is the exponent to which the base must be raised in order to produce that number.
- In other words, \( \log_a(b) \) is the power to which “\( a \)” must be raised in order to produce “\( b \).”
- What is:
\[
\begin{align*}
\log_{10}(1000) &= \ ? \\
\log_3(9) &= \ ? \\
\log_5(1) &= \ ? \\
\log_2(16) &= \ ? \\
\log_2(32) &= \ ? \\
\log_2(1024) &= \ ?
\end{align*}
\]
Of particular interest in computer science are base-2 logarithms.
\[
\begin{align*}
\log_2(1) &= 0 & \log_2(64) &= 6 \\
\log_2(2) &= 1 & \log_2(128) &= 7 \\
\log_2(4) &= 2 & \log_2(256) &= 8 \\
\log_2(8) &= 3 & \log_2(512) &= 9 \\
\log_2(16) &= 4 & \log_2(1024) &= 10 \\
\log_2(32) &= 5 & \log_2(2048) &= 11 \\
\end{align*}
\]
Notice that the logarithm is the inverse of exponentiation.
\[\log_2(2^k) = k\]
The logarithm function is therefore a very slowly growing function.
- Exponentiation is a very fast growing function.
- See the wikipedia page on logarithms for the graph.
As stated previously, $\log_2(n)$ is the power to which 2 must be raised in order to produce “$n$.”
Another way to state this is:
"$\log_2(n)$ is the number of times you can divide $n$ by 2 and still get a number $\geq 1$."
$\log_2(8) = 3$
$\log_2(16) = 4$
$\log_2(32) = 5$
$\log_2(2^k) = k$
For values of $n$ that are not powers of two, this count gives the floor of the logarithm:
$\lfloor \log_2(9) \rfloor = 3$
$\lfloor \log_2(30) \rfloor = 4$
$\lfloor \log_2(2000) \rfloor = 10$
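The "divide by 2 until you drop below 1" description translates directly into code. A minimal sketch (not from the slides; names are ours) that computes this count, which equals $\lfloor \log_2 n \rfloor$:

```java
// Sketch: "the number of times you can divide n by 2 and still get a
// number >= 1" is floor(log2(n)).
public class FloorLog2 {
    static int floorLog2(int n) {
        int count = 0;
        while (n / 2 >= 1) { // integer division; stop once the result drops below 1
            n = n / 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(floorLog2(8));    // 3 (exact: 8 = 2^3)
        System.out.println(floorLog2(9));    // 3 (floor)
        System.out.println(floorLog2(2000)); // 10 (2^10 = 1024 <= 2000 < 2048)
    }
}
```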
Why is this important? Recall binary search…
➤ [http://cs.fit.edu/~pbernhar/teaching/cse1001/binarySearch.txt](http://cs.fit.edu/~pbernhar/teaching/cse1001/binarySearch.txt)
Running Time of Binary Search
- Each iteration of the main loop performs one comparison and divides the array segment to be searched in half.
- At the start of the first iteration, the segment has length n.
- At the start of the second iteration, the segment has length n/2.
- At the start of the third iteration, the segment has length n/4.
- ...
- In the worst-case, the loop will continue until the array has length < 1.
- How many times does the loop iterate, i.e., how many times can the length of the array be reduced by ½ and still result in an array of length >= 1?
- \( \log_2(n) \)
- Since each iteration performs a constant number of operations, the running time of binary search is therefore \( O(\log n) \)
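The course's binary search is in the linked file; as a sketch (names and the instrumentation are ours), a standard iterative version with an iteration counter shows the logarithmic behavior. For an array of length $n$ the loop runs at most $\lfloor \log_2 n \rfloor + 1$ times:

```java
// Sketch: iterative binary search instrumented to count loop iterations.
public class BinarySearchCount {
    public static void main(String[] args) {
        int n = 1024;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = 2 * i; // sorted, even numbers only

        int low = 0, high = n - 1, iterations = 0;
        int x = 5;                                // odd, hence absent: worst case
        boolean found = false;
        while (low <= high) {
            iterations++;
            int mid = (low + high) / 2;
            if (a[mid] == x)      { found = true; break; }
            else if (a[mid] < x)  low = mid + 1;  // search upper half
            else                  high = mid - 1; // search lower half
        }
        System.out.println(found);
        System.out.println(iterations); // at most log2(1024) + 1 = 11
    }
}
```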
Using SVG and XSLT for graphic representation
Andres Baravalle
Department of Computer Science - University of Turin
Italy
Biography
Andres Baravalle is a full-time Ph.D. student at Turin University, Department of Informatics. His current research topics are artificial intelligence, human-computer interaction, and usability. He took his degree summa cum laude in Communications at Turin University. His degree thesis discusses different technologies for information storage (databases, XML, etc.) and languages and techniques for multimodal web interaction with desktop and mobile users; economic aspects concerning implementation are also covered. Among his key qualifications he lists a deep knowledge of the main languages and technologies related to web development. He has worked on several projects as a consultant, among them the development of multimodal interaction systems based on XML and server-side technologies for the web sites costameno.it and loescher.it. Publications: "Remote web usability testing: a proxy approach", to be published at HCI2003, Crete; "Remote web usability testing", Measuring Behaviour 2002, Amsterdam; "A usable web for long-stay hospitalised children", HCI2002, London.
Marco Gribaudo
Department of Computer Science - University of Turin
Italy
Biography
Marco Gribaudo is currently a researcher at the Department of Computer Science, University of Turin.
Vitaveska Lanfranchi
Department of Computer Science - University of Turin
Italy
Biography
Vitaveska Lanfranchi is a full-time Ph.D. student at Turin University, Department of Informatics. Her current research topics are artificial intelligence, human-computer interaction, and usability. She took her degree summa cum laude in Communications at Turin University. Her degree thesis discusses different languages and techniques for multimodal web interaction with desktop and mobile users. Among her key qualifications she lists a deep knowledge of the main languages and technologies related to the semantic web. She has worked on several projects as a consultant, among them the development of multimodal interaction systems based on XML and server-side technologies for the web sites costameno.it and loescher.it. Selected publications: "Remote web usability testing: a proxy approach", to be published at HCI2003, Crete; "Remote web usability testing", Measuring Behaviour 2002, Amsterdam; "A usable web for long-stay hospitalised children", HCI2002, London.
**Tiziana Sandri**
Communication Science - University of Turin
Italy
*Biography*
Tiziana Sandri is currently a student in Communication Science at the University of Turin.
---
**Table of Contents**
*Using SVG and XSLT for graphic representation*
- Introduction
- State of the art
- Visualization model
- Source primitives
- Modification primitives
- Disposition primitives
- Action primitives
- Implementation
- Examples
- Conclusions
*Bibliography*
Using SVG and XSLT for graphic representation
Introduction
In a world based on communication, the way in which information is presented plays a role as important as the actual content being transmitted.
On many occasions an impressive presentation may be the key to the success of a particular strategy. This is also true for scientific data obtained from complex experiments or computer simulations. Much software and many technologies exist to aid the construction of visually appealing presentations. Most of them, however, focus on the production of graphs such as function plots, histograms, diagrams and so on. Even though these kinds of diagrams are extremely useful for the experts involved in a particular area, they may be very hard to understand for people less familiar with that particular subject.
For this reason alternative representation forms exist, such as pictograms, where the information is visualized by real images rather than by abstract representations.
The main problem with the realization of such visualizations is that they usually have to be drawn by hand by the presenter after the final data have been obtained. If, for some reason, a mistake in the data is discovered and the corresponding values have to be recomputed, the pictograms must be redrawn. Also, if the data change rapidly, the time required to draw them as pictograms may be too long to follow the changes. Usually, to avoid these problems, a user who wishes to use this type of representation has to write ad hoc programs that produce the images starting from the data.
In this paper we present an alternative approach based on XML, XSL and SVG that can help with the realization of pictograms for rapidly changing data.
Data are written in an XML file using a very simple name-value representation, while a style definition, also written in XML, specifies the way in which the pictograms are to be constructed. The data and its pictogram style definition file are combined by an XSLT stylesheet to produce an SVG representation of the final image.
The proposed approach has several advantages:
- Interoperability. The use of standard technologies such as XML, XSLT and SVG makes the approach portable to different platforms with small effort.
- Efficiency. The time required to update a pictogram after a change in the data is comparable to that achieved using special-purpose software.
- Portability. The SVG images produced may be easily included in many forms of presentation, such as texts, slides and web sites (in the last case, even on the fly), and are compatible with the majority of authoring tools.
**State of the art**
Scientific visualization has become a strategic issue in human-computer interaction: in many situations the main problem is not to gather experimental data, but to interpret and display the results in a suitable form [1].
Raw data are often difficult for the intended audience of a research project to interpret, or they are simply not as appealing as the author would like. Converting raw data into easily understandable and readable pictures and animations, while it may seem inadequate to some authors, is a pragmatic approach that can be a good choice in many cases, as a more appealing presentation can be a way to convey the results of a research project to a wider audience.
Unfortunately the interest of software companies in scientific visualization for research purposes seems to be low, probably because the target group of users is small and often unwilling to invest economic resources in the presentation of their work.
Undoubtedly, generic office applications such as Microsoft Excel (included in Microsoft Office) or Calc (included in OpenOffice) are the most widely used software for graph representation, at least among non-specialized users. This kind of software targets non-specialized users, both in the number of features and in the complexity it can handle. Researchers need more specialized and powerful software and can sacrifice simplicity of use in order to visualize information in the most suitable way.
Since the visualization job does not constitute only the final phase of a research project, but is also a fundamental element of every stage of the experimentation, it is particularly important that even the most complex visualizations can be managed simply and in real time.
These features have been implemented in software such as Vis5d, Aspen 2000 and Por 2000.
Vis5d is a software package developed by the SSEC (Space Science and Engineering Center) of the University of Wisconsin-Madison; it offers the possibility to animate in real time and to visualize large amounts of data in 3D. Aspen 2000 is a software package for the analysis of bulkheads and walls that allows precise visualization of cross sections with a high degree of detail and editing of all the graphical details. Por 2000 is a software package for the planning and verification of buildings; it allows graphical modification of the elements and provides visual control for editing elements.
All of these tools allow users to collect data, to analyse them and then to create two-dimensional or three-dimensional graphs: the graphs can be bubble charts, histograms or pie charts, but they are all abstract diagrams that do not have any connection with the nature of the data.
The diagrams produced adapt to whichever type of numerical data is supplied, but do not provide an intuitive point of view on the described data: the true nature of the data is not immediate or easy to understand.
In the meteorological and oceanographic fields, on the other hand, several applications of scientific visualization have been developed. Researchers in these fields, who require a clear and complete comprehension of the studied phenomena, need a platform able to handle a huge quantity of data, analyzing it through appropriate visualization techniques.
Moreover, such a platform must allow visualizing many different variables at the same time, without deteriorating the quality of the produced images and without increasing the complexity.
Some researchers have suggested adopting an XML-based language to semantically describe complex data [2]: since XML is a powerful language that allows creating custom tags and conveys semantic value to the contained data, it is often used to describe data that can be difficult to understand, such as chemical or mathematical formulas [3][4].
In the mathematical and chemical fields the visualization of the produced results is very important [5]: some recent research has proposed adopting an XML dialect for describing data and transforming it into other XML-based languages specialized for visualization, such as VRML [6] or SVG.
SVG has also been used as a visualization language for different types of data, such as census data [7], reverse engineering data [8] or medical data [9].
In this paper we present an XML-based framework that can be used to produce graphical representations of scientific data. Rather than producing ordinary histograms and function diagrams, the approach tries to represent the information in a more graphically appealing and easy-to-understand way. The proposed framework is able to keep the value of the data strictly separated from the visual form of its representation (positions of elements, colors, visual representation, etc.).
Since XML can be used for describing complex data information, we represent every level of the graphic representation with an XML structure.
To describe our architecture we defined the following XML dialects, each with different markup tags reflecting the semantic values of the elements.
- **Data definition level.** Used to define the value of the data that can be used in the graphic representation.
- **Data representation level.** Used to define the graphic representation, it defines how the values expressed by the data definition level are represented.
Both the data representation and the data definition files are based on a DTD that imposes the constraints.
XSLT is then used to output an SVG file derived from the two files describing the graphic representation.
**Visualization model**
The data representation level is the core of the system, and defines a powerful language to represent:
- **Source primitives.** Used to define the source of the graphic elements, for example bitmap image files or vectorial SVG images.
- **Modification primitives.** Used to define the modifications that can affect a graphic element, for example rotation, scaling or repetition.
- **Disposition primitives.** Used to define the possible dispositions along the x, y and z axes, for example to impose an order on the representation of elements.
- **Action primitives.** Used to define the possible actions that can be activated by graphic elements for different user behaviors. For example a mouse action can activate a link to a different resource, or can change the value of any of the other primitives of the data structure, as image source or disposition, or can show a tooltip.
**Source primitives**
The main idea is that numeric data can be easily understood if graphically represented in a well-known context.
We therefore decided to associate with every numeric datum an image that visually represents a contextual reference to the described subject. Dynamic images, superimposed on the static image, help represent variations of the numeric data.
It is very important to choose an adequate image that can be intuitively understood by any user, according to his or her cultural background. Knowledge of the user's cultural background and, if possible, usability tests can help decide which graphical elements to use.
In the example below we wanted to represent the lowest and the highest speed of a car.
We chose to represent the speed of the car using a static image of a speedometer: the lowest and highest speeds are rendered using two dynamic hands.
**Modification primitives**
Elements are provided to describe the possible modifications of the images, such as scaling, rotating and skewing.
This allows modifying basic properties of the images, in order to make the relation between them, and therefore between the data, easy and meaningful.
The most intuitive operation is to change the width and height of an image in order to represent an increase in a phenomenon. The example below represents a cloudy sky.
When the weather conditions change, it is possible to represent this variation simply by widening the image of the cloud over the sun:
To define these operations we used a mathematical formula that establishes how the selected dimension changes proportionally to the variation of the numeric data:
\[ h = s(d-c) \]
where \( h \) represents the dimension to be modified (height, width, rotation angle), \( s \) is the scale, i.e. the constant used to multiply the dimension, \( d \) represents the numeric data value, and \( c \) represents the starting point of the represented unit of measure. The formula is stored in our formalism.
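A worked instance may help. Note this is our reading of the formula, taking \( d \) as the numeric data value and matching the scale and center attributes used in the temperature example later in the paper:

```latex
% With scale s = 5 and offset c = 0 (scale="5", center="0" in the style file),
% a datum d = 2 (the minimum temperature in the first example) yields
h = s(d - c) = 5 \cdot (2 - 0) = 10
```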
**Disposition primitives**
Disposition primitives permit repeating images or placing them in a sequence, for example from the smallest to the largest value, or from the oldest to the newest data.
**Action primitives**
Action primitives permit associating responses with specific user behaviors.
For example, it is possible to underline or highlight an image or part of it, or to add tooltips and links.
**Implementation**
The proposed approach may be implemented in several different ways:
1. Using a high-level programming language: a software tool that allows visually building the pictograms, starting from the data, may be written in a general-purpose programming language that supports XML, such as Java or C++. The tool may use the proposed XML documents to store the style definitions and to import the data.
2. Using the scripting languages of some advanced graphic format: for example, an XML-aware browser plug-in, such as Macromedia Flash, can be used to read both the data and the pictogram style XML specification. Using the scripting language of the graphic format, the pictogram may be generated by parsing and interpreting both files.
3. Using a generic XML transformation process to create a graphic file in a suitable format: for example, we can use XSLT to produce an SVG representation of the pictogram obtained by combining the data with its style.
Each of the alternatives has its own advantages and disadvantages. Using an interpreter (as in cases 1 and 2), the interactive features (such as the ones defined by the action primitives) may be easier to implement. Also, high-level programming languages are the ones that can provide the most efficient implementation. However, an interpreter in a programming language (case 1) is more expensive to develop than a script in an advanced graphic format (case 2), which in turn is more expensive to develop and debug than a text transformation specification (case 3). An XSLT transformation may be easier to implement server-side, and simplifies the distribution of the produced pictograms.
We have prototyped several implementations: a Flash movie that can read the data and style specifications and represent them, and an XSL stylesheet that combines them to produce an SVG output. In this paper we will concentrate on the latter.
We decided to adopt SVG because it meets two main requirements:
- SVG is data-oriented and XML-based: it is simple to generate an SVG image file from native XML.
- SVG is interactive: it allows manipulating data and associating responses with specific user behaviors.
The transformation of XML representations of scientific data into SVG is made through an XSLT stylesheet.
For every defined primitive we created an XSLT transformation that converts XML into SVG code:
- Source primitives are converted into SVG `<image>` elements.
- Modification primitives are implemented through the use of the transform attribute, with rotate, scale or matrix values.
- Disposition primitives that repeat images are implemented through the `<pattern>` element.
- Action primitives are implemented using the ECMAScript language.
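As an illustration of the mapping just described, a minimal XSLT fragment in this spirit might look as follows. This is a sketch under our own assumptions, not the paper's actual stylesheet: the `value` element and its `num` attribute come from the data files shown later, while the image name and the division by 100 are invented for the example.

```xml
<!-- Illustrative sketch: turn each <value> into a scaled SVG <image>
     (a source primitive plus a scale modification primitive). -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns="http://www.w3.org/2000/svg"
                xmlns:xlink="http://www.w3.org/1999/xlink">
  <xsl:template match="value">
    <image xlink:href="stu.gif" width="50" height="90">
      <!-- scale proportionally to the numeric datum -->
      <xsl:attribute name="transform">
        <xsl:value-of select="concat('scale(', @num div 100, ')')"/>
      </xsl:attribute>
    </image>
  </xsl:template>
</xsl:stylesheet>
```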
Examples
In this chapter we present some examples simulating real situations.
The first example concerns weather forecasting.
We want to represent the maximum and minimum temperature in a city.
Below is listed the XML file that defines numeric values:
```xml
<?xml version="1.0" standalone="no"?>
<?stile stiletemp.xml ?>
<!DOCTYPE datafile SYSTEM "data.dtd">
<datafile>
<data>
<value id="tempmin" num="2"/>
<value id="tempmax" num="6"/>
</data>
</datafile>
```
We chose to represent the contextual static image with a thermometer and the dynamic data with two vertical lines: blue for the minimum, red for the maximum.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE stile SYSTEM "astile1.dtd">
<style>
<image src="termometro.svg" imgx="20" imgy="20"/>
<addimage addsrc="tempmin.svg" addimgx="30" addimgy="500"/>
<addimage addsrc="tempmax.svg" addimgx="30" addimgy="500"/>
<upimage link="tempmin" imageref="tempmin.svg" center="0" scale="5"></upimage>
<upimage link="tempmax" imageref="tempmax.svg" center="0" scale="5"></upimage>
</style>
```
We used the element `<upimage>` to represent the direction of the dynamic images (their numeric values are positive numbers).
Below is the resulting image:
The second example concerns measuring human blood pressure.
The data file contains the systolic and diastolic pressure values:
```xml
<?xml version="1.0" standalone="no"?>
<?stile astile.xml ?>
<!DOCTYPE datafile SYSTEM "data.dtd">
<datafile>
<data>
<value id="pressmin" num="75" />
<value id="pressmax" num="120" />
</data>
</datafile>
```
We chose to represent pressure with an analog sphygmomanometer; the maximum and minimum values are represented by two hands that rotate according to their numeric values. When a user clicks on the image, a web page containing further data is displayed.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE stile SYSTEM "stile.dtd">
<stile>
<image src="sfigm01.svg" imgx="30" imgy="30" />
<addimage addsrc="lancettablu.svg" addimgx="75" addimgy="175" />
<addimage addsrc="lancettarossa.svg" addimgx="75" addimgy="175" />
<rotateimage center="0" scale="2" link="pressmin" imageref="lancettablu.svg" xcenter="75" ycenter="175" />
<rotateimage center="0" scale="2" link="pressmax" imageref="lancettarossa.svg" xcenter="75" ycenter="175" />
<infodata url="press.htm" imageref="pressmin" />
</stile>
```
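The rotation performed by `<rotateimage>` can also be expressed as a small helper that turns a numeric value into an SVG `rotate` transform. The sketch below is a hypothetical reconstruction in Python: the assumption that `scale` acts as a degrees-per-unit factor is ours, not taken from the stylesheet.

```python
def rotate_transform(value, scale, xcenter, ycenter):
    """Build an SVG rotate() transform for a hand image.

    Assumption (not from the stylesheet): the 'scale' attribute is a
    degrees-per-unit factor, so value=75 with scale=2 turns the hand
    by 150 degrees around the given centre point.
    """
    angle = value * scale
    return f"rotate({angle} {xcenter} {ycenter})"

# The two hands of the sphygmomanometer example:
min_hand = rotate_transform(75, 2, 75, 175)
max_hand = rotate_transform(120, 2, 75, 175)
```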
The last example concerns the number of university students actively attending a course.
An XML file records the number of students associated with each course:
```xml
<?xml version="1.0" standalone="no"?>
<?stile astile.xml ?>
<!DOCTYPE datafile SYSTEM "data.dtd">
<datafile>
<group>
<gdata gid="1corso" gnum="60" />
<gdata gid="2corso" gnum="100" />
<gdata gid="3corso" gnum="70" />
<gdata gid="4corso" gnum="120" />
</group>
</datafile>
```
We decide to represent the context with the image of a lecture hall and to represent the varying numbers of students with the same student image at different scales.
We also choose to highlight the highest attendance by surrounding the corresponding image with a green rectangle.
Below is the resulting SVG code:
```xml
<?xml version="1.0"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.0//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg" width="350" height="400">
<image xlink:href="aula.gif" width="300" height="200"
transform="scale(0.666667 0.68) translate(12 35.2941) translate("/>
<image xlink:href="stu.gif" width="50" height="90"
transform="translate(88.3333 110) translate(211.667 -3.33333) tr"/>
<image xlink:href="aula.gif" width="300" height="200"
transform="matrix(1 0 0 1 100 150) translate(-95 58.3333) scale(/>
```
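The scale-by-value and highlight-the-maximum logic of this example can be sketched outside XSLT as well. The following Python fragment is a simplified sketch; the scale formula (`value * base_scale`) and the rectangle geometry are assumptions made for illustration, not taken from the stylesheet.

```python
def build_icons(groups, base_scale=0.01):
    """Return one SVG fragment per (id, value) pair.

    Each student icon is scaled proportionally to its value; the icon
    with the highest value additionally receives a green highlight
    rectangle, mirroring the behaviour described in the text.
    """
    top = max(groups, key=lambda g: g[1])[0]
    fragments = []
    for gid, value in groups:
        s = value * base_scale  # assumed proportional scale factor
        icon = f'<image xlink:href="stu.gif" transform="scale({s})"/>'
        if gid == top:
            icon += '<rect stroke="green" fill="none" width="50" height="90"/>'
        fragments.append(icon)
    return fragments

courses = [("1corso", 60), ("2corso", 100), ("3corso", 70), ("4corso", 120)]
icons = build_icons(courses)  # only "4corso" gets the green rectangle
```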
Conclusions
In this paper we presented an approach for producing pictogram representations of numerical data obtained, for example, from scientific computer programs. How a pictogram has to be drawn is specified using an XML-based language, and the data to be represented are themselves specified in XML. An implementation using XSLT to produce an SVG representation of the pictogram has been proposed, and several examples have been presented to demonstrate the applicability of the technique. Future directions include the extension of both XML dialects to cover more sophisticated data types, such as tables and non-numerical information. We also plan to implement the approach using different technologies, such as Java, to allow the graphical construction of pictogram styles as well as their visualisation.
Language Support for Megamodel Renarration
Ralf Lämmel$^1$ and Vadim Zaytsev$^2$
1 Software Languages Team, Universität Koblenz-Landau, Germany
2 Software Analysis & Transformation Team, CWI, Amsterdam, The Netherlands
Abstract. Megamodels may be difficult to understand because they reside at a high level of abstraction and they are graph-like structures that do not immediately provide means of order and decomposition as needed for successive examination and comprehension. To improve megamodel comprehension, we introduce modeling features for the recreation, in fact, renarration of megamodels. Our approach relies on certain operators for extending, instantiating, and otherwise modifying megamodels. We illustrate the approach in the context of megamodeling for Object/XML mapping (also known as XML data binding).
Keywords: megamodeling, linguistic architecture, renarration, software language engineering, XML data binding
1 Introduction
Models (of all kinds) may be difficult to understand when they reside at a high level of abstraction and when they are not structured in a way to serve successive examination and comprehension. In this paper, we are specifically concerned with the modeling domain of the linguistic architecture of software systems [5] and a corresponding form of megamodels [3]. These are highly abstract models about software systems in terms of the involved languages, technologies, concepts, and artifacts. We aim to improve understanding of such models by means of renarration such that a megamodel is described (in fact, recreated) by a ‘story’ as opposed to a monolithic, highly abstract graph.
Contribution of this paper We enrich the megamodeling language MegaL [5] with language support for renarration such that megamodels can be developed in an incremental manner, subject to appropriate operators such as ‘addition’, ‘restriction’, or ‘instantiation’, also subject to an appropriate notion of megamodel deltas. In previous work [16], we have introduced the notion of renarration of megamodels in an informal manner as the process of converting a collection of facts into a story, also inspired by natural language engineering [15], computer-assisted reporting [13] and database journalism [8]. In this paper, we take the next step: we enrich megamodeling with proper language support for renarration.
3 The paper’s website: http://softlang.uni-koblenz.de/megal-renarration
Roadmap §2 provides background on megamodeling and motivates the need for renarration. §3 recalls the MegaL language. §4 describes the specific approach to renarration. §5 provides a catalogue of operators that are used to express steps of renarration. §6 validates the approach in the context of megamodeling for Object/XML mapping. §7 discusses related work. §8 concludes the paper.
2 On the need for megamodel renarration
“A megamodel is a model of which […] some elements represent and/or refer to models or metamodels” [3]—we use this definition by interpreting the notion of (meta)models in a broad sense to include programs, documents, schemas, grammars, etc. Megamodeling is often applied in the context of model-driven engineering while we apply it in the broader software engineering and software development context.
That is, we use megamodels to model the linguistic architecture of software systems [5]. By linguistic architecture of software systems or technologies, we mean their architecture expressed in terms of the involved software languages, software technologies, software concepts, software artifacts, and the explicit relationships between all these conceptual and actual entities. In our recent work, we have shown the utility of megamodels for understanding the linguistic architecture of diverse software (language) engineering scenarios [5,16].
Consider Figure 1 for an illustrative megamodel rendered in the visual syntax MegaL/yEd [5]. The nodes represent entities (languages, schemas, tools, etc.). The edges represent relationships (‘elementOf’, ‘conformsTo’, ‘correspondsTo’, etc.). The megamodel sketches basic aspects of Object/XML mapping according to the JAXB technology for XML data binding in the Java platform. Specifically, there is the aspect of deriving an object model (i.e., Java classes) from an XML schema (see the upper data flow in the figure) and the aspect of de-serializing an XML document to an object graph in the JVM (see the lower data flow in the figure).
One impediment to megamodel comprehension is the abstraction level of megamodels. In particular, the role and the origin of the entities as well the meaning of the relationships may not be evident. In recent work [5], we have
proposed a remedy for this problem. Our proposal involves linking megamodel entities and relationships to proper artifacts or extra resources for conceptual entities.
This paper focuses on another impediment to megamodel comprehension: megamodels are essentially just graph-like structures that do not immediately provide means of order and decomposition as needed for successive examination and comprehension. Consider the figure again. The following kinds of questions naturally arise. Where to start ‘reading’ in the figure? Are there any subgraphs that can be understood independently? Do any of the entities arise as instantiations of more general entities that may be worth mentioning to facilitate understanding?
The latter impediment to comprehension is not unique to megamodeling, of course. Various modeling or specification languages are prone to the same problem. Various remedies exist, e.g., based on modularization, abstraction, refinement, annotation, and slicing. In this paper, we put to work renarration which is indeed inspired by existing ideas on refinement, modularization, and slicing.
In general, renarration is the process of creating different stories while reusing the same facts (cf. narration\(^4\)). In literature, for example, renarration is a technique to create a story by the narrator based on fixed plot elements; the story itself can be adapted to the audience and other circumstances—we refer to [2] for more background information. In megamodeling, renarration is a process of creating stories for the recreation of a megamodel. Recreation may cater for the audience’s technical background and interest, time available and yet other factors. In our experience, the process of recreating a megamodel is needed to make megamodels meaningful to humans. Recreation may be interactive, e.g., by renarrating megamodels on the whiteboard, encouraging questions from the audience, and responding to these questions in the continuation of the story. This paper provides language support for the process of renarration.
3 Megamodelling with Megal
Figure 1 provided a first illustration of the Megal [5] language for megamodelling. In the rest of the paper, we use the textual Megal syntax, i.e., Megal/TXT. A megamodel is a collection of declarations of the following kinds.
**Entity declarations** A name is introduced for a conceptual entity, an actual entity, or a parameter thereof; an entity type (e.g., Language or File) is assigned. For instance:
```
Java : Language // "Java" as a language entity
JavaGrammar : Artifact // the "JavaGrammar" as an artifact entity
BNF : Language // "BNF" as a language entity
?aLanguage : Language // parameter "aLanguage" for a language entity
?aProgram : File // parameter "aProgram" for a file entity
```
We speak of a conceptual entity, if it exists in our mind, as in the case of a language. We speak of an actual entity (or simply an artifact), if it is manifest in some way: it exists on the file system (e.g., a language description) or as data structure at runtime (e.g., a parse tree).
**Relationship declarations** Two declared entities (or parameters thereof) are related by a binary relationship (e.g., ‘elementOf’ or ‘conformsTo’). For instance:
```plaintext
aProgram elementOf Java // a program of the Java language
JavaGrammar elementOf BNF // the Java grammar is a BNF-style grammar
JavaGrammar defines Java // the Java grammar defines the Java language
aProgram conformsTo JavaGrammar // a program conforming to the Java grammar
```
**Entity-type declarations** There is a number of predefined, fundamental entity types, as exercised in the earlier examples, but new entity types can be defined by specialization. For instance:
```plaintext
OopLanguage < Language // an entity type for OO programming languages
FpLanguage < Language // an entity type for functional programming languages
```
**Relationship-type declarations** Likewise, there is a number of predefined, fundamental relationship types, as exercised in the illustrations above, but new relationship types can be defined on predefined as well as explicitly declared entity types. We do not further discuss such expressiveness in this paper.
The declarations simply describe a graph as illustrated in Figure 1. The order of all declarations of a megamodel is semantically irrelevant. The lack of any intrinsic notion of order (as in an imperative setting) or decomposition (as in substitution or function composition in functional programming) feeds into the comprehension challenge to be addressed by renarration. We mention in passing that megamodels have an interesting evaluation semantics. That is, declared relationships may be checked by applying some programmatic relationship-specific check on resources linked to declared entities.
### 4 Megamodel renarration
We add language support for renarration to the megamodelling language **MegaL**. We commit to a specific view on renarration such that megamodel **deltas** are used in the recreation of a megamodel through a sequence of steps with each step being effectively characterized by ingredients as follows:
- An informative **label** of the step, also serving as an ‘id’ for reference.
- The actual **delta** in terms of added and removed declarations (such as entity and relationship declarations). Added declarations are prefixed by ‘+’; removed declarations are prefixed by ‘−’. Deltas must preserve well-formedness of megamodels. In particular:
- Entities are declared uniquely.
- All entities referenced by relationship declarations are declared.
- Relationships are applied to entities of suitable types.
- An operator to describe the intent of the step. Each operator implies specific constraints on the delta, as discussed below.

The steps are interleaved with informal explanations.

Consider the following megamodel (in fact, megamodeling pattern) of a file and a language being related such that the former (in terms of its content) is an element of the latter.
```
[Label="File with language", Operator="Addition"]
+ ?aLanguage : Language // some language
+ ?aFile : File // some file
+ aFile elementOf aLanguage // associate language with file
```
In a next step, let us instantiate the language parameter to actually commit to the specific language `Java`. Thus:
```
[Label="A Java file", Operator="Instantiation"]
+ Java : Language // pick a specific language
+ aFile elementOf Java // associate the file with Java
- ?aLanguage : Language // removal of language parameter
- aFile elementOf aLanguage // removal of reference to language parameter
```
Fig. 2. An illustrative renarration
See Figure 2 for a trivial, illustrative renarration. The first step introduces some entities and relates them. Nothing is removed; thus, the use of the operator ‘Addition’. The second step instantiates the megamodel to a more concrete situation. The more general declarations are removed according to the delta and more specific declarations are added; thus, the use of the operator ‘Instantiation’.
Arguably, the instantiation could be characterized more concisely than by listing the delta, but we like to emphasize the utility of deltas for at least explaining the intended semantics of the renarration operators.
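The delta mechanics and the well-formedness rules above can be sketched as ordinary set manipulation. The Python fragment below is a simplified model (entity types as strings, relationships as triples, no typing of relationship operands), not the MegaL implementation.

```python
def apply_delta(entities, relations, delta):
    """Apply a renarration delta to a megamodel and re-check
    well-formedness.

    entities:  dict mapping entity name -> type, e.g. {"Java": "Language"}
    relations: set of (source, relationship, target) triples
    delta:     dict with optional keys add_ent, rem_ent, add_rel, rem_rel
    """
    ents = dict(entities)
    rels = set(relations)
    for name in delta.get("rem_ent", []):
        ents.pop(name)                       # must exist to be removed
    for name, typ in delta.get("add_ent", []):
        if name in ents:                     # entities are declared uniquely
            raise ValueError(f"entity {name} declared twice")
        ents[name] = typ
    rels -= set(delta.get("rem_rel", []))
    rels |= set(delta.get("add_rel", []))
    for src, _, tgt in rels:                 # referenced entities are declared
        if src not in ents or tgt not in ents:
            raise ValueError("relationship references undeclared entity")
    return ents, rels

# The two steps of Figure 2:
m = apply_delta({}, set(), {
    "add_ent": [("?aLanguage", "Language"), ("?aFile", "File")],
    "add_rel": [("?aFile", "elementOf", "?aLanguage")]})
m = apply_delta(*m, {
    "add_ent": [("Java", "Language")],
    "add_rel": [("?aFile", "elementOf", "Java")],
    "rem_ent": ["?aLanguage"],
    "rem_rel": [("?aFile", "elementOf", "?aLanguage")]})
```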
5 Renarration operators
The illustrative renarration of Figure 2 has started to reveal some operators: addition and instantiation. In this section, we provide a catalogue of operators. In the next section, the operators will be illustrated by a larger renarration.
- **Addition**: declarations are exclusively added; there are no removals. Use this operator to enhance a megamodel through added entities and to constrain a megamodel through added relationships.
- **Removal**: the opposite of **Addition**.
- **Restriction**: net total of addition and removal is such that entities may be restricted to be of more specific types. Also, the set operand of ‘elementOf’ and the super-set operand of ‘subsetOf’ relationships may be restricted.
- **Generalization**: the opposite of **Restriction**.
- **ZoomIn**: net total of addition and removal is such that relationships are decomposed to reveal more detail. Consider, for example, the relationship type `mapsTo`, which is used to express that one entity is (was) transformed into another entity. When zooming in, a relationship `x mapsTo y` could be expanded so as to reveal the function that contributes the pair `(x, y)`.
- **ZoomOut**: the opposite of ZoomIn.
- **Instantiation**: parameters are consistently replaced by actual entities. We may describe such instantiation directly by a mapping from parameters to entities as opposed to a verbose delta. (A delta is clearly obtainable from such a mapping.)
- **Parameterization**: the opposite of Instantiation.
- **Connection**: convert an entity parameter into a dependent entity, which is one that is effectively determined by relationships as opposed to being yet available for actual instantiation. Such a dependency often occurs as the result of adding other parameters, e.g., a parameter for the definition of a language. We prefix dependent entity declarations by `!` whereas `?` is used for parameters, as explained earlier.
- **Disconnection**: the opposite of Connection.
- **Backtracking**: return to an earlier megamodel, as specified by a label. This may be useful in a story, when a certain complication should only be temporarily considered and subsequent steps should relate again to a simpler intermediate state.
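The constraints that each operator imposes on its delta can be checked mechanically. Below is a minimal Python sketch covering only three of the operators above, with deltas reduced to lists of declaration strings (the '+' and '-' lines of a step).

```python
def check_operator(operator, added, removed):
    """Validate a renarration step's delta against its declared operator.

    Only three operators are sketched here; the constraints follow the
    catalogue in the text.
    """
    if operator == "Addition":
        return not removed                 # additions only, no removals
    if operator == "Removal":
        return not added                   # removals only, no additions
    if operator == "Instantiation":
        # every removed declaration must mention a '?' parameter
        return all("?" in decl for decl in removed)
    return True                            # other operators: unchecked here

valid = check_operator("Instantiation",
                       ["aFile elementOf Java"],
                       ["?aLanguage : Language"])
```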
6 An illustrative renarration
We are going to renarrate a megamodel for Object/XML mapping. We begin with the introduction of the XML schema which is the starting point for generating a corresponding object model:
```
[Label="XML schema", Operator="Addition"]
+ XSD : Language // the language of XML schemas
+ ?anXmlSchema : File // an XML schema
+ anXmlSchema elementOf XSD // an XML schema, indeed
```
On the OO side of things, we assume a Java-based object model:
```
[Label="Object model", Operator="Addition"]
+ Java : Language // the Java language
+ ?anObjectModel : File+ // an object model organized in one or more files
+ anObjectModel elementOf Java // a Java-based object model
```
The entities `anXmlSchema` and `anObjectModel` are parameters (see the `?' prefix) in that they would only be fixed once we consider a specific software system. We assume that schema and object model are related to each other in the sense that the former is mapped to (‘transformed into’) the latter; these two data models also correspond to each other [5].
```
[Label="Schema first", Operator="Addition"]
+ anXmlSchema mapsTo anObjectModel // the schema maps to the object model
+ anXmlSchema correspondsTo anObjectModel // the artifacts are "equivalent"
```
The ‘mapsTo’ relationship is helpful for initial understanding, but more details are needed eventually. Let us reveal the fact that a ‘type-level mapping’ would be needed to derive classes from the schema; we view this as ‘zooming in’: one relationship is replaced in favor of more detailed declarations:
```
[Label="Type−level mapping", Operator="ZoomIn"]
+ ?aTypeMapping : XSD -> Java // a mapping from schemas to object models
+ aTypeMapping(anXmlSchema) |-> anObjectModel // apply function
- anXmlSchema mapsTo anObjectModel // remove too vague mapping relationship
```
It is neither very precise nor suggestive to say that the type-level mapping results in arbitrary Java code. Instead, we should express that a specific Java subset for simple object models (in fact, POJOs for data representation without behavioral concerns) is targeted. Thus, we restrict the derived object model to be an element of a suitable subset of Java, to which we refer here as OxJava:
```
[Label="O/X subset", Operator="Restriction"]
+ OxJava : Language // the O/X−specific subset of Java
+ OxJava subsetOf Java // establishing subset relationship, indeed
+ anObjectModel elementOf OxJava // add less liberal constraint on object model
- anObjectModel elementOf Java // remove too liberal constraint on object model
```
We have covered the basics of the type level of Object/XML mapping. Let us look at the instance level which involves XML documents and object graphs (trees) related through (de-)serialization. Let us assume an XML input document for de-serialization which conforms to the XML schema previously introduced:
```
[Label="XML document", Operator="Addition"]
+ XML : Language // the XML language
+ ?anXmlDocument : File // an XML document
+ anXmlDocument elementOf XML // an XML document, indeed
+ anXmlDocument conformsTo anXmlSchema // document conforms to schema
```
The result of de-serialization is an object graph that is part of the runtime state. We assume a language for Java's JVM-based object graphs. The object graph conforms to the object model previously introduced:
```
[Label="Object graph", Operator="Addition"]
+ JvmGraph : Language // the language of JVM graphs
+ ?anObjectGraph : State // an object graph
+ anObjectGraph elementOf JvmGraph // a JVM−based object graph
+ anObjectGraph conformsTo anObjectModel // graph conforms to object model
```
De-serialization maps the XML document to the object graph:
```
[Label="Instance−level mapping", Operator="Addition"]
+ ?aDeserializer : XML -> JvmGraph // deserialize XML to JVM graphs
+ aDeserializer(anXmlDocument) |-> anObjectGraph // map via deserializer
```
At this point, the mappings at both the type and instance levels (i.e., aTypeMapping and aDeserializer) are conceptual entities (in fact, functions) without a trace of their emergence. We should manifest them in relation to the underlying mapping technology. We begin with the type level.
```
[Label="Code generator", Operator="Addition"]
+ ?anOxTechnology : Technology // a technology such as JAXB
+ ?anOxGenerator : Technology // the generation part
+ anOxGenerator partOf anOxTechnology // a part, indeed
```
By relating generator and type mapping, we stop viewing the (conceptual entity for the) mapping as a proper parameter; rather it becomes a dependent entity.
```
[Label="Dependent type-level mapping", Operator="Connection"]
+ anOxGenerator defines aTypeMapping // mapping defined by generator
+ !aTypeMapping : XSD -> Java // this is a dependent entity now
- ?aTypeMapping : XSD -> Java // the parameter declaration is removed
```
Likewise, de-serialization is the conceptual counterpart for code that actually constructs and runs a de-serializer with the help of a designated library, which is another part of the mapping technology:
```
[Label="O/X library", Operator="Addition"]
+ ?anOxLibrary : Technology // the O/X library
+ anOxLibrary partOf anOxTechnology // an O/X part
+ ?aFragment : Fragment // source code issuing de-serialization
+ aFragment elementOf Java // source code is Java code
+ aFragment refersTo anOxLibrary // use of O/X library
```
Again, we eliminate the parameter for the de-serializer:
```
[Label="Dependent instance-level mapping", Operator="Connection"]
+ aFragment defines aDeserializer // fragment "constructs" de-serializer
+ !aDeserializer : XML -> JvmGraph // this is a dependent entity now
- ?aDeserializer : XML -> JvmGraph // the parameter declaration is removed
```
Let us instantiate the mapping technology and its components to commit to the de-facto platform standard: JAXB [9]. We aim at the following replacements of parameters by concrete technology names:
```
[Label="JAXB", Operator="Instantiation"]
anOxTechnology => JAXB // instantiate parameter ... as ...
anOxGenerator => JAXB.xjc // ditto
anOxLibrary => JAXB.javax.xml.bind // ditto
```
Thus, we use qualified names for the component technologies of JAXB, thereby reducing the stress on the global namespace. We omit the lower-level meaning of the instantiation in terms of a delta.
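The delta omitted here can be derived mechanically from the instantiation mapping: every declaration that mentions a parameter is removed and re-added with the parameter replaced by its concrete entity. A sketch in Python (declarations as plain strings; naive substring substitution, so name clashes are not handled):

```python
def instantiation_delta(declarations, mapping):
    """Derive a megamodel delta from an instantiation mapping.

    declarations: current declarations as strings
    mapping:      parameter name (without '?') -> concrete entity name
    Returns (added, removed) lists of declarations.
    """
    added, removed = [], []
    for decl in declarations:
        if any(param in decl for param in mapping):
            removed.append(decl)
            new = decl
            for param, entity in mapping.items():
                # drop the '?' prefix on the declaration of the parameter
                new = new.replace("?" + param, entity).replace(param, entity)
            added.append(new)
    return added, removed

decls = ["?anOxTechnology : Technology",
         "anOxGenerator partOf anOxTechnology"]
added, removed = instantiation_delta(decls, {"anOxTechnology": "JAXB"})
```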
Let us now generalize rather than instantiate. To this end, we first backtrack to an earlier state—the one before we instantiated for JAXB:
```
[Label="Dependent instance-level mapping", Operator="Backtracking"]
```
Now we can generalize further by making the language a parameter of the model. (Again, we show the concise mapping of actual entities to parameters as opposed to the delta for all the affected declarations.)
```
[Label="Beyond Java", Operator="Parameterization"]
Java => anOopLanguage // replace ... by parameter ...
OxJava => anOxLanguage // ditto
```
Arguably, we should use more specific entity types to better characterize some of the parameters of the model. For instance, the intention of the language
7 Related work

A more advanced approach to the renarration of megamodels may receive inspiration from, for example, model management in MDE with its management operators (e.g., for composition [1]) and grammar convergence [10] with its rich underlying operator suite of (in this case) grammar modifications.
The field of natural language engineering contains many problems such as deriving a syuzhet from a fabula, a plot from genre elements, or a story from a plot. Recent solutions to these problems are advanced, formal and automated [15], and can be reused for software language engineering to facilitate semi-automatic or genetic inference of megamodel renarrations based on given constraints.
8 Concluding remarks
We have introduced language support for renarrating megamodels. With a relatively simple language design, we have made it possible to recreate (renarrate) megamodels in an incremental manner, while expressing intents by means of designated operators along the way.
In future work, we plan to provide a precise semantics of the operators. Further, by applying renarration to a number of different megamodeling scenarios, we also hope to converge on the set of operators needed in practice. Deltas,
as such, are fully expressive to represent any sort of recreation, but the suite of operators needs to be carefully maintained to support enough intentions for convenient use and useful checks on the steps. Yet another interesting area of future work is the animation of renarrations for a visual megamodeling language; we use the visual approach already informally on the whiteboard. Finally, the improvement of megamodel comprehension through renarration should be empirically validated.
References
Money for Nothing and Privacy for Free?
Swapneel Sheth, Tal Malkin, Gail Kaiser
Department of Computer Science, Columbia University, New York, NY 10027
{swapneel, tal, kaiser}@cs.columbia.edu
Abstract—Privacy in the context of ubiquitous social computing systems has become a major concern for society at large. As the number of online social computing systems that collect user data grows, this privacy threat is further exacerbated. There has been some work (both recent and older) on addressing these privacy concerns. These approaches typically require extra computational resources, which might be acceptable where privacy alone is concerned, but which is a poor option when Green Computing and sustainability are taken into account: spending more computation time means spending more energy and more resources, making the software system less sustainable. Ideally, what we would like are techniques for designing software systems that address these privacy concerns but are also sustainable - systems where privacy is achieved “for free,” i.e., without having to spend extra computational effort. In this paper, we describe how privacy can be achieved for free - as an accidental and beneficial side effect of doing already existing computation - and what types of privacy threats it can mitigate. More precisely, we describe a “Privacy for Free” design pattern and show its feasibility, sustainability, and utility in building complex social computing systems.
Keywords—Design Pattern; Correlation Privacy; Web 2.0; Concept Drift; Differential Privacy;
I. INTRODUCTION
Today’s college students do not remember when social recommendations, such as those provided by Amazon, Netflix, Last.fm, and StumbleUpon, were not commonplace. Privacy in the context of these social computing systems has become a major concern for society at large. A search for the pair of terms “facebook” and “privacy” gives nearly two billion hits on popular search engines. Recent feature enhancements and policy changes in social networking and recommender applications – as well as their increasingly common use – have exacerbated this issue [1]–[4]. With many online systems that range from providing purchasing recommendations to suggesting plausible friends, as well as media attention (e.g., the AOL anonymity-breaking incident reported by the New York Times [5]), both users of these systems and even non-users (e.g., friends, family, co-workers, etc. mentioned or photographed by users) are growing more and more concerned about their personal privacy [6].
Social computing systems, when treated in combination, have created a threat that we call “Correlation Privacy.” Narayanan and Shmatikov [7] demonstrated a relatively straightforward method to breach privacy and identify individuals by correlating anonymized Netflix movie rating data with public IMDb data. A similar de-anonymization approach could potentially be applied to any combination of such data-gathering systems, so how to safeguard against these “attacks” is an important concern for the designers of social computing systems. This is analogous to earlier work addressing queries on census data but, at that time, there were relatively few prospective attackers [8], [9].
There has been some recent work on data anonymization for privacy in software testing [10]–[12]. However, data anonymization alone may not be sufficient, as Narayanan and Shmatikov show. (For more details on the related work, including de-anonymization approaches, please see Section VI.) We need other techniques (which may be used orthogonally to data anonymization) to deal with privacy concerns, as well as general approaches, design patterns, software architectures, etc. that work across a wide variety of systems.
In this paper, we propose a design pattern, which we call “Privacy for Free,” targeted towards online social systems. In particular, we focus on systems that already have access to user data such as purchase history, movie ratings, music preferences, and friends and groups, and that use complex data mining techniques to provide additional social benefits such as recommendations, top-n statistics, and so on to their users. In our software engineering community, these are systems like Mylyn [13], Codebook [14], or others [15], [16] that have access to user (developer or end-user) interactions with software artifacts such as code, bug reports, and test cases.
The main research question we try to answer here is - Is there a general purpose architecture or design pattern that can be used with a wide range of large complex software systems, that will achieve privacy without spending any extra resources on computational overhead? We believe it is - we have discovered a technique for achieving privacy as an accidental and beneficial side effect of doing already existing computation.
The already existing computation in our case was weighing user data in a certain way - weighing recent user data exponentially more than older data to address the problem of “concept drift” [17] - to increase the relevance of the recommendations. This weighing is very common and used in many systems [18]–[21]. Recent work in the databases/CS-theory communities on Differential Privacy [22], [23] made us realize that our already existing computation for weighing user data is very similar to one of the techniques for achieving differential privacy. (Intuitively, differential privacy ensures that a user’s participation (versus not participating) in a database does not affect his privacy significantly. We provide more detailed information on Differential Privacy in Section III.) This resulted in the formulation of our hypothesis: if we change the existing computation so it matches the technique for achieving differential privacy (a very minor and straightforward code change, as the two techniques are very similar), would we get privacy as a beneficial side effect of addressing a completely different problem?
We show that it is indeed possible to get privacy as a beneficial side effect of doing some existing computation - thus, privacy for free - and this is the main contribution of our paper. We have formulated this technique as a design pattern that can be used in a wide variety of software systems to achieve “privacy for free,” and show the feasibility, sustainability, and utility of using this approach to build software systems. We also contribute to the discussion in the privacy community about how to define privacy and how to achieve it. Specifically, we suggest a new direction for designing (differentially, or otherwise) private algorithms and systems motivated by what is already being done anyway.
There is an added benefit of having privacy for free as a side effect: even though privacy is important for users, many corporations may not be motivated to work hard on privacy. This may be due to business reasons, where having as much user information as possible is useful for targeted advertising, etc. Even if privacy could be achieved cheaply (in terms of computational or other resources), they still may not opt for it. In such cases, having privacy as a side effect of doing other computation is a very strong advantage from the users’ point of view.
The rest of the paper is organized as follows: Section II describes the motivation of our problem and why making privacy sustainable is important. Section III provides background information on Differential Privacy and Concept Drift. Section IV describes our “Privacy for Free” design pattern. Section V presents our empirical evaluations to show the feasibility, sustainability, and utility of our design pattern. Finally, we conclude the paper in Sections VI and VII with a discussion of the related work and our conclusions.
II. Motivation
Green Computing (or Green IT) is “the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems [...] efficiently and effectively with minimal or no impact on the environment” [24]. With our oil reserves projected to be exhausted in less than fifty years [25], and renewable energy sources still providing only a small fraction of our energy [26], Green Computing is becoming more and more important here and now and, indeed, vital to our children and grandchildren.
An important research direction will be investigating how to build greener and more sustainable software systems from a software engineering perspective, in addition to the complementary algorithmic efficiency and systems perspective such as resource allocation, platform virtualization, and power management pursued by other computer science subdisciplines [27]. Ideally, from a sustainable software system point of view, we want to build systems that solve real-world problems by spending very little (or no) extra computational effort.
There has been some recent work in the software engineering community on data privacy [10]–[12]. This work has focused on anonymization techniques to hide sensitive data. While it has been very promising, its goal has not been sustainability. Clause and Orso [10] show in their empirical results that their technique takes between 2.5 and 9 minutes. The time taken would probably increase for larger, more complex systems. Similarly, the technique proposed in [11], [12] also requires substantial computation time. If there are millions of users of these systems, we are spending a lot of extra computational resources that are not needed as far as the original system is concerned.
This is our main motivation for this paper. We feel that our “Privacy for Free” design pattern can result in software systems that are more sustainable and that already have privacy guarantees built in.
III. Background
Here we provide some background information on Differential Privacy and Concept Drift.
A. Differential Privacy
In the 1970s, when research into statistical databases was popular, Dalenius [28] proposed a desideratum for statistical database privacy: access to a statistical database should not enable someone to learn something about an individual that cannot be learned without access to the database. While such a desideratum would be great for privacy, Dwork et al. [22], [33] gave a strong mathematical proof that this notion of absolute privacy is impossible. The problem with the desideratum is the presence of “Auxiliary Information”. Auxiliary Information is similar to, and a generalization of, the notion of Correlation Privacy mentioned earlier.
Dwork gives a nice example to explain how Auxiliary Information can be a problem when privacy is concerned - “Suppose one’s exact height were considered a highly sensitive piece of information, and that revealing the exact height of an individual were a privacy breach. Assume that the database yields the average heights of women of different nationalities. An adversary who has access to the statistical database and the auxiliary information “Terry Gross is two inches shorter than the average Lithuanian woman” learns Terry Gross’ height, while anyone learning only the auxiliary information, without access to the average heights, learns relatively little.” An interesting observation made by Dwork is that the above example for breach of privacy holds regardless of whether Terry Gross’ information is part of the database or not.
To combat Auxiliary Information, Dwork proposes a new notion of privacy called Differential Privacy. Dwork’s paper is a culmination of the work started earlier and described in papers such as [29]–[31]. Intuitively, Differential Privacy guarantees privacy by saying that if an individual participates in the database, there is no additional loss of privacy (beyond a small factor) versus if he had not participated in the database. Formally, Differential Privacy is defined as follows: A randomized function \( K \) gives \( \epsilon \)-differential privacy if for all data sets \( D_1 \) and \( D_2 \) differing on at most one element, and all \( S \subseteq \text{Range}(K) \),
\[
\Pr[K(D_1) \in S] \leq \exp(\epsilon) \times \Pr[K(D_2) \in S]
\]
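The EM algorithm used in the rest of the paper is the exponential mechanism of McSherry and Talwar [23], which satisfies this definition by sampling each candidate output with probability proportional to the exponential of its score. A minimal Python sketch (the function name, the 2Δ normalization by the score sensitivity, and the sampling loop are our assumptions, not code from the paper):

```python
import math
import random

def exponential_mechanism(candidates, score, epsilon, sensitivity=1.0):
    """Sample a candidate with probability proportional to
    exp(epsilon * score(c) / (2 * sensitivity)).

    Higher-scoring candidates are exponentially more likely, yet
    changing a single record shifts each probability by at most a
    factor of exp(epsilon), matching the definition above."""
    weights = [math.exp(epsilon * score(c) / (2.0 * sensitivity))
               for c in candidates]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point rounding
```

With the timestamp as the scoring function, this reduces to the instantiation the paper uses.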
C. Concept Drift
People’s preferences change over time - things that I like doing today may not be things I liked doing 10 years ago. If data is being mined or recommendations generated, the age of the data needs to be accounted for. To address this problem, the notion of Concept Drift was formulated [17]. This problem needs to be addressed by any field that deals with data spanning some time frame (from a few hours to months and years). An example class of systems that need to address the problem of Concept Drift is Recommender Systems. Many recommender systems use Collaborative Filtering (CF), i.e., recommending things to an individual by looking at what other users similar to the individual like [21], [36], [37]. CF algorithms typically look at the activities of individuals from the past (movies watched, things bought, etc.) and use this to derive recommendations. However, people’s preferences change over time. For example, while I am in college and taking a lot of classes, I might buy a lot of textbooks from Amazon. When I graduate, I may not need textbook recommendations. This is exactly the kind of problem that Concept Drift tries to address.
Other example classes of systems that need to address this problem are systems that mine software repositories [38], social software engineering systems [14], systems for collaboration and awareness [39], etc. For these kinds of systems, there is a lot of old and recent data available and weighing certain data differently might be essential.
D. Addressing Concept Drift
There have been different solutions proposed to address the problem of Concept Drift [17], [40], [41]. A particular solution of note is the Exponential Time Decay Algorithm [42]. The Exponential Time Decay Algorithm weighs things done recently exponentially higher than things done in the past. It gradually decays the weight of things done in the past so that things done in the distant past do not affect the outcome as much as things done recently, thus addressing the problem of Concept Drift. The Exponential Time Decay Algorithm is very popular and used by many systems [18]–[21]. For the rest of the paper, we refer to this as the CD (Concept Drift) algorithm.
Consider the Terry Gross example again and let’s assume that the database has historical data going back 100 years. As average heights change over time, the CD algorithm will weigh newer data exponentially higher than older data, resulting in a weighted average height. This would reflect recent trends but also account for older data. The CD algorithm is another cornerstone of our design pattern and we build on it more in the next section.
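The exponential time decay weighting can be sketched as follows (the decay constant and function name are our illustrative choices, not from the paper):

```python
import math

def decayed_average(values, ages, decay_rate=0.1):
    """Exponential time decay (CD): weight each value by
    exp(-decay_rate * age), so recent data (small age) counts
    exponentially more than old data."""
    weights = [math.exp(-decay_rate * a) for a in ages]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

In the Terry Gross example, a recent height measurement (age 0) would dominate one recorded 50 years ago, yielding a weighted average close to the recent value.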
IV. PRIVACY FOR FREE: A DESIGN PATTERN
The CD and EM algorithms are very similar. The CD algorithm uses exponential weighing over the data while the EM algorithm chooses inputs with probability proportional to the exponential of the scoring function. If we choose the scoring function to be the timestamp of the data, the two algorithms become even more similar. The CD algorithm is deterministic and weighs new data exponentially higher than older data; the EM algorithm is probabilistic and chooses new data with an exponentially higher probability than older data.
This is the crux of our paper - if existing systems that already use the CD algorithm modify the code to use the EM algorithm instead, they would, as an added benefit, get the main advantage of the EM algorithm - differential privacy. Further, this privacy would not require any extra computational overhead and thus, we would get privacy for free. Systems that do not already use either the CD or the EM algorithm could still add the EM algorithm and privacy could still be viewed as being free - an added benefit of solving some other problem, which in this case is Concept Drift. Since these two algorithms are very similar, it would require a very small and straightforward change to the code to change from the CD algorithm to the EM algorithm.
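The similarity, and the small change required to switch, can be sketched as follows (helper names and parameter choices are hypothetical):

```python
import math
import random

def cd_score(values, timestamps, rate=0.5):
    """CD: deterministic, exponentially weighted average that
    favors values with recent (large) timestamps."""
    w = [math.exp(rate * t) for t in timestamps]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

def em_score(values, timestamps, epsilon=1.0):
    """EM: the same exponential preference for recent timestamps,
    but applied as sampling probabilities instead of averaging
    weights -- essentially the only change needed, and the one
    that yields differential privacy (scoring function = timestamp)."""
    w = [math.exp(epsilon * t / 2.0) for t in timestamps]
    return random.choices(values, weights=w, k=1)[0]
```

Swapping `cd_score` for `em_score` at every data-access point is the kind of small, localized code change the pattern requires.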
The important requirement for the differential privacy guarantees to hold is that all data access must be done via the EM algorithm, which could be implemented as a separate class or be part of a library or the data model, etc. We describe our design pattern using a modified version of the template suggested by Gamma et al. [43]. This is shown below:
Pattern Name and Classification
“Privacy for Free”, Behavioral class pattern
Intent
To provide differential privacy in social computing systems without any extra computational overhead.
Motivation
See Section II.
Applicability
Software systems that already have access to user data such as purchase history, movie preferences, and interactions with software artifacts like code and bug reports.
Participants
The rest of the design of an existing system can remain unchanged. A new system can be implemented as per the necessary requirements. The only mandatory class is the EM algorithm and this must be used to access the data. There will be no other change in participants.
Collaborations
There are no requirements on participant collaborations. The only restriction is that all access to the data should be via the EM algorithm, which could be implemented as a class (or part of a library or data model).
Consequences
The design pattern will provide privacy for free without any extra computational overhead. The tradeoff is a small loss in accuracy of recommendations/data mining. See Section V-C.
Implementation
Any programming language can be used for the implementation. The only requirement would be the ability to generate pseudo-random numbers as the algorithm is probabilistic.
Related Patterns
None as we focus on systems that already have access to user data. For other kinds of systems (such as network systems), there are existing privacy patterns [44], [45].
V. EVALUATION
Our design pattern requires implementing (or substituting an existing implementation of the CD algorithm with) the EM algorithm. To evaluate our design pattern, we implemented the EM and CD algorithms and investigated the differences in these. Our goal was to answer the following research questions:
RQ1: Feasibility—Does using our design pattern guarantee differential privacy?
RQ2: Utility—Does using our design pattern affect the utility of the system to give meaningful recommendations or mine data?
RQ3: Sustainability—Can our design pattern be sustainable? Can using our design pattern result in no additional computational resources for privacy?
With RQ1, we aim to prove the primary benefit of our design pattern - guaranteeing privacy. Our goal is to show that it does indeed guarantee differential privacy making it suitable to be used in a variety of large social systems.
With RQ2, we explore the utility of using our design pattern. A “straw man” way to guarantee privacy for any recommender/data mining system is to give a random answer every time. This would not require any clever technical solutions, but this would be very bad for the overall utility of the system - the goal of most such systems is to provide relevant information. There exists a tradeoff between accuracy and privacy and we explore this here. We aim to show that, using our technique, there is a small loss in accuracy and that this loss in accuracy scales very well (roughly constant) as the size of the system increases. Thus, if a small loss in accuracy is acceptable, we can get privacy for free without spending any additional computational resources.
With RQ3, we aim to show the sustainability benefits of using our design pattern. We show that using our design pattern (and the EM algorithm) requires less CPU time than the equivalent CD algorithm. Not only do we not need any additional computational resources, we should be able to reduce computational needs by using our design pattern.
A. RQ1 - Feasibility
Our design pattern requires the use of the EM algorithm for all access to the data. The EM algorithm that we require is exactly the same as the one proposed by McSherry and Talwar [23]. The algorithm they propose can work with different scoring functions that weigh the data differently - in our case, the scoring function we use is the timestamp of the data. Our use of the EM algorithm in our design pattern can thus be viewed as an instantiation of the general EM algorithm. McSherry and Talwar show a theoretical proof for the EM algorithm to be differentially private. We do not repeat the proof here and we encourage the interested reader to look at the paper (page 5 of [23]). As all data access happens via the EM algorithm, our design pattern also guarantees differential privacy.
B. Methodology
For RQ2 and RQ3, we carried out experiments to validate our hypotheses. We use synthetic data as there is no benefit to using real-world data for our hypotheses. We create an array of size $n$ and randomly fill it with values from 0 to $n - 1$. Each element has a timestamp associated with it to simulate user activity - for the purpose of this experiment, we assume that the timestamp is the array index. A lower array index indicates that the item is newer. Thus, we want to prefer items with a lower index in the output, as these items indicate things that were done recently.
Using the differentially private EM algorithm [23], we choose a scoring function that is maximized by values with as low an array index as possible. Thus, we choose elements from the array with probability based on their array index.
In the experiments, we randomly generate the array and compute the score using the CD and the EM algorithms. We then plot the RMS and normalized RMS errors between these two algorithms. We discuss the results in the following subsections.
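Under our reading of this setup, one trial and the resulting RMS comparison might be sketched as follows (the array contents, decay rate, and ε are our assumptions; the paper does not publish its experiment code):

```python
import math
import random

def run_trial(n, epsilon=1.0, rate=0.5, seed=None):
    """One trial of the described setup: an array of n random values,
    timestamp = array index (lower index = newer, so we score by
    the negated index). Returns (cd_output, em_output)."""
    rng = random.Random(seed)
    data = [rng.randrange(n) for _ in range(n)]
    scores = [-i for i in range(n)]  # prefer low indices (recent items)
    w_cd = [math.exp(rate * s) for s in scores]
    cd = sum(wi * v for wi, v in zip(w_cd, data)) / sum(w_cd)
    w_em = [math.exp(epsilon * s / 2.0) for s in scores]
    em = rng.choices(data, weights=w_em, k=1)[0]
    return cd, em

def rms_error(n, trials=1000, seed=0):
    """RMS difference between the deterministic CD output and the
    probabilistic EM output over repeated trials."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        cd, em = run_trial(n, seed=rng.random())
        sq += (cd - em) ** 2
    return math.sqrt(sq / trials)
```

Dividing `rms_error(n)` by `n` gives the normalized error reported in the figures.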
C. RQ2 - Utility
For the first set of experiments, we varied the size of the array and plotted the RMS and normalized RMS errors between the CD and EM algorithms. The results are shown in Figure 1. To smooth out the noise in the experimental results (as CD is a deterministic algorithm while EM is a probabilistic one), we ran the experiment 1000 times with each array size and took averages. The graph shows us that as the size of the input array increases, the RMS error increases linearly - this is expected as with larger array sizes, the entries in the array have correspondingly larger values (due to our methodology), resulting in linearly increasing RMS error. Meanwhile, the normalized RMS error is roughly constant.
This shows us the tradeoff between accuracy and privacy. We observe that in these experiments, the loss of accuracy is relatively small - the normalized RMS error is less than 0.4. Thus, irrespective of the data set size, switching to the EM Algorithm (as required by our design pattern) from the CD Algorithm will not worsen the accuracy of the algorithm by more than the constant factor, and we have the added benefit that the EM algorithm also guarantees differential privacy. Whether the loss of accuracy is acceptable or not (or a worthy price to pay for the free privacy) is subjective and we deliberately do not enter a philosophical debate here.
For our second set of experiments, we varied the number of trials, keeping the size of the array fixed at 1000. The graph plotting the RMS error versus the number of trials is shown in Figure 2. This graph shows us that as the number of trials increases, the RMS error decreases. Thus, even though there may initially be a larger error between the CD and EM algorithms, in the long run the error will be small.
With this set of experiments, we explored the utility of our design pattern. For an existing system (which may already use an algorithm similar to the CD one), a one-time change would be required to add in the EM algorithm and retrofit the system to our design pattern. This change is relatively straightforward and could even be automated. Making such a change, although it results in a small loss of accuracy, gives the huge benefit of getting privacy for free without spending any additional computational resources.
D. RQ3 - Sustainability
For RQ3, we want to show the sustainability of our design pattern. With the EM algorithm in place, what we ideally want is that our system does not take any additional computational resources. We decided to use the CPU processing time to estimate the computational resources needed by the two algorithms. We instrumented the CD and EM algorithms and measured how long they took in the first set of experiments in Section V-C above. The resultant graph is shown in Figure 3. The graph shows us that for all data sizes the EM algorithm took less CPU time than the CD algorithm.
Not only does the EM algorithm not require any additional computational resources, it actually reduces the existing computation. Thus, changing to our design pattern will make the software system even more sustainable.
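A minimal CPU-time harness in the spirit of this measurement (our simplified CD and EM stand-ins, not the paper's instrumented code; relative timings will depend on implementation details):

```python
import math
import random
import time

def cpu_time(fn, *args, repeats=200):
    """Total process CPU time for `repeats` calls to fn."""
    start = time.process_time()
    for _ in range(repeats):
        fn(*args)
    return time.process_time() - start

data = [random.randrange(1000) for _ in range(1000)]
w = [math.exp(-0.01 * i) for i in range(1000)]

def cd(values, weights):
    # CD: full weighted-average pass over all elements
    return sum(wi * v for wi, v in zip(weights, values)) / sum(weights)

def em(values, weights):
    # EM: draw a single weighted sample
    return random.choices(values, weights=weights, k=1)[0]

t_cd = cpu_time(cd, data, w)
t_em = cpu_time(em, data, w)
```

`time.process_time` measures CPU time rather than wall-clock time, which matches the resource-usage question asked here.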
E. Threats to Validity
The notion of Differential Privacy may not relate to the user-centric view of Privacy, as users might think it “strange” that the system assumes that bad things can happen anyway - the guarantee it gives concerns only whether the user data is part of the system or not. While that is true, we feel that differential privacy has many compelling arguments in its favor - the biggest, for us, is not having to decide what data is sensitive and what is not. The differential privacy algorithms treat all data as sensitive, making it easier not to leak data by accident. One would, therefore, not have to deal with the subjective nature of deciding what is sensitive. We also feel that the guarantee might actually make participation even more compelling for the user. From their point of view: “if bad things are going to happen anyway, it’s not going to hurt me much more if I participate... so there’s no harm in participating.”
We used synthetic data in our evaluations rather than real-world data. For the research questions that we had - feasibility, utility, sustainability - synthetic data was sufficient. The only benefit of using real world data would be to answer some other research questions that are outside the scope of this paper.
VI. RELATED WORK
There have been some recent papers on data privacy and software testing. Clause and Orso [10] propose techniques for the automated anonymization of field data for software testing. They extend the work done by Castro et al. [46] using novel concepts of path condition relaxation and breakable input conditions, improving the effectiveness of input anonymization. Our work is orthogonal to the papers on input anonymization. The problem they address is: how can users anonymize sensitive information before sending it to the teams or companies that build the software? The problem we address is: how can systems that already have access to user data (such as purchase history, movie preferences, and so on) be engineered so that they don’t leak sensitive information while doing data mining on the data? Further, the aim of our approach is to provide privacy “for free,” i.e., without spending extra computational resources on privacy. The input anonymization approaches require spending extra computation as they address a different problem. We believe that our approach can be combined with the input anonymization approach if needed, based on user needs. If users are worried about developers at the company finding out sensitive information, input anonymization is essential. If, however, they are worried about accidental data leakage through the data mining of their information, using the “Privacy for Free” design pattern may be more suitable. This would also make the software system more sustainable as we don’t spend any computation doing the anonymization of the inputs.
Taneja et al. [11] and Grechanik et al. [12] propose using k-anonymity [47] for privacy by selectively anonymizing certain attributes of a database for software testing. Their papers propose novel approaches using static analysis for selecting which attributes to anonymize so that test coverage remains high. Similar to above, our approach is orthogonal as we focus on a design pattern that will prevent accidental leakage of sensitive information via data mining or similar techniques. Further, these approaches using k-anonymity also require significant additional computational resources and thus, may not be sustainable when energy resources are scarce.
Our differential privacy approach has the added benefit of being able to work with any kind of data, not being limited to just integers. Finally, work on input anonymization and k-anonymization both focus on software testing, whereas our approach focuses on a design pattern for building privacy-preserving systems with a specific goal: to make privacy sustainable and not require additional resources.
There has also been a lot of work related to data anonymization and building accurate data models for statistical use (e.g., [48]–[52]). These techniques aim to preserve certain properties of the data (e.g., statistical properties like average) so they can be useful in data mining while trying to preserve privacy of individual records. The broad approaches include aggregating data to a higher level of granularity or adding noise and random perturbations. As we are interested in sustainable ways of achieving privacy, these approaches are not applicable as they typically require (a lot of) extra computational effort.
While there has been a lot of interest (and research) in data anonymization, we would like to reiterate that data anonymization alone might not be enough. Narayanan and Shmatikov [7] demonstrate a relatively straightforward way of breaking the anonymity of data. They show how it is possible to correlate public IMDb data with private anonymized Netflix movie rating data resulting in the potential identification of the anonymized individuals. Backstrom et al. [53] also describe a series of attacks for de-anonymizing social networks that have been anonymized to be made available to the public. They describe two categories of attacks - active attacks where an evil adversary targets an arbitrary set of users and passive attacks where existing users try to discover their location in the network and thereby cause de-anonymization. Their results show that, with high probability and modest computational requirements, de-anonymization is possible for a real world social network (in their case, LiveJournal [54]).
VII. Conclusion
As social computing systems that collect users’ data proliferate, privacy has become, and will continue to be, a major concern for society at large. The main research question that we wanted to answer is: Is there a general purpose architecture or design pattern that can be used with a wide range of large complex software systems, that will achieve privacy without spending any extra resources on computational overhead? Our “Privacy for Free” design pattern can achieve privacy as an accidental and beneficial side effect of doing some existing computation. The results of our evaluations show the feasibility, utility, and in particular, the sustainability of our approach as it does not require any additional computational resources to guarantee privacy.
Acknowledgment
Sheth and Kaiser are members of the Programming Systems Lab, funded in part by NSF CNS-0717544, CNS-0627473 and CNS-0426623, and NIH 2 U54 CA121852-06. Malkin is a member of the Crypto Lab, funded in part by NSF 0831094 and 0347839 and DHS N66001-09-C-0080.
References
Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
S. K. Pillai, M. K. Jeyakumar
Abstract—Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data to do so: the known target values (actual) and the outputs produced by the model are compared, and the differences between the two form the basis for estimating the parameters. Different criteria are used to compare models developed from the same data. Here we use data obtained from small-scale projects and consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We have also evaluated prediction accuracy on a second data set using the new criterion. The new criterion is easier to comprehend than a single summary statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
Keywords—Software effort estimation, accuracy, Radial Basis Function, linear least squares.
I. INTRODUCTION
Modeling of a system is critical to understanding and predicting its behavior. In software development, owing to the intangible nature of software and the absence of a manufacturing phase, each software product is unique; we only make copies of the software, which is done in a short time. Since software engineering is not yet as mature as conventional engineering disciplines, there is no established handbook and no universal standards certification for software. The problem is further complicated because size measurement is not universally standardized either. In spite of these problems, managers and software engineers have to develop a plan using estimation techniques. Generally, Lines of Code (LOC) or Function Points (FP) are used as the basic size measure. Methods of varying complexity have been proposed for software effort estimation: expert based [1], analogy based [2], analytical [3], and machine learning based [4]. Among the machine learning methods, neural networks play a major role in Software Development Effort Estimation (SDEE) [5]. One can design a Radial Basis Function (RBF) network by changing only one parameter, the function width (spread), which is also known as the impact factor [6]. RBF networks are frequently used for SDEE, and it has been shown that they perform well [7]–[9]. This motivated the authors to use RBF networks for estimating small projects. An estimate is essential at the early stages of a project to plan manpower, schedule, and cost. Underestimation may lead to poor quality, a reduced scope, or even cancellation of the project; this can happen when a project is squeezed into a budget under management pressure. On the other hand, overestimation can lead to underutilization of staff, or an organization may lose the project in the bidding itself. Both cases are detrimental to an organization, so one has to estimate effort as accurately as possible. Here lies the real problem: the definition of accuracy [10].
A new method of evaluating accuracy based on linear least squares is proposed: a linear relationship between actual effort and predicted effort is fitted to the test data. We have mainly used the data given in [11] for our studies. The paper is organized as follows: the next section reviews related work, followed by a description of the radial basis function neural network. Experimental evaluations using the new method are provided in the subsequent section. Conclusions are given at the end, followed by references.
II. RELATED STUDIES
SDEE, or any prediction (forecasting) accuracy, depends on the input data, the algorithm used, and the criteria used for accuracy computation. Generally, historical data is divided into a training (verification) set and a testing (validation) set. The training data is used to build the model, which is then validated on the test data. SDEE is a function of its inputs, among which the size of software projects plays an important role; for small projects the required effort is also small. Lopez-Martin [11] used a fuzzy logic model based on two independent variables, New & Changed (N&C) code and Reused (R) code, and compared the performance of the fuzzy model with a multiple regression model. The results indicate that there is no difference between these two models. Two fuzzy logic models, Mamdani and Takagi-Sugeno, are studied in [12]; their evaluation against linear regression showed that the Takagi-Sugeno fuzzy system performs better. None of these works compares SDEE using one versus two independent variables. We have used error characteristics to compare the performance of the two models as explained in [10], and we have followed the guidelines suggested in the literature for conducting statistical tests [13].
Commonly used accuracy evaluation criteria are the Mean Magnitude of Relative Error (MMRE) and PRED, which are defined below [10], [14].
MRE = \frac{\text{abs (actual - predicted)}}{\text{actual}} \quad (1)
The magnitude of relative error is calculated for each project; these values are summed over all projects and averaged:

\text{MMRE} = \frac{\sum_{i=1}^{n} \text{MRE}_i}{n} \quad (2)

PRED measures the fraction of sufficiently accurate predictions:

\text{PRED}(l) = \frac{k}{n} \quad (3)

where k is the number of projects that have a relative error MRE less than l.
If the actual value is 100 and predicted value is 10 then MRE is 90%. On the other hand if the predicted value is 100 and the actual is 10 then MRE is 900%. Although in both cases, the error is 90, MRE favors lower estimate. To avoid this, Mean Magnitude of Error Relative (MMER) is introduced where the denominator is replaced with predicted instead of actual.
\text{MER} = \frac{\text{abs (actual - predicted)}}{\text{predicted}} \quad (4)
\text{MMER} = \frac{\sum_{i=1}^{n} \text{MER}_i}{n} \quad (5)
This statistic favors overestimation. Another reason to support (4) is that the error (actual - predicted) is correlated with the actual value. To avoid both problems, the use of the balanced relative error has been suggested:
\text{BRE} = \frac{\text{abs (actual - predicted)}}{\text{min(actual, predicted)}} \quad (6)
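The criteria above are straightforward to compute. The following pure-Python sketch is illustrative (it is not the code used in the study) and also demonstrates the asymmetry of MRE discussed earlier:

```python
def mre(actual, predicted):
    # Magnitude of relative error per project, Eq. (1)
    return [abs(a - p) / a for a, p in zip(actual, predicted)]

def mmre(actual, predicted):
    # Mean MRE over n projects, Eq. (2)
    errs = mre(actual, predicted)
    return sum(errs) / len(errs)

def pred(actual, predicted, l=0.25):
    # PRED(l): fraction of projects with MRE <= l
    errs = mre(actual, predicted)
    return sum(1 for e in errs if e <= l) / len(errs)

def mmer(actual, predicted):
    # Mean MER, Eqs. (4)-(5): the denominator is the prediction
    return sum(abs(a - p) / p for a, p in zip(actual, predicted)) / len(actual)

def mean_bre(actual, predicted):
    # Mean balanced relative error, Eq. (6)
    return sum(abs(a - p) / min(a, p)
               for a, p in zip(actual, predicted)) / len(actual)

# MRE's asymmetry: it favors underestimates.
print(mre([100], [10]))   # underestimate -> [0.9]
print(mre([10], [100]))   # overestimate  -> [9.0]
```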
Also, the mean or standard deviation of the errors is affected by extreme values. The problem with all of these criteria is that each is a single summary statistic. Instead, we propose to fit a linear least squares line between actual and predicted values; ideally, this line has intercept zero and slope one. The major advantage is that we compare against these exact target values instead of looking for a minimum of MMRE/MMER or a maximum of PRED.
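The proposed criterion amounts to an ordinary least-squares fit of actual effort against predicted effort, followed by checking that the slope is near one and the intercept near zero. A minimal sketch, with illustrative sample values:

```python
def fit_line(actual, predicted):
    # Ordinary least squares fit: actual = a * predicted + b.
    # An ideal predictor gives slope a = 1 and intercept b = 0.
    n = len(actual)
    mx = sum(predicted) / n
    my = sum(actual) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, actual))
    sxx = sum((x - mx) ** 2 for x in predicted)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# A perfect predictor recovers slope 1, intercept 0:
print(fit_line([19, 67, 195], [19, 67, 195]))  # -> (1.0, 0.0)
```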
\section*{III. Measurements}
We have used the data given in [11] and [15] for our experimentation. LOPEZ1 data consists of Actual Effort (AE), N&C code (N&C) and Reused code (R) for small projects in an academic setting [11]. Effort in minutes is the dependent variable or response and the two independent variables or predictors are N&C code and R code. For training 163 projects are used and for testing 68 projects are used. Table I summarizes both training (N&C, R, AE) and test data (N&CT, RT, AET). Pearson correlation coefficients of different variables are given in Table II. It can be observed that the linear correlation of R code with Actual Effort is small compared with N&C code correlation. More details of the data are available in [11].
LOPEZ2 data consists of three independent variables, McCabe Complexity (MC), Dhama Coupling (DC), Lines of Code (LOC), and a dependent variable Development Time (DT) in minutes [14]. It has a total of 41 observations. We have randomly selected eight observations for test and the rest for training. As the sample size is not large we have provided summary statistics in Table III for the total data. Correlation coefficients of different variables are given in Table IV. It can be seen that all the correlations are significant.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Variable} & \textbf{Mean} & \textbf{Stdev} & \textbf{Minimum} & \textbf{Median} & \textbf{Maximum} \\
\hline
N&C & 35.56 & 26.60 & 10.00 & 27.00 & 137.00 \\
R & 41.82 & 30.86 & 4.00 & 34.00 & 149.00 \\
AE & 77.07 & 37.81 & 19.00 & 67.00 & 195.00 \\
N&CT & 44.93 & 21.28 & 12.00 & 41.00 & 104.00 \\
RT & 35.43 & 23.71 & 1.00 & 30.00 & 100.00 \\
AET & 79.16 & 26.47 & 11.00 & 78.00 & 144.00 \\
\hline
\end{tabular}
\end{center}
\caption{Characteristics of Lopez1 Data}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Variable} & \textbf{Mean} & \textbf{Stdev} & \textbf{Minimum} & \textbf{Median} & \textbf{Maximum} \\
\hline
MC & 2.707 & 1.006 & 1.000 & 3.000 & 5.000 \\
DC & 0.169 & 0.058 & 0.077 & 0.167 & 0.333 \\
LOC & 13.610 & 5.563 & 4.000 & 13.000 & 31.000 \\
DT & 16.634 & 3.673 & 9.000 & 16.000 & 25.000 \\
\hline
\end{tabular}
\end{center}
\caption{Characteristics of Lopez2 Data}
\end{table}
\begin{table}[h]
\begin{center}
[Pearson correlation coefficients among N&C, R, AE and their test counterparts; the values could not be recovered from the source, see [11].]
\end{center}
\caption{Pearson Correlation Coefficients for Lopez1 Data}
\end{table}
\begin{table}[h]
\begin{center}
[Pearson correlation coefficients among MC, DC, LOC, and DT; the values could not be recovered from the source. As noted above, all correlations are significant.]
\end{center}
\caption{Pearson Correlation Coefficients for Lopez2 Data}
\end{table}
\section*{IV. Radial Basis Function Neural Network}
Neural networks are popular in applications where we cannot specify the exact relationship between input and output, or where the relationship is nonlinear. Feed-forward neural networks require many parameters to be specified and are trained iteratively. RBF networks, however, are iteration free, and their output is determined in a straightforward manner when the output layer is linear [6]. Reference [14] concludes that, for the software industry, the RBF network is better suited to effort prediction than the back-propagation neural network. The architecture of an RBF network is shown in Fig. 1; it consists of an input layer, a hidden layer, and an output layer. The hidden layer has h neurons and uses the radial basis function
\begin{equation}
\varphi_j(x) = \exp\left[-\frac{\|x - c_j\|^2}{\sigma_j^2}\right]
\end{equation}
\begin{equation}
F(x) = \sum_{j=1}^{h} \beta_j \varphi_j(x)
\end{equation}
where \(c_j\) is the center and \(\sigma_j\) is the radial distance or spread of the \(j\)-th hidden neuron. The network output is given by \(F(x)\).
The output layer weights are determined using the generalized inverse. In our study we have used the MATLAB R2010a® Neural Network Toolbox functions.
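The iteration-free training described above (Gaussian hidden units plus a linear output layer solved through a generalized inverse) can be sketched in plain Python. This is an illustrative sketch that solves the normal equations instead of forming a pseudo-inverse, and it is not the MATLAB toolbox implementation; the toy data and spread value are arbitrary:

```python
import math

def rbf_design(xs, centers, spread):
    # Hidden-layer activations: phi_j(x) = exp(-||x - c_j||^2 / spread^2)
    return [[math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / spread ** 2)
             for c in centers] for x in xs]

def lstsq(A, y):
    # Least-squares output weights via the normal equations, a stand-in
    # for the generalized inverse mentioned above: beta = (A^T A)^-1 A^T y.
    m = len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(m)]
           for i in range(m)]
    aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(m)]
    for i in range(m):                      # forward elimination with pivoting
        p = max(range(i, m), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, m):
            f = ata[r][i] / ata[i][i]
            for c in range(i, m):
                ata[r][c] -= f * ata[i][c]
            aty[r] -= f * aty[i]
    beta = [0.0] * m
    for i in reversed(range(m)):            # back substitution
        beta[i] = (aty[i] - sum(ata[i][j] * beta[j]
                                for j in range(i + 1, m))) / ata[i][i]
    return beta

def rbf_predict(xs, centers, spread, beta):
    # F(x) = sum_j beta_j * phi_j(x)
    return [sum(b * phi for b, phi in zip(beta, row))
            for row in rbf_design(xs, centers, spread)]

# Toy example: centers are taken from the inputs, as in the paper.
train_x = [(0.0,), (0.5,), (1.0,)]
train_y = [0.0, 1.0, 0.0]
beta = lstsq(rbf_design(train_x, train_x, 0.5), train_y)
print(rbf_predict(train_x, train_x, 0.5, beta))  # close to train_y
```

With as many centers as training points, the design matrix is square and invertible, so the network interpolates the training targets exactly up to floating-point error.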
V. EXPERIMENTAL RESULTS
A. LOPEZ1 Data
Studies were made for the LOPEZ1 training data containing 163 observations. In an RBF neural network, the radial basis function has two constants, \(c_j\) and \(\sigma_j\). The centers \(c_j\) are selected from the input data, so the user can only specify the spread. The spread was varied from 0.1 to 10 for the two inputs, N&C and R, and one output, effort. The RBF performance, mean square error (MSE), is lowest (0.0194) when the spread is 1.0 and the number of hidden neurons is seven. For the single input N&C, the minimum MSE, 0.0190, is achieved when the spread is 1.0. The trained network is used to evaluate the prediction capability of the RBF network on 68 projects. The box plots of training errors and test (prediction) errors are given in Fig. 2 for both the single-variable (RBF1) and two-variable (RBF2) cases. The mean, median, and inter-quartile range (IQR) of the error (actual - predicted) are given in Table V. It can be observed that the difference between one and two variables is small. To validate this observation, the resulting p-values for the t-test and the Mann-Whitney nonparametric test are given in Table VI, together with effect sizes as suggested in the literature [13]. It is clear that statistically there is no significant difference between using one or two variables.
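The spread selection just described is a simple grid search on training MSE. A hedged sketch follows, where `fit_predict` is a hypothetical stand-in for training an RBF network at a given spread and predicting on the training set (the demo model and its error behavior are invented for illustration):

```python
def mse(predicted, actual):
    # Mean square error between predictions and targets.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

def pick_spread(spreads, fit_predict, inputs, targets):
    # Grid search: train at each candidate spread and keep the spread
    # that yields the lowest mean square error.
    # `fit_predict(inputs, targets, spread)` is a hypothetical stand-in
    # for RBF training followed by prediction on the training set.
    best = min(spreads,
               key=lambda s: mse(fit_predict(inputs, targets, s), targets))
    return best, mse(fit_predict(inputs, targets, best), targets)

# Toy stand-in model whose error vanishes as the spread approaches 1.0:
demo = lambda x, t, s: [ti + abs(s - 1.0) for ti in t]
print(pick_spread([0.1, 0.5, 1.0, 5.0, 10.0], demo, [], [0.0, 1.0]))  # -> (1.0, 0.0)
```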
We want to fit a linear least squares equation between actual and predicted effort.
\[
\text{actual effort} = a \times \text{predicted effort} + b
\]
If the actual effort and predicted effort are equal, the intercept \(b\) should be zero and the slope \(a\) should be unity. The coefficients obtained for the LOPEZ1 data are shown in Table VII. The result indicates some bias in the prediction for the test data, as given by the intercept; the RBF estimates the training data well. The one-variable test data gives a slightly lower intercept and higher slope, which shows that a single input is better than two inputs for prediction on the LOPEZ1 data set.
B. LOPEZ2 Data
Studies were made for the LOPEZ2 training data containing 33 observations. The spread parameter was varied from 0.1 to 1.0 for the three inputs, McCabe Complexity, Dhama Coupling, and LOC, and one output, Development Time. The RBF performance, mean square error, is lowest (0.01329) when the spread is 0.40 and the number of hidden neurons is five. The trained network is used to evaluate the prediction capability on eight projects. The box plot of training errors and test (prediction) errors is given in Fig. 3. The mean, median, and inter-quartile range (IQR) of the errors are given in Table VIII.
We want to fit a linear least squares equation between actual and predicted effort. If the actual effort and predicted effort are equal, the intercept (b) should be zero and slope (a) should be unity. The coefficients obtained for LOPEZ2 data are shown in Table IX. This indicates that the prediction is not as good as training.
<table>
<thead>
<tr>
<th colspan="2">TABLE VIII: CHARACTERISTICS OF TRAINING AND TEST ERRORS LOPEZ2 DATA</th>
</tr>
<tr>
<th></th>
<th>Training</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mean</td>
<td>0.000</td>
</tr>
<tr>
<td>Median</td>
<td>-0.017</td>
</tr>
<tr>
<td>Inter Quartile Range</td>
<td>1.592</td>
</tr>
</tbody>
</table>
[The test-error column could not be recovered from the source.]
<table>
<thead>
<tr>
<th colspan="2">TABLE IX: COEFFICIENTS FOR TRAINING AND TEST LOPEZ2 DATA</th>
</tr>
<tr>
<th>Coefficients</th>
<th>Training</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intercept (b)</td>
<td>0.000</td>
</tr>
<tr>
<td>Slope (a)</td>
<td>1.000</td>
</tr>
</tbody>
</table>
[The test column could not be recovered from the source.]
**VI. CONCLUSION**
Based on this study, one may choose the single variable N&C for effort prediction of small programs, as the statistical tests show little difference between the one- and two-variable cases for the LOPEZ1 data. The new evaluation criterion, fitting a linear least squares line and checking its intercept and slope, also favors one variable for effort estimation. For the LOPEZ2 data, RBF training is good but the prediction accuracy is not. Future studies should aim to bring the intercept of the linear least squares fit between actual and predicted effort close to zero and its slope close to one. The goal of this paper is to demonstrate the use of the new evaluation criterion; we have not tried to compare different models, although we have used two different data sets. The two major conclusions of the present study are: i) the use of linear correlation as a preprocessing step helps to select independent attributes for effort estimation, and ii) linear regression can be used to evaluate the prediction capability of a model. Although we have used the effort estimation problem to demonstrate the new criterion, the method can be used for any model evaluation: it compares against the expected slope of one and intercept of zero of a straight line, rather than searching for a relative minimum or maximum of a summary statistic.
**REFERENCES**
S. K. Pillai received his B.E. in Electrical Engineering in 1971 from Madurai University, Madurai, Tamilnadu, India. He obtained his M.Tech. from IIT Madras, Chennai, Tamilnadu, India, in 1973. After his masters, he joined the Indian Space Research Organization and worked there for 22 years. He then joined NeST, Trivandrum, India, as President and worked there for eight years. Afterwards, he was Vice-President at HCL Technologies for six years. He holds a Six Sigma Black Belt from the American Society for Quality, a Six Sigma Black Belt from the Quality Assurance Institute, and a Six Sigma Master Black Belt from the Indian Statistical Institute. Currently, he is working as a Professor in the Department of Electrical and Electronics Engineering while pursuing his Ph.D. in Computer Science and Engineering. He has published more than 30 papers in peer-reviewed journals and conferences. He is a senior member of IEEE and a member of ACM, a life member of the Computer Society of India and the National Institution for Quality and Reliability, and a Fellow of the Institution of Engineers, India. His interests include soft computing and software engineering.
Dr. M. K. Jeyakumar received his postgraduate degree, Master of Computer Applications, from Bharathidasan University, Tiruchirappalli, Tamilnadu, India, in 1993. He received his M.Tech. degree in Computer Science and Engineering from Manonmaniam Sundaranar University, Tirunelveli, Tamilnadu, India, in 2005, and completed his Ph.D. degree in Computer Applications from Dr. M.G.R. Educational and Research Institute University, Chennai, Tamilnadu, India, in 2010. He is presently working as Professor in the Department of Computer Applications and Additional Controller of Examinations at Noorul Islam Centre for Higher Education, Kumaracoil, Tamilnadu, India. He has twenty years of teaching experience in this reputed institution. He has published thirty-six research papers in international and national journals and has presented more than twenty research papers at international and national conferences conducted by esteemed organizations. His research interests are mobile computing and network security.
Pitfalls and Solutions for Technical Debt Management in Agile Software Projects
S. Freire
Federal University of Bahia and Federal Institute of Ceará
N. Rios
Federal University of Rio de Janeiro
B. Pérez
University of Los Andes and Francisco de Paula Santander University
C. Castellanos
University of Los Andes
D. Correal
University of Los Andes
R. Ramač
University of Novi Sad
N. Taušan
INFORA Research Group
V. Mandić
University of Novi Sad
A. Pacheco
University of Costa Rica
G. López
University of Costa Rica
M. Mendonça
Federal University of Bahia
C. Izurieta
Montana State University and Idaho National Laboratories
D. Falessi
University of Rome “Tor Vergata”
C. Seaman
University of Maryland Baltimore County
R. Spínola
Salvador University and State University of Bahia
Abstract—As technical debt (TD) management balances short-term and long-term goals, having information on the impediments, decision factors, enabling practices, and actions (IDEA) related to TD management can support software teams in improving their ability to effectively manage debt items. This article presents TD IDEA Diagrams for TD management in agile software projects. The diagrams are grounded in reports from 274 practitioners from six countries. They organize practices for TD prevention, monitoring, and repayment, as well as the impediments to applying them. By analyzing the diagrams, professionals can avoid pitfalls and increase their capacity for better TD management.
Technical debt (TD) management encompasses the prevention, monitoring, and repayment of TD items [1], which are the result of intentional shortcuts or even mistakes taken in software development [2,8]. TD can sometimes bring a short-term benefit to the project, usually in terms of increased development speed or shortened time to market, but may have to be paid with interest later on [2]. There are specific practices for TD prevention, monitoring, and repayment, as well as practices that help software teams to better manage TD. There are also common reasons for not applying these practices, which include: (1) impediments that prevent teams who want to manage the debt from doing so; and (2) technical or administrative reasons that lead a team to decide against managing TD effectively [3].
Identifying these practices and practice avoidance reasons (PARs) aids in choosing appropriate strategies to control TD. Moreover, understanding how they interact in combination allows a comprehensive view of the TD management landscape. In this article, we present IDEA Diagrams (Impediments, Decision factors, Enabling practices, and Actions) to help frame TD management. Loosely inspired by SWOT (strengths, weaknesses, opportunities, and threats) analysis [5], the IDEA Diagrams organize TD management practices and PARs into quadrants. This article presents IDEA diagrams for TD prevention, monitoring, and repayment in agile projects. To populate the diagrams, we use the practices and PARs reported by 274 practitioners from the agile software industry who responded to the InsighTD survey [6].
This article supports practitioners in:
- Identifying common practices used for preventing, monitoring, and repaying debt items;
- Starting or improving TD management initiatives, presenting the PARs that curb the improvement of the team’s ability to manage debt;
- Understanding impediments to TD management practices that can be addressed in TD management initiatives.
Overall, identifying and understanding key issues (impediments and decision factors) and capabilities (actions and enabling practices) for TD management support agile teams in balancing the long-term and short-term goals of a development project, contributing to the effectiveness of their TD management initiative. See [9,10] to learn more about TD and agile software development.
HOW WE BUILT THE DIAGRAMS
Our work is part of the InsighTD Project, a globally distributed family of industrial surveys on TD causes, effects, and management involving researchers from multiple countries [6]. The survey begins with characterization questions, then asks the participants to define TD and to describe a particular example of TD. Only participants with valid responses to those questions are considered for data analysis [6]. Then, for each of the investigated TD management activities (prevention, monitoring, and repayment), it asks whether the participant performed the activity (a yes/no question). If “no”, it asks why; if “yes”, it asks how. Thus, for example, if a participant indicates that it is possible to prevent the TD item (yes), practices for TD prevention are elicited; otherwise (no), the participant provides reasons for not preventing the debt. A detailed description of the survey instrument, as well as its planning and data analysis procedures, is presented in [6]. Several results from the InsighTD project (http://www.td-survey.com/publication-map/) have been previously published; the novelty of this paper is twofold: 1) it addresses TD management practices (particularly TD monitoring) in agile projects for the first time, and 2) it describes the IDEA diagrams for the first time.
Our findings are based on the responses (available at https://bit.ly/3w5hfCZ) from 274 practitioners from six countries, all of whom indicated, in one of the characterization questions, that they work with agile processes. We used open coding [3,4,7] to extract the practices and PARs from the surveys' textual responses. Two researchers independently coded the entire set of responses from Brazil, followed by discussions aimed at reaching consensus. The InsighTD teams in Chile, Colombia, Costa Rica, Serbia, and the US then used the list of codified practices and PARs to standardize the nomenclature found in their results. Afterwards, the Brazilian team reviewed and merged these results to provide the final list that forms the basis of the IDEA analysis (see Sidebar 1).
(Sidebar 1) Threats to the Study Validity
The main threat to the validity of the study relates to the coding of practices and PARs, which is a subjective task and could lead to biased results. Our analysis process, involving iteration, review, and multiple coders, mitigates this. We also reduced external validity threats by focusing the study on practitioners from the agile software industry with diverse experience and work environments. The InsighTD questionnaire and the study planning are subject to some internal validity threats. To address these, we performed three internal validations involving researchers from the project who did not work on its planning and an external independent researcher. Lastly, we also ran a pilot study to check the consistency of the process, instruments, and procedures [6].
Figure 1 presents demographic information. Although it is not possible to guarantee that the participants represent all practitioners working with agile processes in the software industry in those countries, the sample encompasses a broad and diverse set of practitioners, which helps strengthen the analytical generalizability of the study.
In total, we identified 73 prevention, 25 monitoring, and 17 repayment practices, as well as 15 reasons for non-prevention, 22 for non-monitoring, and 19 for non-repayment of TD (our PARs). We identified two types of practices: actions and enabling practices that increase a team's ability to manage TD. The former are used to prevent, monitor, or repay debt items, such as following the project planning, creating a TD item backlog, and code refactoring, respectively. The latter improve the capacity of the team to perform those actions, for example, training and the use of tools. We categorized the PARs into decision factors and impediments. A decision factor indicates that not managing the debt is a choice, for example, lack of interest or a focus on short-term goals. Impediments, in contrast, prevent a team that wants to manage the debt from doing so, such as a short deadline or cost.
THE IDEA DIAGRAMS FOR TD PREVENTION, MONITORING, AND REPAYMENT
Inspired by SWOT analysis, which supports the definition of strategies to achieve organizational goals, organizing the strengths, weaknesses, opportunities, and threats into a matrix [5], the IDEA diagrams organize issues (decision factors and impediments) and capabilities (actions and enabling practices) into four quadrants. The scope of IDEA diagrams, however, is not organizational planning. They are aimed at supporting software teams concerned with TD management. They work as a communication tool, representing, in a simplified way, concerns that practitioners should have when improving the management of TD.
Figure 1. Overview of participants' characterization.
IDEA analysis can reveal capabilities or issues that the project team can improve, maintain, or reduce to manage debt items. To this end, we created the TD IDEA diagrams for TD prevention, monitoring, and repayment presented in Figure 2. They organize the TD management practices and PARs into the matrix's quadrants following the type of each TD management variable. TD preventive, monitoring, and repayment actions indicate a team's capability to deal with debt items, while enabling practices can increase the team's capabilities for improving TD management. Impediments represent factors outside the control of the team. Finally, the decision factors reveal the team's points of view that lead them not to prevent, monitor, or repay debt items.
Figure 2. TD IDEA conceptual model (A) and TD IDEA diagrams for TD prevention (B), monitoring (C), and repayment (D) in agile processes.
The practices and PARs are presented in their corresponding quadrants, ordered by how frequently they were reported in the survey (as a percentage). For example, short deadline (58%) means that 58% of agile practitioners who reported an impediment for TD prevention (see Figure 2B) experienced it in their projects. These percentages do not indicate whether a practice or a PAR is critical. To keep the diagrams concise, we consider only the 10 most frequently cited practices and PARs. A detailed analysis of them by the variables presented in Fig. 1 is available at https://bit.ly/3w5hfCZ.
USING THE DIAGRAMS
IDEA analysis can assist with the definition of TD management strategies by analyzing one or two quadrants of the matrix at a time. When looking at isolated quadrants, on the left side of each diagram, the team can see which practices can be applied to prevent, monitor, or repay debt items (actions) and also which practices help enable the team to better manage debt items (enabling practices). For example, Figures 2B, 2C, and 2D show that well defined requirements, TD item backlog, and code refactoring are the most commonly used practices to prevent, monitor, and repay debt items, respectively. Training, use of tools, and investing effort on TD repayment are the most common practices used to create a development environment conducive to debt prevention, monitoring, and repayment, respectively. This constitutes valuable information for software teams initiating TD management. Teams who already have an established management process can use this as a benchmark to analyze and improve their current practices.
On the right side of the diagram, teams can see the decision factors that lead to not managing debt items and the impediments that restrict TD management. Surmounting these PARs can be decisive to successfully manage debt items. For example, the most commonly found impediments for TD prevention, monitoring, and repayment are, respectively, short deadlines, lack of time, and again lack of time. This is a strong indication that managing development time is essential to putting a TD management strategy in place. The PARs that negatively affect decisions to manage debt items include ineffective management, lack of interest, and focus only on short term goals. Changing the team’s mindset on the importance of managing TD is definitely a key issue there.
Examining relationships between quadrants can also be useful. For example, agile teams can reduce weak areas by considering the enabling practices and decision factors quadrants. For instance, if the team wants to reduce its lack of predictability in the software development (weakness) in order to better prevent TD, Figure 2B suggests that adopting risk and impact analysis and refactoring (opportunities) could help by minimizing the chances of unexpected events during software development, thus boosting predictability and TD prevention.
Analyzing the actions and impediments quadrants can help a development team reduce the impediments to TD prevention. For example, by examining Figure 2B, the team could recognize short deadlines as one of its impediments to TD prevention, and also see that investing in following the project planning, and well-planned deadlines (actions), could help overcome this impediment by ensuring that deadlines are reasonable and designed to incorporate TD prevention actions from the beginning.
As another example, consider a team trying to understand why they are not monitoring TD effectively. If the team examines the decision factors and impediments quadrants of Figure 2C, they could recognize lack of interest as a decision made by their team, meaning that few managers and developers seem to be interested in monitoring TD at all. This is a problem, obviously, and some clues to addressing it could be found in the impediment quadrant, specifically lack of knowledge on TD and lack of understanding about the impact of the debt. The team might then conclude that addressing these impediments, possibly through education and collection of data about the evolution of the product, could address the lack of interest and lead to better monitoring of TD.
Finally, examining the actions and enabling practices quadrants provides teams with a way to boost their TD management ability by suggesting other practices that could be implemented. For example, Figure 2D indicates that code refactoring, design refactoring, and solving technical issues are repayment activities (actions). If a team recognizes these actions in their own development process but wants to build on them, the IDEA diagram suggests enabling practices such as investing effort on TD repayment activities and negotiating deadline extensions, which would create more resources for the repayment activities to be even more effective and would also increase the chances of continuously investing in TD repayment actions.
CONCLUSION
Some of the insights given by the IDEA diagrams may seem obvious to experienced managers. However, the diagrams provide a framework to ensure a holistic analysis for TD prevention, monitoring, and repayment. For example, in the agile context:
- Repayment and monitoring actions are more commonly related to technical and managerial activities, respectively, while preventive actions are usually related to both;
- Training, using tools, and investing effort on TD repayment increase the team’s capability to manage TD;
- Ineffective management, lack of interest, and focusing on short-term goals are the primary decision factors for not managing TD items;
- Lack of time and short deadlines are the main impediments to performing TD management.
For an agile team that wants to start managing TD, the ranked lists of practices and PARs organized in each of the TD IDEA diagrams provide valuable guidance on what to employ (practices) or curb (PARs) based on experience from other development teams. If a team already has experience in managing TD, it can identify other commonly used practices or other PARs faced, and can also identify enabling activities (enabling practices) that will improve the team’s ability to manage TD. In other words, teams can create their own TD IDEA diagrams. This exercise is beneficial in and of itself, but it is also useful to compare a team-owned diagram to those of others, providing learning opportunities on TD management among teams or squads. Thus, we encourage readers to go further, creating TD IDEA diagrams for their own context and comparing them to the ones presented in this article.
SÁVIO FREIRE is a PhD student at the Department of Computer Science at the Federal University of Bahia, Salvador, 40170-110, Bahia, Brazil, and an assistant professor at the Federal Institute of Ceará, Morada Nova, Ceará, 62.940-000, Brazil. He is a researcher at the Technical Debt Research Team (www.tdresearchteam.com) at the Salvador University. His research interest encompasses technical debt and empirical software engineering. Contact him at savio.freire@ifce.edu.br.
NICOLLI RIOS is a postdoctoral researcher at PESC (Systems and Computer Engineering Program)/COPPE (The Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering) at Federal University of Rio de Janeiro, Rio de Janeiro, 21941-972, Brazil. She is also a researcher at the Technical Debt Research Team (www.tdresearchteam.com). She holds a PhD in Computer Science from the Federal University of Bahia. Her main research interests are technical debt and empirical software engineering. Contact her at nicolli@cos.ufrj.br.
BORIS PÉREZ is an assistant professor in Francisco de Paula Santander University's Department of Systems and Informatics in Cúcuta, 12E-96, Colombia. Also, he is a PhD candidate at the Department of Systems and Computing Engineering, Universidad de Los Andes, Bogota, 111711, Colombia. His research interests include software architecture, model-driven architecture, and technical debt. Boris received his MSc in software engineering from Universidad de Los Andes. Contact him at borisperezg@ufps.edu.co.
CAMILO CASTELLANOS is a PhD candidate in the Department of Systems and Computing Engineering, Universidad de Los Andes, Bogota, 111711, Colombia. His research fields include software architecture, big data analytics, and model-driven engineering. Contact him at cc_castellanos87@uniandes.edu.co.
DARÍO CORREAL, PhD, is associate professor at Los Andes University Bogota, 111711, Colombia. His main research topics are software architecture, solutions architectures, service-oriented software engineering and blockchain solutions. More can be found at https://profesores.virtual.uniandes.edu.co/dcorreal/es/inicio/. Contact him at dcorreal@uniandes.edu.co.
ROBERT RAMAČ is a teaching assistant of Software Engineering at the University of Novi Sad, Faculty of Technical Sciences, 21000 Novi Sad, Serbia, and a software developer at TIAC ltd. Currently he is enrolled as a PhD student at the University of Novi Sad, at the
Faculty of Technical Sciences. He received two M.Sc. degrees at the University of Novi Sad, master of Information technologies and master of Engineering management. His areas of interest are: empirical software engineering, software development, technical debt, the improvement of the software development process. Contact him at ramac.robert@uns.ac.rs.
NEBOJŠA TAUŠAN is a software engineer at INFORA Research Group, Adolfa Singera 12/A, 24000 Subotica, Serbia. His research interests include empirical software engineering, software architectures, DSLs, and technical debt. Nebojša holds a PhD degree in Information Processing Science and Software Engineering from University of Oulu, Finland, MSc and BSc degrees in Information Systems from Faculty of Economics Subotica, University of Novi Sad, Serbia. Contact him at nebojsa.tausan@infora.rs.
ALEXIA PACHECO (alexia.pacheco@ucr.ac.cr) is a Ph.D. candidate at the graduate program in Computer Science at the University of Costa Rica, Costa Rica. Her research interests include technical debt, software analytics, and empirical software engineering. She holds an MS in Applied Mathematics and a Degree in Computer Science from the University of Costa Rica.
VLADIMIR MANDIĆ is an assistant professor of software engineering at University of Novi Sad, Faculty of Technical Sciences, 21000 Novi Sad, Serbia. His research interests include empirical software engineering, software process improvement, software quality, and technical debt. He holds a PhD degree in Information Processing Science and Software Engineering from University of Oulu, Finland, MSc and BSc degrees in Computer Science from University of Novi Sad, Serbia. Contact him at vladman@uns.ac.rs.
GUSTAVO LOPEZ is a professor of software engineering and human-computer interaction at the University of Costa Rica (San Pedro, San Jose, Costa Rica). He holds a Ph.D. in Computer Science from the same university. His research interests include agile software development, software process improvement, technical debt, user experience, and usability. Contact him at gustavo.lopez_h@ucr.ac.cr.
MANOEL MENDONÇA is a professor of computer science at the Federal University of Bahia (UFBA), Salvador, Ba, 40170-115, Brazil. He holds a PhD in Computer Science from the University of Maryland. His main research interests include software visualization, empirical software engineering, and technical debt. Contact him at manoel.mendonca@ufba.br.
CLEMENTE IZURIETA is an associate professor of computer science in the Gianforte School of Computing at Montana State University, Bozeman, Montana, 59717, USA. His research focuses on quality assurance, technical debt, and cybersecurity. He is active in many collaborative projects where computing techniques enhance outcomes. He is also the chief technology officer of Authors A.I. (authors.ai). He is a Senior Member of IEEE. Contact him at clemente.izurieta@montana.edu.
DAVIDE FALESSI is an assistant professor (RTDb) at the University of Rome “Tor Vergata,” Rome, 00133, Italy. He got his bachelor, master, and PhD degrees from that same university. His research interests include technical debt and machine learning to support software engineering tasks. You can contact him at falessi@ing.uniroma2.it.
CAROLYN SEAMAN is a Professor of Information Systems at the University of Maryland Baltimore County (UMBC), Baltimore, Maryland, 21250, USA. She is also the Director of the Center for Women in Technology, also at UMBC. Her research consists mainly of empirical studies of software engineering, with particular emphases on maintenance, organizational structure, communication, measurement, and technical debt. She also investigates qualitative research methods in software engineering, as well as computing pedagogy. She holds a PhD in Computer Science from the University of Maryland, College Park, a MS from Georgia Tech, and a BA from the College of Wooster (Ohio). More can be found at https://userpages.umbc.edu/~cseaman/. Contact her at cseaman@umbc.edu.
RODRIGO SPÍNOLA is a professor of software engineering at Salvador University, Salvador, Bahia, 41770-235, Brazil, where he leads the Technical Debt Research Team (www.tdresearchteam.com). He is also a visiting professor at the State University of Bahia, Alagoinhas, Bahia, 48000-000, Brazil. His research interests include technical debt and empirical software engineering. He holds a PhD and MS in Computer Science and Systems Engineering from the Federal University of Rio de Janeiro. Contact him at rodrigo.spinola@unifacs.br.
July/August 2021
FaiRank: An Interactive System to Explore Fairness of Ranking in Online Job Marketplaces
Ahmad Ghizzawi, Julien Marinescu, Shady Elbassuoni, Sihem Amer-Yahia, Gilles Bisson
To cite this version:
Ahmad Ghizzawi, Julien Marinescu, Shady Elbassuoni, Sihem Amer-Yahia, Gilles Bisson. FaiRank: An Interactive System to Explore Fairness of Ranking in Online Job Marketplaces. 22nd International Conference on Extending Database Technology (EDBT), Mar 2019, Lisbon, France. 10.5441/002/edbt.2019.61. hal-02347125
HAL Id: hal-02347125
https://hal.archives-ouvertes.fr/hal-02347125
Submitted on 5 Nov 2019
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
FaiRank: An Interactive System to Explore Fairness of Ranking in Online Job Marketplaces
Ahmad Ghizzawi¹, Julien Marinescu², Shady Elbassuoni¹, Sihem Amer-Yahia², Gilles Bisson²
¹The American University of Beirut (Lebanon), ²Univ. Grenoble Alpes, CNRS, LIG (France)
¹{ahg05,se58}@aub.edu.lb, ²firstname.lastname@univ-grenoble-alpes.fr
ABSTRACT
We demonstrate FaiRank, an interactive system to explore fairness of ranking in online job marketplaces. FaiRank takes as input a set of individuals and their attributes, some of which are protected, and a scoring function, through which those individuals are ranked for jobs. It finds a partitioning of individuals on their protected attributes over which fairness of the scoring function is quantified. FaiRank has several appealing features: (1) It can be used by different users: the auditor whose role is to monitor the fairness of ranking in a job marketplace, the job owner seeking to examine the influence of a scoring function and its variants on the ranking of candidates for a job, and the end-user who wants to assess the fairness of jobs on different marketplaces; (2) It is able to quantify fairness under different data and process transparency settings: when some attributes are anonymized and when only the ranking (and not the scoring function) is available; (3) It is interactive and lets its users explore different scoring functions and examine how fairness evolves; (4) It is generic and provides the ability to quantify different notions of fairness. Our demonstration will provide attendees with several scenarios for fairness of ranking in job marketplaces to experiment with and acquire an understanding of this important research question and its impact in practice.
1 INTRODUCTION
Freelancing marketplaces have become an online destination to find a temporary job. The ranking of individuals on platforms such as Qapa and MisterTemp in France, and TaskRabbit and Fiverr in the USA, naturally poses the question of fairness. Fairness in ranking has recently received great attention from the data mining, information retrieval and machine learning communities (see for instance [1, 4, 6, 9, 10]). The most common definition of fairness in decision making was introduced in [2, 11] as demographic parity, and formalized in [3] as group fairness. This definition captures the unequal treatment of a person based on belonging to a certain group of people defined using protected attributes such as gender and ethnicity. For instance, in the French Criminal Law (Article 225-1), 23 such attributes are listed as discriminatory.¹ The exact formulation of fairness varies and the purpose of FaiRank is to explore different formulations and unveil their impact on individuals.
User Roles. FaiRank appeals to different users. The auditor, whose role is to monitor the fairness of ranking in a marketplace, can use FaiRank to examine different jobs on that marketplace and quantify their fairness. The job owner, who wants to study the behavior of a scoring function and its variants, can use FaiRank to understand their impact on the ranking of individuals, and choose the fairest one. Finally, the end-user, who is being ranked, can use FaiRank to assess the fairness of jobs on different marketplaces and make an informed decision.
Positioning. Most previous work on group-level fairness has either assumed that groups are pre-defined [9] or that they are defined using a single protected attribute (e.g., males vs females or whites vs blacks) [5]. FaiRank extends prior work to examine groups of people defined by any combination of protected attributes (the so-called subgroup fairness [6]). The scoring function yields one histogram per group as a score distribution. We use the Earth Mover’s Distance (EMD) [8], a measure commonly used to compare histograms, to quantify the difference between score distributions across groups. The intuition is that if score distributions between groups differ significantly, the scoring function treats individuals in those groups unequally. This allows exploring different fairness formulations in FaiRank as any aggregation function over pairwise distances of score distributions in groups (highest average, lowest variance, etc.).
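To make the histogram comparison concrete, the following is a minimal Python sketch (not the FaiRank implementation) of the Earth Mover's Distance for the one-dimensional case, where EMD reduces to the L1 distance between cumulative histograms; the function name `emd_1d` is an assumption for illustration.

```python
def emd_1d(hist_a, hist_b):
    """1-D EMD between two normalized histograms sharing the same
    equal-width bin edges: sum of absolute CDF differences."""
    assert len(hist_a) == len(hist_b)
    total, cum_a, cum_b = 0.0, 0.0, 0.0
    for a, b in zip(hist_a, hist_b):
        cum_a += a
        cum_b += b
        total += abs(cum_a - cum_b)
    return total

# Identical score distributions are at distance zero;
# fully disjoint ones are maximally far apart.
print(emd_1d([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 0.0
print(emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 2.0
```

A larger distance between two groups' score histograms thus signals that the scoring function treats those groups more unequally.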
Since we do not want to focus only on pre-defined groups to quantify fairness, we must exhaust all possible ways of partitioning individuals into groups based on their protected attributes. This would capture cases where a scoring function treats males and females equally but is unfair to older African Americans compared to younger White Americans for instance. To examine all groups under different fairness definitions, we formulate an optimization problem as finding a partitioning of the ranking space, i.e., individuals and their scores, that exhibits some aggregation over pairwise partitions (e.g., the highest average EMD between partitions, the lowest average, the highest variance, etc.). Exhaustively enumerating all groups is exponential in the number of values of protected attributes. Therefore, to enable interactive response time, FaiRank relies on an efficient heuristic algorithm. At each step, the algorithm greedily splits individuals using the most unfair attribute according to the fairness definition. This local condition is akin to the one made in decision trees using gain functions [7]. The algorithm stops when there are no further attributes left to split on or when the current partitioning of individuals exhibits more unfairness than it would if its partitions were split further.
Data and Function Transparencies. In practice, data about individuals, i.e., their attributes, or the scoring function itself, may not be available. We integrate FaiRank with the k-anonymization ARX tool² and explore fairness for anonymized datasets. When the function is not available, FaiRank builds histograms using ranks of individuals rather than actual function scores.
Demonstration. Our demonstration combines the features of FaiRank to help attendees explore fairness of ranking in online job marketplaces and its impact in practice. It also sheds light
¹https://www.legifrance.gouv.fr/affichCodeArticle.do?idCodeTexte=LEGITEXT000006070719&cidArticle=LEGIARTI000006417828
²https://arx.deidentifier.org/
on the interplay between data and function transparencies and the ability to quantify fairness. Additionally, FaiRank enables the exploration of different scoring functions, which can help choose the fairest for a given job. Finally, FaiRank can be used with standalone datasets and scoring functions, and since it can operate under various transparency settings, it can be used as a service to quantify fairness in existing blackbox job marketplaces.
2 SYSTEM OVERVIEW
Figure 1 depicts the system architecture of FaiRank. The user can select or upload a dataset which consists of a set of individuals and their attributes. The attributes can be protected such as gender, age, location, ethnicity, etc. or reflective of the performance or skills of the individuals such as reputation, knowledge in plumbing, writing skills, and mathematical abilities. Some of these attributes can also be anonymized. The user of the system can define or select a scoring function to rank individuals. The scoring function can be defined on a subset of the attributes of the individuals, for example a linear combination of an individual’s reputation and plumbing skills, or of English writing skills and expertise in computer science. In addition, the user can filter the individuals based on protected attributes. This can be helpful in scenarios where the user is only interested in ranking a subset of individuals that satisfy certain criteria, say only individuals who speak Arabic or who are located in New York city. Instead of a scoring function, the user can also provide some ranking for the individuals (i.e., in the case that the scoring function is not transparent).
FaiRank solves an optimization problem that finds a partitioning of individuals over their protected attributes which maximizes (or minimizes) the unfairness of a given scoring function \( f \). Section 3 formalizes this optimization problem and describes the heuristic algorithm FaiRank uses to solve it.
3 QUANTIFYING FAIRNESS
3.1 Model
To quantify fairness, we model the problem as aggregating a distance between the score distributions of all possible partitions of individuals. Unlike previous work where partitions were defined or known a priori (e.g., [5]), in FaiRank we explore the space of all possible groups defined by a combination of values of the individuals’ protected attributes. The goal becomes finding an unfair partitioning of individuals under the scoring function. This can be formulated in many ways. For instance, the worst-case formulation would correspond to finding the highest distance between partitions. We cast this goal as an optimization problem as follows.
Definition 1 (Most Unfair Partitioning Problem). We are given a set of individuals \( W \), where each individual is associated with a set of protected attributes \( A = \{a_1, a_2, \ldots, a_n\} \) and observed attributes \( B = \{b_1, b_2, \ldots, b_m\} \). The protected attributes are inherent properties of the individuals such as gender, age, ethnicity, origin, etc. The observed attributes represent the skills of individuals for jobs and could include, for instance, the reputation and writing skills of an individual. We are also given a scoring function \( f : W \to [0, 1] \), which is defined using observed attributes as follows: \( f(w) = \sum_{i=1}^{m} \alpha_i b_i \), where \( \alpha_i \) is a user-defined weight for observed attribute \( b_i \). A weight of zero indicates that the corresponding attribute is not relevant for the user in ranking the individuals. Our goal is to fully partition the individuals in \( W \) into \( k \) disjoint partitions \( P = \{P_1, P_2, \ldots, P_k\} \) based on their protected attributes in \( A \) using the following optimization objective:
\[
\text{argmax}_{P} \quad \text{unfairness}(P, f)
\]
subject to
\[
\forall i, j \quad P_i \cap P_j = \emptyset \\
\bigcup_{i=1}^{k} P_i = W
\]
Another formulation, the Least Unfair Partitioning Problem, would be to find the partitioning that results in the smallest unfairness (i.e., \( \text{argmin} \) instead of \( \text{argmax} \) in the formulation above).
We now define how to compute the amount of unfairness of a function \( f \) for a partitioning \( P \), or \( \text{unfairness}(P, f) \) in the above optimization problem.
Definition 2 (Average Pairwise Unfairness). For a set of individuals \( W \), a full disjoint partitioning of the individuals \( P = \{P_1, P_2, \ldots, P_k\} \) and a scoring function \( f \), the unfairness of \( f \) for the partitioning \( P \) is quantified as the average pairwise Earth Mover’s Distance (EMD) between the distributions of scores in the different partitions of \( P \), computed as follows:
\[
\text{unfairness}(P, f) = \frac{1}{\binom{k}{2}} \sum_{i < j} \text{EMD}(h(P_i, f), h(P_j, f))
\]
where \( h(P_i, f) \) is a histogram of the scores of individuals in \( P_i \).
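Definition 2 can be sketched directly in Python; this is an illustrative reconstruction (the helper names `histogram`, `emd_1d`, and `unfairness` are assumptions, not the paper's code), using equal-width bins over the score range [0, 1].

```python
from itertools import combinations

def histogram(scores, bins=5):
    """Equal-width bins over [0, 1], normalized to sum to 1."""
    h = [0] * bins
    for s in scores:
        h[min(int(s * bins), bins - 1)] += 1
    n = len(scores)
    return [c / n for c in h]

def emd_1d(hist_a, hist_b):
    """1-D EMD: L1 distance between cumulative histograms."""
    total, cum_a, cum_b = 0.0, 0.0, 0.0
    for a, b in zip(hist_a, hist_b):
        cum_a, cum_b = cum_a + a, cum_b + b
        total += abs(cum_a - cum_b)
    return total

def unfairness(partitions, f, bins=5):
    """partitions: list of lists of individuals; f: individual -> [0, 1].
    Returns the average pairwise EMD between partition score histograms."""
    hists = [histogram([f(w) for w in part], bins) for part in partitions]
    pairs = list(combinations(hists, 2))
    return sum(emd_1d(a, b) for a, b in pairs) / len(pairs)

# Two partitions whose members score identically are treated fairly.
f = lambda w: w["score"]
same = [[{"score": 0.9}], [{"score": 0.9}]]
print(unfairness(same, f))  # 0.0
```

A partitioning that separates low scorers from high scorers yields a large average pairwise EMD, which is exactly what the Most Unfair Partitioning Problem searches for.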
Another possible formulation is to compute unfairness as the maximum pairwise EMD, which would correspond to finding the partitioning with the highest maximum EMD between any pair of partitions.
Example. Consider the example dataset shown in Table 1, consisting of 10 individuals on a crowdsourcing platform who are ranked for some job using a scoring function \( f \). Figure 2 shows one possible partitioning of those 10 individuals, which results
Table 1: An example dataset consisting of 10 individuals and a scoring function using Language Test and Rating
<table>
<thead>
<tr>
<th>Individual</th>
<th>Gender</th>
<th>Country</th>
<th>Year of Birth</th>
<th>Language</th>
<th>Ethnicity</th>
<th>Experience</th>
<th>Language Test</th>
<th>Rating</th>
<th>f(w)</th>
</tr>
</thead>
<tbody>
<tr>
<td>w1</td>
<td>Female</td>
<td>India</td>
<td>2004</td>
<td>English</td>
<td>Indian</td>
<td>0</td>
<td>0.50</td>
<td>0.20</td>
<td>0.29</td>
</tr>
<tr>
<td>w2</td>
<td>Male</td>
<td>America</td>
<td>1976</td>
<td>English</td>
<td>White</td>
<td>14</td>
<td>0.89</td>
<td>0.92</td>
<td>0.911</td>
</tr>
<tr>
<td>w3</td>
<td>Male</td>
<td>India</td>
<td>1976</td>
<td>Indian</td>
<td>White</td>
<td>6</td>
<td>0.65</td>
<td>0.65</td>
<td>0.65</td>
</tr>
<tr>
<td>w4</td>
<td>Male</td>
<td>Other</td>
<td>1963</td>
<td>Other</td>
<td>Indian</td>
<td>18</td>
<td>0.64</td>
<td>0.76</td>
<td>0.724</td>
</tr>
<tr>
<td>w5</td>
<td>Female</td>
<td>India</td>
<td>1963</td>
<td>Indian</td>
<td>African-American</td>
<td>21</td>
<td>0.85</td>
<td>0.90</td>
<td>0.885</td>
</tr>
<tr>
<td>w6</td>
<td>Male</td>
<td>America</td>
<td>1995</td>
<td>English</td>
<td>African-American</td>
<td>16</td>
<td>0.95</td>
<td>0.98</td>
<td>0.971</td>
</tr>
<tr>
<td>w7</td>
<td>Female</td>
<td>America</td>
<td>1982</td>
<td>English</td>
<td>White</td>
<td>0</td>
<td>0.50</td>
<td>0.20</td>
<td>0.29</td>
</tr>
<tr>
<td>w8</td>
<td>Male</td>
<td>Other</td>
<td>2008</td>
<td>English</td>
<td>Other</td>
<td>2</td>
<td>0.32</td>
<td>0.25</td>
<td>0.271</td>
</tr>
<tr>
<td>w9</td>
<td>Male</td>
<td>Other</td>
<td>1992</td>
<td>English</td>
<td>White</td>
<td>5</td>
<td>0.76</td>
<td>0.56</td>
<td>0.62</td>
</tr>
</tbody>
</table>
Figure 2: A partitioning of the example dataset
from splitting them based on Gender first, and then splitting only the Male partition based on Language to get the following partitioning of individuals: Male - English, Male - Indian, Male - Other, and Female. We quantify the unfairness of partitioning \( P \) as the average EMD between the pairs of partitions in \( P \). To identify the most unfair partitioning, one must exhaust all possible full disjoint partitionings of individuals based on their protected attributes. To do that, we generate a histogram for each partition as indicated in Figure 2 based on the function scores by creating equal bins over the range of \( f \) and counting the number of individuals whose function scores fall in each bin. The most unfair partitioning is then the one with maximum average pairwise EMD between its partitions’ histograms.
3.2 Algorithm
Our optimization problem for finding the most unfair partitioning is hard, since the number of possible partitionings \( P \) is exponential in the number of protected attribute values. For this reason, we propose an efficient heuristic algorithm that identifies a partitioning of individuals with respect to our optimization objective in reasonable time. Our algorithm (pseudocode given as Algorithm 1) is recursive. We describe it with one unfairness formulation (the worst-case one given in Equation 1) and with one aggregation function, namely the average. The algorithm decides whether or not to split a given partition by comparing the average EMD of that partition with its siblings to that of its children with its siblings. The intuition is that this assesses what would happen to unfairness, as measured by the average EMD, if the partition were replaced by its children. It only splits a partition if its average pairwise EMD with its siblings is less than the average pairwise EMD of its potential children with the partition's siblings (this test corresponds to the worst-case formulation of unfairness; other formulations require changing only this test). To invoke the algorithm for the first time, we first split the given set of individuals using the most unfair attribute; the algorithm is then called once for each resulting partition. After all recursive calls of the algorithm terminate, the output is returned as the final partitioning of the individuals.
Algorithm 1 QUANTIFY(\(current\): a partition, \(sib\): a set of partitions, \(f\): a scoring function, \(A\): a set of attributes)
1. if \( A = \emptyset \) then
2. Add current to output
3. else
4. currentAvg = avg(EMD(current, sib, f))
5. \( a = \text{mostUnfair}(current, f, A) \)
6. \( A = A - a \)
7. children = split(current, a)
8. childrenAvg = avg(EMD(children, sib, f))
9. if currentAvg \( \geq \) childrenAvg then
10. Add current to output
11. else
12. for each partition \( p \) \( \in \) children do
13. QUANTIFY\((p, children - \{p\}, f, A)\)
14. end for
15. end if
16. end if
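Algorithm 1 can be sketched in Python. Everything below is an illustrative assumption, not the paper's implementation: individuals are dicts carrying their protected attributes plus a precomputed score under \( f \), and `emd` is deliberately simplified to the distance between mean scores, standing in for the histogram-based EMD of Section 3.1.

```python
# Hypothetical sketch of Algorithm 1 (QUANTIFY). Individuals are dicts
# with protected attributes plus a precomputed score under f; `emd` is a
# simplified stand-in (distance between mean scores) for the
# histogram-based EMD described in the paper.

def emd(p1, p2):
    m1 = sum(w["score"] for w in p1) / len(p1)
    m2 = sum(w["score"] for w in p2) / len(p2)
    return abs(m1 - m2)

def avg_emd(part, siblings):
    return sum(emd(part, s) for s in siblings) / len(siblings)

def split(part, attr):
    groups = {}
    for w in part:
        groups.setdefault(w[attr], []).append(w)
    return list(groups.values())

def most_unfair(part, attrs):
    # Attribute whose split maximizes average pairwise EMD of its children.
    def score(a):
        ch = split(part, a)
        if len(ch) < 2:
            return 0.0
        pairs = [(i, j) for i in range(len(ch)) for j in range(i + 1, len(ch))]
        return sum(emd(ch[i], ch[j]) for i, j in pairs) / len(pairs)
    return max(attrs, key=score)

def quantify(current, sib, attrs, output):
    if not attrs or not sib:
        output.append(current)
        return
    current_avg = avg_emd(current, sib)
    a = most_unfair(current, attrs)
    children = split(current, a)
    children_avg = sum(avg_emd(c, sib) for c in children) / len(children)
    if current_avg >= children_avg or len(children) < 2:
        output.append(current)          # splitting would not increase unfairness
    else:
        for p in children:
            quantify(p, [c for c in children if c is not p],
                     [x for x in attrs if x != a], output)

workers = [
    {"gender": "F", "language": "en", "score": 0.20},
    {"gender": "F", "language": "en", "score": 0.25},
    {"gender": "M", "language": "en", "score": 0.90},
    {"gender": "M", "language": "hi", "score": 0.85},
]
attrs = ["gender", "language"]
a0 = most_unfair(workers, attrs)        # first split: most unfair attribute
parts = split(workers, a0)
output = []
for p in parts:
    quantify(p, [q for q in parts if q is not p],
             [x for x in attrs if x != a0], output)
print([len(p) for p in output])         # here: two partitions of two workers each
```

On this toy input gender separates scores far more than language, so the initial split is by gender and neither gender partition is split further.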
4 DEMONSTRATION SCENARIOS
FaiRank caters to different user roles. A screenshot of the interface is shown in Figure 3. A video of the demonstration is available at https://youtu.be/MckMJColcDk. We propose to demonstrate it with three scenarios, one per role. During the demonstration, we will rely on two types of datasets: simulated datasets mimicking crowdsourcing platforms and real data crawled from online freelancing marketplaces. In each case, we will explore various scoring functions representing different jobs as well as variants of the same job. We will also allow the exploration of transparency settings and their effect on fairness quantification, by making use of the ARX tool [1] to k-anonymize the datasets and by considering cases where the scoring function is available and cases where it is not.
[1] https://arx.deidentifier.org/
AUDITOR Scenario. This scenario provides auditors with the ability to monitor a marketplace that offers multiple jobs, each with its own scoring function. It gives auditors the big picture and lets them identify which jobs are most unfair to which individuals, based on their rankings and under different notions of fairness. For instance, an auditor may be looking to draft a "fairness" report on a freelancing marketplace such as Qapa or TaskRabbit. The auditor would want to quantify the fairness of each job offered on the platform and identify the demographic groups that are least/most favored by each job. Additionally, the auditor might consider cases where the marketplace does not provide full transparency, either in terms of the attributes of its users or in terms of the scoring functions used to rank those users; we show the effect of this on quantifying fairness, compared to the case where both the attributes and the scoring function are available.
JOB OWNER Scenario. This scenario emphasizes the ability to define different scoring functions, and examine their impact on individuals. This exploration will help owners understand the behavior of their scoring functions and will guide them to choose the best function for their job, i.e., the one that satisfies some desired fairness. For instance, for an online job that requires people to write code, the owner can select those for whom the scoring function induces the least unfairness.
END-USER Scenario. This scenario offers end-users the ability to immerse themselves and simulate different cases in which they are to be ranked. For instance, an end-user wishing to find a job online, can examine how unfair some job is with respect to different groups of people. Given a group to which the end-user belongs (e.g., Young professionals in Grenoble) and a job of interest (e.g., installing wood panels), the end-user can see how well the marketplace is treating that group and make an informed decision of whether to target that job or not.
ACKNOWLEDGMENTS
This work is supported by the American University of Beirut Research Board (URB).
Control flow graphs and loop optimizations
Agenda
• Building control flow graphs
• Low level loop optimizations
• Code motion
• Strength reduction
• Unrolling
• High level loop optimizations
• Loop fusion
• Loop interchange
• Loop tiling
Moving beyond basic blocks
• Up until now, we have focused on single basic blocks
• What do we do if we want to consider larger units of computation
• Whole procedures?
• Whole program?
• Idea: capture control flow of a program
• How control transfers between basic blocks due to:
• Conditionals
• Loops
Representation
• Use standard three-address code
• Jump targets are labeled
• Also label beginning/end of functions
• Want to keep track of targets of jump statements
• Any statement whose execution may immediately follow
execution of jump statement
• *Explicit* targets: targets mentioned in jump statement
• *Implicit* targets: statements that follow conditional jump
statements
• The statement that gets executed if the branch is not taken
A = 4
t1 = A * B
repeat {
    t2 = t1 / C
    if (t2 ≥ W) {
        M = t1 * k
        t3 = M + I
    }
    H = I
    M = t3 - H
} until (t3 ≥ 0)

The same computation in three-address code:

A = 4
t1 = A * B
L1: t2 = t1 / C
if t2 < W goto L2
M = t1 * k
t3 = M + I
L2: H = I
M = t3 - H
if t3 ≥ 0 goto L3
goto L1
L3: halt
Control flow graphs
• Divides statements into basic blocks
• Basic block: a maximal sequence of statements $l_0, l_1, l_2, ..., l_n$ such that if $l_j$ and $l_{j+1}$ are two adjacent statements in this sequence, then
• The execution of $l_j$ is always immediately followed by the execution of $l_{j+1}$
• The execution of $l_{j+1}$ is always immediately preceded by the execution of $l_j$
• Edges between basic blocks represent potential flow of control
CFG for running example
A = 4
t1 = A * B

L1: t2 = t1 / C
if t2 < W goto L2

M = t1 * k
t3 = M + I

L2: H = I
M = t3 - H
if t3 ≥ 0 goto L3

goto L1

L3: halt
How do we build this automatically?
Constructing a CFG
- To construct a CFG where each node is a basic block
- Identify *leaders*: first statement of a basic block
- In program order, construct a block by appending subsequent statements up to, but not including, the next leader
- Identifying leaders
- First statement in the program
- Explicit target of any conditional or unconditional branch
- Implicit target of any branch
Partitioning algorithm
- Input: set of statements, $\text{stat}(i) = i^{th}$ statement in input
- Output: set of leaders, set of basic blocks where $\text{block}(x)$ is the set of statements in the block with leader $x$
- Algorithm
```plaintext
leaders = {1}                       // leaders always includes the first statement
for i = 1 to |n|                    // |n| = number of statements
    if stat(i) is a branch then
        leaders = leaders ∪ all potential targets
end for

worklist = leaders
while worklist not empty do
    x = remove earliest statement in worklist
    block(x) = {x}
    for (i = x + 1; i ≤ |n| and i ∉ leaders; i++)
        block(x) = block(x) ∪ {i}
    end for
end while
```
Running example
1 A = 4
2 t1 = A * B
3 t2 = t1 / C
4 if t2 < W goto L2
5 M = t1 * k
6 t3 = M + I
7 H = I
8 M = t3 - H
9 if t3 ≥ 0 goto L3
10 goto L1
11 L3: halt
Running example
1 A = 4
2 t1 = A * B
3 L1: t2 = t1 / C
4 if t2 < W goto L2
5 M = t1 * k
6 t3 = M + I
7 L2: H = I
8 M = t3 - H
9 if t3 ≥ 0 goto L3
10 goto L1
11 L3: halt
Leaders = {1, 3, 5, 7, 10, 11}
Basic blocks = { {1, 2}, {3, 4}, {5, 6}, {7, 8, 9}, {10}, {11} }
Putting edges in CFG
• There is a directed edge from $B_1$ to $B_2$ if
• There is a branch from the last statement of $B_1$ to the first
statement (leader) of $B_2$
• $B_2$ immediately follows $B_1$ in program order and $B_1$ does not end
with an unconditional branch
• Input: $block$, a sequence of basic blocks
• Output: The CFG
```plaintext
for i = 1 to |block|
    x = last statement of block(i)
    if stat(x) is a branch then
        for each explicit target y of stat(x)
            create edge from block i to block y
        end for
    end if
    if stat(x) is not an unconditional branch then
        create edge from block i to block i+1
    end if
end for
```
A = 4
t1 = A * B

L1: t2 = t1 / C
if t2 < W goto L2

M = t1 * k
t3 = M + I

L2: H = I
M = t3 - H
if t3 ≥ 0 goto L3

goto L1

L3: halt
Discussion
• Sometimes we will also consider the statement-level CFG, where each node is a statement rather than a basic block.
• Either kind of graph is referred to as a CFG.
• In statement-level CFG, we often use a node to explicitly represent merging of control.
• Control merges when two different CFG nodes point to the same node.
• Note: if input language is structured, front-end can generate basic block directly.
• “GOTO considered harmful”
Loop optimization
- Low level optimization
- Moving code around in a single loop
- Examples: loop invariant code motion, strength reduction, loop unrolling
- High level optimization
- Restructuring loops, often affects multiple loops
- Examples: loop fusion, loop interchange, loop tiling
Low level loop optimizations
- Affect a single loop
- Usually performed at three-address code stage or later in compiler
- First problem: identifying loops
- Low level representation doesn’t have loop statements!
Identifying loops
- First, we must identify *dominators*
- Node $a$ dominates node $b$ if every possible execution path that gets to $b$ *must* pass through $a$
- Many different algorithms to calculate dominators – we will not cover how this is calculated
- A *back edge* is an edge from $b$ to $a$ when $a$ dominates $b$
- The target of a back edge is a *loop header*
Natural loops
- Will focus on natural loops – loops that arise in structured programs
- For a node \( n \) to be in a loop with header \( h \)
- \( n \) must be dominated by \( h \)
- There must be a path in the CFG from \( n \) to \( h \) through a back-edge to \( h \)
- What are the back edges in the example to the right? The loop headers? The natural loops?
Loop invariant code motion
- Idea: some expressions evaluated in a loop never change; they are *loop invariant*
- Can move loop invariant expressions outside the loop, store result in temporary and just use the temporary in each iteration
- Why is this useful?
Identifying loop invariant code
• To determine if a statement
\[ s: t = a \text{ op } b \]
is loop invariant, find all definitions of \( a \) and \( b \) that reach \( s \)
• \( s \) is loop invariant if both \( a \) and \( b \) satisfy one of the following
• it is constant
• all definitions that reach it are from outside the loop
• only one definition reaches it and that definition is also loop invariant
Moving loop invariant code
- Just because code is loop invariant doesn’t mean we can move it!
- We can move a loop invariant statement \( t = a \text{ op } b \) if
- The statement dominates all loop exits where \( t \) is live
- There is only one definition of \( t \) in the loop
- \( t \) is not live before the loop
- Move instruction to a \textit{preheader}, a new block put right before loop header
Strength reduction
- Like strength reduction peephole optimization
- Peephole: replace expensive instruction like a * 2 with a << 1
- Replace expensive instruction, multiply, with a cheap one, addition
- Applies to uses of an induction variable
- Opportunity: array indexing
```c
for (i = 0; i < 100; i++)
A[i] = 0;
```
```c
i = 0;
L2: if (i >= 100) goto L1
j = 4 * i + &A
*j = 0;
i = i + 1;
goto L2
L1:
```
Strength reduction
```c
for (i = 0; i < 100; i++)
A[i] = 0;
i = 0; k = &A;
L2: if (i >= 100) goto L1
j = k;
*j = 0;
i = i + 1; k = k + 4;
goto L2
L1:
```
Induction variables
• A *basic induction variable* is a variable $j$
• whose only definition within the loop is an assignment of the form $j = j \pm c$, where $c$ is loop invariant
• Intuition: the variable which determines number of iterations is usually an induction variable
• A *mutual induction variable* $i$ may be
• defined once within the loop, and its value is a linear function of some other induction variable $j$ such that
$i = c_1 \cdot j \pm c_2$ or $i = j / c_1 \pm c_2$
where $c_1, c_2$ are loop invariant
• A *family* of induction variables include a basic induction variable and any related mutual induction variables
Strength reduction algorithm
- Let $i$ be an induction variable in the family of the basic induction variable $j$, such that $i = c_1 \times j + c_2$
- Create a new variable $i'$
- Initialize in preheader
$$i' = c_1 \times j + c_2$$
- Track value of $j$. After $j = j + c_3$, perform
$$i' = i' + (c_1 \times c_3)$$
- Replace definition of $i$ with
$$i = i'$$
- Key: $c_1$, $c_2$, $c_3$ are all loop invariant (or constant), so computations like $(c_1 \times c_3)$ can be moved outside loop
Linear test replacement
- After strength reduction, the loop test may be the only use of the basic induction variable.
- Can now eliminate induction variable altogether.
- Algorithm
- If only use of an induction variable is the loop test and its increment, and if the test is always computed.
- Can replace the test with an equivalent one using one of the mutual induction variables.
```
i = 2
for (; i < k; i++)
j = 50*i
... = j
```
Strength reduction
```
i = 2; j' = 50 * i
for (; i < k; i++, j' += 50)
... = j'
```
Linear test replacement
```
i = 2; j' = 50 * i
for (; j' < 50*k; j' += 50)
... = j'
```
Loop unrolling
- Modifying induction variable in each iteration can be expensive
- Can instead *unroll* loops and perform multiple iterations for each increment of the induction variable
- What are the advantages and disadvantages?
```plaintext
for (i = 0; i < N; i++)
A[i] = ...
```
Unroll by factor of 4
```plaintext
for (i = 0; i < N; i += 4)
A[i] = ...
A[i+1] = ...
A[i+2] = ...
A[i+3] = ...
```
High level loop optimizations
- Many useful compiler optimizations require restructuring loops or sets of loops
- Combining two loops together (*loop fusion*)
- Switching the order of a nested loop (*loop interchange*)
- Completely changing the traversal order of a loop (*loop tiling*)
- These sorts of high level loop optimizations usually take place at the AST level (where loop structure is obvious)
Cache behavior
- Most loop transformations target cache performance
- Attempt to increase \textit{spatial} or \textit{temporal} locality
- Locality can be exploited when there is \textit{reuse} of data (for temporal locality) or recent access of nearby data (for spatial locality)
- Loops are a good opportunity for this: many loops iterate through matrices or arrays
- Consider matrix-vector multiply example
- Multiple traversals of vector: opportunity for spatial and temporal locality
- Regular access to array: opportunity for spatial locality
```c
for (i = 0; i < N; i++)
for (j = 0; j < N; j++)
y[i] += A[i][j] * x[j]
```
Loop fusion
- Combine two loops together into a single loop
- Why is this useful?
- Is this always legal?
Loop interchange
- Change the order of a nested loop
- This is not always legal – it changes the order that elements are accessed!
- Why is this useful?
- Consider matrix-vector multiply when A is stored in column-major order (i.e., each column is stored in contiguous memory)
```c
for (i = 0; i < N; i++)
for (j = 0; j < N; j++)
y[i] += A[i][j] * x[j]
```
Loop interchange
```plaintext
for (j = 0; j < N; j++)
for (i = 0; i < N; i++)
y[i] += A[i][j] * x[j]
```
Loop tiling
- Also called “loop blocking”
- One of the more complex loop transformations
- Goal: break loop up into smaller pieces to get spatial and temporal locality
- Create new inner loops so that data accessed in inner loops fit in cache
- Also changes iteration order, so may not be legal
```plaintext
for (i = 0; i < N; i++)
for (j = 0; j < N; j++)
y[i] += A[i][j] * x[j]
for (ii = 0; ii < N; ii += B)
for (jj = 0; jj < N; jj += B)
for (i = ii; i < ii+B; i++)
for (j = jj; j < jj+B; j++)
y[i] += A[i][j] * x[j]
```
![Diagram showing loop tiling with indices i, j, and indices for loop blocking]
In a real (Itanium) compiler: [performance chart — GFLOPS relative to -O2 (bigger is better), reaching 92% of peak performance]
Loop transformations
- Loop transformations can have dramatic effects on performance
- Doing this legally and automatically is very difficult!
- Researchers have developed techniques to determine legality of loop transformations and automatically transform the loop
- Techniques like unimodular transform framework and polyhedral framework
- These approaches will get covered in more detail in advanced compilers course
NOTICES
Disclaimers
The findings in this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents.
Citation of manufacturer’s or trade names does not constitute an official endorsement or approval of the use thereof.
Destroy this report when it is no longer needed. Do not return it to the originator.
Open Source Software Tools for Anomaly Detection Analysis
Robert F. Erbacher and Robinson Pino
Computational and Information Sciences Directorate, ARL
REPORT DOCUMENTATION PAGE
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing the burden, to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.
PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.
1. REPORT DATE (DD-MM-YYYY)
April 2014
2. REPORT TYPE
Final
3. DATES COVERED (From - To)
September 2013
4. TITLE AND SUBTITLE
Open Source Software Tools for Anomaly Detection Analysis
5a. CONTRACT NUMBER
5b. GRANT NUMBER
5c. PROGRAM ELEMENT NUMBER
5d. PROJECT NUMBER
5e. TASK NUMBER
5f. WORK UNIT NUMBER
6. AUTHOR(S)
Robert F. Erbacher and Robinson Pino
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)
U.S. Army Research Laboratory
ATTN: RDRL-CIN-D
2800 Powder Mill Road
Adelphi, MD 20783-1197
8. PERFORMING ORGANIZATION REPORT NUMBER
ARL-MR-0869
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)
10. SPONSOR/MONITOR’S ACRONYM(S)
11. SPONSOR/MONITOR’S REPORT NUMBER(S)
12. DISTRIBUTION/AVAILABILITY STATEMENT
Approved for public release; distribution unlimited.
13. SUPPLEMENTARY NOTES
POC email: renee.e.etoty.civ@mail.mil
14. ABSTRACT
The goal of this report is to perform an analysis of software tools that could be employed to perform basic research and development of Anomaly-Based Intrusion Detection Systems. The software tools reviewed include: Environment for Developing KDD-Applications Supported by Index-Structures (ELKI), RapidMiner, SHOGUN (toolbox), Waikato Environment for Knowledge Analysis (Weka) (machine learning), and Scikit-learn. From the analysis, it is recommended to employ the SHOGUN toolbox or Scikit-learn, as both tools are written in C++ and offer an interface for Python. The Python language is currently employed as a research tool within our in-house team of researchers.
15. SUBJECT TERMS
anomaly detection, survey, data mining
16. SECURITY CLASSIFICATION OF:
a. REPORT Unclassified
b. ABSTRACT Unclassified
c. THIS PAGE Unclassified
17. LIMITATION OF ABSTRACT
UU
18. NUMBER OF PAGES
22
19a. NAME OF RESPONSIBLE PERSON
Renee E. Etoty
19b. TELEPHONE NUMBER (Include area code)
(301) 394-1835
Contents
1. Introduction
2. Environment for Developing KDD-Applications Supported by Index Structures (ELKI)
3. RapidMiner
4. SHOGUN (toolbox)
5. Waikato Environment for Knowledge Analysis (Weka) (Machine Learning)
6. Scikit-Learn
7. Results and Discussion
8. Conclusions
9. References
Appendix – List of Algorithms Per Tool
List of Symbols, Abbreviations, and Acronyms
Distribution List
List of Figures
Figure 1. ELKI (a) user interface and (b) output results (4)
Figure 2. RapidMiner output results (7)
Figure 3. Screenshot of SHOGUN software tool results (8)
Figure 4. Screenshot of Weka software tool results (10)
Figure 5. Scikit-learn results performing binary classification using nonlinear Support Vector Classification (SVC) with Radial Basis Function (RBF) kernel. The target to predict is an Exclusive Or (XOR) of the inputs. The color map illustrates the decision function learned by the SVC (14).
List of Tables
Table 1. Side-by-side comparison of algorithms offered by SHOGUN and Scikit-learn
1. **Introduction**
Anomaly-based intrusion detection is the concept of detecting computer intrusions and misuse by monitoring network and computer activity and classifying it as either normal or anomalous (1). Classification is commonly based on heuristics or rules, rather than patterns or signatures, and will detect any type of misuse that falls outside normal system operation (2). In the case of signature-based detection, the system can only detect attacks for which a signature has previously been created. In order to determine what constitutes attack traffic, the system must be taught to recognize normal system activity. This can be accomplished using artificial intelligence techniques or neural networks (1). Another method is to define what normal usage of the system comprises using a strict mathematical model and to flag any deviation from this as an attack, known as strict anomaly detection (1). The goal of this report is to determine the suitability of current open source software packages in their usage and ability to enable our in-house team of researchers to perform basic research on anomaly-based intrusion detection algorithms.
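The strict variant above can be illustrated with a deliberately minimal sketch: model normal activity with a simple statistical profile and flag anything that deviates from it. The metric (requests per minute) and the three-sigma threshold are illustrative assumptions, not taken from any tool reviewed in this report.

```python
# Minimal sketch of "strict anomaly detection": learn a trivial statistical
# model of normal behaviour and flag any observation that deviates from it.
from statistics import mean, stdev

def fit_profile(baseline):
    """Learn a trivial model of normal activity from baseline measurements."""
    return mean(baseline), stdev(baseline)

def is_anomalous(value, profile, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mu, sigma = profile
    return abs(value - mu) > k * sigma

# Example: requests per minute observed during normal operation.
baseline = [100, 98, 103, 97, 101, 99, 102, 100, 98, 102]
profile = fit_profile(baseline)
print(is_anomalous(100, profile))  # normal load -> False
print(is_anomalous(500, profile))  # sudden spike -> True
```

Real systems replace this single-feature profile with multivariate models, but the principle (deviation from a learned notion of "normal" is an alert) is the same.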
2. **Environment for Developing KDD-Applications Supported by Index Structures (ELKI)**
The software tool ELKI (Environment for Developing KDD-Applications Supported by Index Structures) is a knowledge discovery in databases (KDD) and data mining software framework developed for use in research and teaching by the database systems research unit of Professor Hans-Peter Kriegel at the Ludwig Maximilian University of Munich, Germany (3). The ELKI software package is written in Java* and intended to provide a platform for the development and independent evaluation of data mining algorithms (4). The software framework is open source for scientific usage; see figure 1 for an overview.
*Java is a registered trademark of Oracle.
ELKI provides a suite of algorithms that includes k-means clustering, anomaly detection, spatial index structures, the apriori algorithm, dynamic time warping, and principal component analysis. However, an internet search for publications using this particular software platform yields mostly results authored by the software developers. In 2011, a book chapter published by Achtert et al. (5) discusses spatial outlier detection: data, algorithms, and visualization. The manuscript focuses on showcasing ELKI's ability to integrate a geographic/geospatial information system (GIS) and a data mining system (DMS) within a single framework supported by the tool. The authors demonstrated an integrated GIS-DMS system for performing advanced data mining tasks such as outlier detection on geospatial data, while also allowing interaction with an existing GIS (5).
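As an illustration of the kind of algorithm ELKI ships, the following is a minimal pure-Python sketch of Lloyd's k-means clustering. It is not ELKI's implementation or API; the toy points and initial centroids are invented for the example.

```python
# Illustrative sketch of Lloyd's k-means: alternate between assigning points
# to their nearest centroid and moving each centroid to its cluster mean.
def kmeans(points, centroids, iterations=10):
    """Cluster 2-D points around the given initial centroids."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            best = min(range(len(centroids)),
                       key=lambda i: (p[0] - centroids[i][0]) ** 2
                                     + (p[1] - centroids[i][1]) ** 2)
            clusters[best].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (8.5, 8), (9, 9)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
```

On this toy data the two dense groups separate after a single iteration; production implementations add smarter initialization and convergence tests.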
3. RapidMiner
The RapidMiner software, formerly Yet Another Learning Environment (YALE), is an environment for machine learning, data mining, text mining, predictive analytics, and business analytics. It is used for research, education, training, rapid prototyping, application development, and industrial applications (6). The software is distributed under the Affero General Public License (AGPL) open source license and has been hosted by SourceForge since 2004 (7).
RapidMiner provides data mining and machine learning procedures including data loading and transformation (extract, transform, load [ETL]), data preprocessing and visualization, modeling, evaluation, and deployment. Two examples of graphical results are shown in figure 2. Data mining processes can be made up of arbitrarily nestable operators, described in eXtensible Markup Language (XML) files and created in RapidMiner's graphical user interface (GUI). RapidMiner is written in the Java programming language and can be used for text mining, multimedia mining, feature engineering, data stream mining and tracking drifting concepts, development of ensemble methods, and distributed data mining (7). In addition, advanced features can be purchased as commercial versions of the base software at the rapid-i.com Web site: beyond the free community edition, three enterprise packages are available in small, standard, and developer editions, which offer an increasing number of options and capabilities. The RapidMiner software covers machine learning, data mining, text mining, predictive analytics, and business analytics.
Figure 2. RapidMiner output results (7).
4. **SHOGUN (toolbox)**
The focus of SHOGUN is on kernel machines such as support vector machines for regression and classification problems. SHOGUN also offers a full implementation of Hidden Markov models. The core of SHOGUN is written in C++ and offers interfaces for MATLAB, Octave, Python, R, Java, Lua, Ruby, and C#. SHOGUN has been under active development since 1999. Today there is a user community all over the world using SHOGUN as a base for research and
---
*MATLAB is a registered trademark of The MathWorks, Inc.
† Python is a registered trademark of Python Software Foundation.
education, and contributing to the core package. SHOGUN is free, open source software written in C++ that offers numerous algorithms and data structures for machine learning problems. It is licensed under the terms of the GNU General Public License version 3 or later (8). Figure 3 shows a screenshot of SHOGUN's code and output results. The software can be obtained from the official Web site (9).
Figure 3. Screenshot of SHOGUN software tool results (8).
Among the software tool packages reviewed, SHOGUN offers the most features for research and development. Some of SHOGUN’s algorithms and features included in the software tool are: support vector machines, dimensionality reduction, online learning, clustering, and implemented kernels for numeric data analysis algorithms. Over 20 publications about SHOGUN are featured on its Wikipedia page (8).
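The numeric kernels SHOGUN implements (linear, polynomial, gaussian, and sigmoid; see also the appendix) can be written down as plain functions. The sketch below is a hedged illustration of the underlying mathematics only; SHOGUN's own kernel classes and parameter names differ, and the parameter defaults here are arbitrary.

```python
# A kernel k(x, y) computes the inner product of x and y in an implicit
# feature space; kernel machines such as SVMs only ever need these values.
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, degree=3, c=1.0):
    return (linear(x, y) + c) ** degree

def gaussian(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def sigmoid(x, y, alpha=0.01, c=0.0):
    return math.tanh(alpha * linear(x, y) + c)

print(gaussian((0, 0), (0, 0)))  # identical points -> 1.0
```

Swapping the kernel changes the implicit feature space, which is why kernel machines can fit nonlinear decision boundaries without changing the learning algorithm itself.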
5. **Waikato Environment for Knowledge Analysis (Weka) (Machine Learning)**
Weka is a suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with GUIs for easy access to this functionality. Weka is free software available under the GNU General Public License (10). The Weka software is available for download at the official Web site (11), see figure 4.
Some of the features of this software tool include: data preprocessing, clustering, expectation maximization, classification, regression, visualization, and feature selection. The primary learning methods in Weka are “classifiers,” and they induce a rule set or decision tree that models the data. Weka also includes algorithms for learning association rules and clustering data. All implementations have a uniform command-line interface. A common evaluation module measures the relative performance of several learning algorithms over a given data set (12).
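Weka's notion of a common evaluation module (several learners trained through one interface and compared on the same held-out data) can be sketched as follows. The two toy classifiers and the data set are invented stand-ins, not Weka code.

```python
# Two trivial "classifiers" sharing one interface: a learner takes training
# pairs (feature, label) and returns a prediction function.
def majority_classifier(train):
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def threshold_stump(train):
    # One-level decision tree on the single numeric feature: pick the
    # threshold that classifies the most training points correctly.
    best = max(
        ((t, sum(1 for x, y in train if (x > t) == y)) for t, _ in train),
        key=lambda p: p[1],
    )
    threshold = best[0]
    return lambda x: x > threshold

def evaluate(learner, train, test):
    """Uniform evaluation: train the learner, report held-out accuracy."""
    model = learner(train)
    return sum(1 for x, y in test if model(x) == y) / len(test)

train = [(1.0, False), (2.0, False), (3.0, True), (4.0, True)]
test = [(1.5, False), (3.5, True)]
for learner in (majority_classifier, threshold_stump):
    print(learner.__name__, evaluate(learner, train, test))
```

Because both learners expose the same interface, the evaluation loop never changes, which is the point of a uniform evaluation module.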
6. **Scikit-Learn**
Scikit-learn (formerly scikits.learn) is an open source machine learning library for the Python programming language (13). It features various classification, regression, and clustering algorithms, including support vector machines, logistic regression, naive Bayes, k-means, and DBSCAN, and is designed to interoperate with NumPy and SciPy (13). The scikit-learn license is open source and commercially usable, and the software can be downloaded from the official Web site (14).
Figure 5. Scikit-learn results performing binary classification using nonlinear Support Vector Classification (SVC) with Radial Basis Function (RBF) kernel. The target to predict is an Exclusive Or (XOR) of the inputs. The color map illustrates the decision function learned by the SVC (14).
Scikit-learn is a Python module integrating a wide range of machine learning algorithms for supervised and unsupervised problems. The software package focuses on bringing machine learning to nonspecialists using a general-purpose high-level language; it has minimal dependencies, and is distributed under the simplified Berkeley Software Distribution (BSD) license, for use in both academic and commercial settings (15).
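The XOR task of figure 5 can also be solved by a small kernel method in pure Python. The sketch below uses a kernel perceptron with an RBF kernel as a hedged stand-in for the SVC shown in the figure; it does not use scikit-learn's API.

```python
# Kernel perceptron with an RBF kernel, learning XOR. The dual weights
# (alphas) count how often each training point was misclassified.
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(data, epochs=20):
    alphas = [0.0] * len(data)
    for _ in range(epochs):
        for i, (x, y) in enumerate(data):
            pred = sum(a * yj * rbf(xj, x)
                       for a, (xj, yj) in zip(alphas, data))
            if y * pred <= 0:      # misclassified: strengthen this point
                alphas[i] += 1.0
    return alphas

def predict(alphas, data, x):
    score = sum(a * yj * rbf(xj, x) for a, (xj, yj) in zip(alphas, data))
    return 1 if score > 0 else -1

# XOR of the two inputs, encoded as labels +1 / -1.
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
alphas = train_kernel_perceptron(data)
print([predict(alphas, data, x) for x, _ in data])  # [-1, 1, 1, -1]
```

All four XOR points are classified correctly because the RBF kernel makes the classes linearly separable in its implicit feature space, which is the same effect the figure's color map visualizes for the SVC.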
7. Results and Discussion
We have reviewed several open source software tools for performing research and development on anomaly detection for network security. After reviewing flexibility and popularity of usage, we believe that in order to proceed with our evaluation we should select two packages for additional in-house testing and evaluation. Table 1 compares the two anomaly detection tools that we feel offer the most flexibility and the largest number of anomaly detection algorithms for our in-house research purposes. From the table, we can see that the two packages share most of the basic algorithms that we can use in-house for performing basic research in anomaly detection. Therefore, we have submitted a formal request within our branch to install SHOGUN and Scikit-learn on the U.S. Army Research Laboratory's (ARL) computers and we are awaiting feedback.
Table 1. Side-by-side comparison of algorithms offered by SHOGUN and Scikit-learn.
<table>
<thead>
<tr>
<th>SHOGUN</th>
<th>Scikit-learn</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Support vector machines<br>
Dimensionality reduction: PCA, Kernel PCA, Locally Linear Embedding, Hessian Locally Linear Embedding, Local Tangent Space Alignment, Linear Local Tangent Space Alignment, Multidimensional Scaling, Isomap, Diffusion Maps, Laplacian Eigenmaps<br>
Online learning algorithms such as SGD-QN, Vowpal Wabbit<br>
Clustering: k-means, GMM<br>
Kernel Ridge Regression<br>
Support Vector Regression<br>
Hidden Markov Models<br>
K-Nearest Neighbors<br>
Linear discriminant analysis<br>
Kernel Perceptrons<br>
Kernels for numeric data: linear, gaussian, polynomial, sigmoid<br>
Kernels for special data: Spectrum, Weighted Degree, Weighted Degree with Shifts
</td>
<td>
Supervised learning: Generalized Linear Models, Support Vector Machines, Stochastic Gradient Descent, Nearest Neighbors, Gaussian Processes, Partial Least Squares, Naive Bayes, Decision Trees, Ensemble methods, Multiclass and multilabel algorithms, Feature selection, Semi-supervised learning, Linear and Quadratic Discriminant Analysis<br>
Unsupervised learning: Gaussian mixture models, Manifold learning, Clustering, Decomposing signals in components (matrix factorization), Covariance estimation, Novelty and Outlier Detection, Hidden Markov Models
</td>
</tr>
</tbody>
</table>
8. Conclusions
In this report, we have performed a review of various software tools that can be leveraged in-house to perform basic research and development of anomaly-based intrusion detection algorithms. Out of the five software tools described, it is recommended to employ Scikit-learn or SHOGUN (toolbox), as both tools are written in C++ and offer an interface for Python. The Python language is commonly employed as a research tool within our in-house team of researchers.
9. References
Appendix – List of Algorithms Per Tool
ELKI
Cluster analysis:
- K-means clustering
- Expectation-maximization algorithm
- Single-linkage clustering
- DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
- OPTICS (Ordering Points To Identify the Clustering Structure), including the extensions OPTICS-OF, DeLiClu, HiSC, HiCO, and DiSH
- SUBCLU (Density-Connected Subspace Clustering for High-Dimensional Data)
Anomaly detection:
- LOF (Local Outlier Factor)
- OPTICS-OF
- DB-Outlier (Distance-Based Outliers)
- LOCI (Local Correlation Integral)
- LDOF (Local Distance-Based Outlier Factor)
- EM-Outlier
Spatial index structures:
- R-tree
- R*-tree
- M-tree
Evaluation:
- Receiver operating characteristic (ROC curve)
- Scatter plot
- Histogram
- Parallel coordinates
Other:
- Apriori algorithm
- Dynamic time warping
- Principal component analysis
RapidMiner
- Machine learning
This appendix appears in its original form, without editorial change.
- Data mining
- Text mining
- Predictive analytics
- Business analytics
SHOGUN
Support vector machines
Dimensionality reduction algorithms:
PCA, Kernel PCA, Locally Linear Embedding, Hessian Locally Linear Embedding, Local Tangent Space Alignment, Linear Local Tangent Space Alignment, Kernel Locally Linear Embedding, Kernel Local Tangent Space Alignment, Multidimensional Scaling, Isomap, Diffusion Maps, Laplacian Eigenmaps
Online learning algorithms:
such as SGD-QN, Vowpal Wabbit
Clustering algorithms:
k-means and GMM
Kernel Ridge Regression
Support Vector Regression
Hidden Markov Models
K-Nearest Neighbors
Linear discriminant analysis
Kernel Perceptrons
Implemented kernels for numeric data include:
linear, gaussian, polynomial, and sigmoid kernels
The supported kernels for special data include:
Spectrum
Weighted Degree
Weighted Degree with Shifts
Weka
Data mining:
Data preprocessing
Clustering
Expectation maximization
Classification
Regression
Visualization
Feature selection
Scikit
Supervised learning:
- Generalized Linear Models
- Support Vector Machines
- Stochastic Gradient Descent
- Nearest Neighbors
- Gaussian Processes
- Partial Least Squares
- Naive Bayes
- Decision Trees
- Ensemble methods
- Multiclass and multilabel algorithms
- Feature selection
- Semi-Supervised
- Linear and Quadratic Discriminant Analysis
Unsupervised learning:
- Gaussian mixture models
- Manifold learning
- Clustering
- Decomposing signals in components (matrix factorization problems)
- Covariance estimation
- Novelty and Outlier Detection
- Hidden Markov Models
### List of Symbols, Abbreviations, and Acronyms
<table>
<thead>
<tr>
<th>Symbol</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>AGPL</td>
<td>Affero General Public License</td>
</tr>
<tr>
<td>ARL</td>
<td>U.S. Army Research Laboratory</td>
</tr>
<tr>
<td>BSD</td>
<td>Berkeley Software Distribution</td>
</tr>
<tr>
<td>DMS</td>
<td>data mining system</td>
</tr>
<tr>
<td>ELKI</td>
<td>Environment for Developing KDD-Applications Supported by Index-Structures</td>
</tr>
<tr>
<td>ETL</td>
<td>extract, transform, load</td>
</tr>
<tr>
<td>GIS</td>
<td>geographic/geospatial information system</td>
</tr>
<tr>
<td>GUI</td>
<td>graphical user interface</td>
</tr>
<tr>
<td>KDD</td>
<td>knowledge discovery in databases</td>
</tr>
<tr>
<td>RBF</td>
<td>Radial Basis Function</td>
</tr>
<tr>
<td>SVC</td>
<td>Support Vector Classification</td>
</tr>
<tr>
<td>Weka</td>
<td>Waikato Environment for Knowledge Analysis</td>
</tr>
<tr>
<td>XML</td>
<td>eXtensible Markup Language</td>
</tr>
<tr>
<td>XOR</td>
<td>Exclusive Or</td>
</tr>
<tr>
<td>YALE</td>
<td>Yet Another Learning Environment</td>
</tr>
</tbody>
</table>
### Distribution List
<table>
<thead>
<tr>
<th>No. of Copies</th>
<th>Organization</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 (PDF)</td>
<td>DEFENSE TECHNICAL INFORMATION CTR, DTIC OCA</td>
</tr>
<tr>
<td>2 (PDF)</td>
<td>DIRECTOR, US ARMY RSRCH LAB, RDRL CIO LL, RDRL IMAL HRA RECORDS MGMT</td>
</tr>
<tr>
<td>1 (PDF)</td>
<td>GOVT PRINTG OFC, A MALHOTRA</td>
</tr>
<tr>
<td>2 (PDF)</td>
<td>DIR USARL, RDRL CIN D, R ERBACHER, R PINO</td>
</tr>
</tbody>
</table>
Towards a Collaborative Working Environment to Support Model-Based Systems Engineering
Matthias Merk¹, Gabriela Tullius² and Peter Hertkorn³
Abstract: This paper presents an approach to developing a collaborative working environment for engineering support in the field of model-based systems engineering. The need for such a system is motivated, and a concept for an adaptive CSCW environment is presented.
Keywords: CSCW, Model Based Engineering, Engineering Support Tools, Collaboration, Working Environments
1 Introduction
According to Eigner et al. a modern product design process in engineering requires the following skills from an engineer: first, the willingness to optimize the complete design process not only for product functionality but also to support all subsequent product lifecycle phases such as maintainability or recyclability (Design for X); second, the ability to collaborate in early design phases and to work interdisciplinarily during all phases of the product lifecycle; and third, the willingness to work with engineers who are not co-located (distributed teams) and to use new methods, processes, and software tools that support product development in all aspects of the engineering process [ERZ14].
This paper focuses on research in the field of software tools for systems engineering, especially tools that support computer-supported cooperative work (CSCW). According to Eigner et al. the usage of software tools in systems engineering means using software systems not only to generate the product itself, but also to generate documentation of the work result [ERZ14]. Over the last years the development of such tools has focused on supporting product management issues rather than on functionality that supports the design of the product [BR11].
According to Eigner et al. a decade ago the engineering community focused on CAD systems as software tools in the design process. Today the focus has shifted to Product Data Management (PDM) systems that allow an integration of product-related data throughout all steps of the product lifecycle. This change happened because engineering activities in design and development shifted from creative activities towards administrative, communicative, informative, and team-oriented decision making where knowledge and
¹ Reutlingen University, Alteburgstraße 150, 72762 Reutlingen, matthias.merk@reutlingen-university.de
² Reutlingen University, Alteburgstraße 150, 72762 Reutlingen, gabriela.tullius@reutlingen-university.de
³ Reutlingen University, Alteburgstraße 150, 72762 Reutlingen, peter.hertkorn@reutlingen-university.de
data (the digital product and process model) are the central elements [ES01]. According to Bordegoni et al. the digital representation of the product is described through different sequencing levels, the product being represented through a model of increasing complexity. To deal with this increasing complexity, engineers need to add knowledge to the model at each level to ensure that they are able to reach the design goals [BR11]. According to Eigner et al. the usage of models instead of documents is an advantage especially in interdisciplinary projects [ERZ14].
New trends in engineering like model-based systems engineering (MBSE) demand new software systems to meet the changed requirements in the engineering process. MBSE is defined as the formalized usage of modeling to support the definition of requirements, development, verification, and validation of a system, beginning with the concept and development phase and continuing through all subsequent phases of the product lifecycle [In07]. Albers et al. list the main advantages of MBSE: fewer inconsistencies, less redundancy, clear communication, and sustainable documentation [AZ13].
PDM systems focus only on the management and provisioning of product-related data, not on the creation of the artefacts. Software tools that support model-based engineering need to provide a user interface for generating the models, so PDM systems and MBSE support systems should work together: the PDM system delivers data in terms of knowledge, while MBSE support systems such as modeling tools are used to design the model. The PDM system can thus be seen as a database for the other tools.
Eigner et al. define software tools in a model-based engineering context as tools that use a formal language to describe the product in each phase of the product lifecycle. The modeling of the product is often supported by an easy to learn form of graphical input that is built upon digital models. The model is then provided to other product lifecycle phases through model transformations. The basis for a MBSE centred design process is formed by software and modeling tools that support the specification of requirements, functions and behavior of the product e.g. SysML, CAD and product life cycle management (PLM) systems [ERZ14].
According to Bordegoni et al. there should be one unique, consistent, and software-independent model that is used through all of the product phases. Such a model would improve both productivity in the engineering process and the resulting quality of the model itself [BR11]. Albers et al. show that the problem with such a model lies in the interdisciplinarity of engineering design processes. Term understanding, for example, can become a major communication problem. As an example, Albers et al. show the different understandings of the word "function" in interdisciplinary engineering teams, where it is variously described as an appearing phenomenon, an effect, a behavior, a description of what a system should do, or a piece of software code processing data [AZ13]. This problem of differing term understanding must be addressed to ensure effortless documentation, collaboration, and communication throughout the system and among its users.
Model-based virtual product development (MVPE) is an extension of MBSE. It extends the practices of MBSE with the aim of using as many virtual prototypes as possible [ERZ14]. Bordegoni et al. describe a paradigm shift in product development from a geometrical to a functional view [BR11]. According to Bordegoni et al. the shape of an object is not static. It is a “function of the environment, of time, of the history of the phenomena affecting the object” [BR11].
2 State of the Art
There are a number of computer-aided innovation (CAI) systems that are used to support the actual design of the product. These systems are often based on TRIZ, the theory of inventive problem solving developed by Genrich Altshuller [Al84]. The focus of TRIZ is the support of early phases in product development. Cugini et al. describe the problem of current CAI systems as their limited interoperability with other CAx tools [Cu09]. The PROSIT project tries to solve this problem by bridging CAI tools with PLM systems. According to Bordegoni et al. a modeling approach capable of representing a product at different detail levels is needed to further address this problem [BR11].
Web 2.0 technologies are used to extend CAD, PDM, or PLM tools with social components that allow collaboration by discussing and sharing CAD models between team members (Autodesk social share plugin4, GrabCAD Workbench5). According to Bordegoni et al. PLM systems claim to support any stage of product development but are still far away from systematic support in the early design phases [BR11]. The work of Ubik et al. highlights that low-latency remote access to 3D models can enable effective distributed collaboration over large distances [UT13]. Kim et al. propose a method for using the X3D visualisation standard to include part geometry, product structures, and manufacturing-related properties in a web browser, providing a CAD-independent distributed form of visualisation [Ki15]. This technology could be used to extend web-based collaboration platforms.
There are also a number of commercially available tools for the collaborative creation of different diagrams that could be used for model-based engineering methods. A small overview of three tools is presented. The focus of these early observations of commercially available tools is on the availability of data exchange and thus the possibility of integration into existing workflows. This listing is far from complete. Further research and analysis needs to be done, and a suitable method for classifying and evaluating the tools has to be found.
\textsuperscript{4} https://apps.autodesk.com/ACD/de/Detail/Index?id=837454399538248119
\textsuperscript{5} https://grabcad.com
Cacoo\textsuperscript{6} is a web-based tool for collaborative creation of diagrams including UML class diagrams, mind maps or flowcharts. It offers features like role and resource management and the ability to share and export the diagrams. Export is limited to PNG, SVG, PDF, PS or PPT. What is missing is a universal interchange format that would allow a tight integration in the model-based engineering domain.
Creately\textsuperscript{7} is another web-based tool like Cacoo. It is able to show changes in real time to all collaborating users and supports commenting and discussing on models as well as a revision history of the changes made to the model. Export is available in PDF and SVG.
VisualParadigm\textsuperscript{8} offers the ability to design diagrams like UML or SysML and uses a central repository, allowing stakeholders to comment directly on the created diagrams, which can be shared online. It offers collaborative modeling functionality where you can check out, commit and update the diagrams stored in the repository. It is possible to integrate custom plug-ins written in Java, so integration into an existing workflow is possible. The export of the diagrams in a universal interchange format (XML) is also supported.
3 Concept
In this part the concept of the system is described. The concept is divided into five parts: (1) a description of our user group, (2) a basic methodology for the development of domain-independent collaborative modelling tools, (3) the requirements of the system, (4) its basic system architecture, and (5) an example scenario that puts the parts in relation.
3.1 User-Group
Our system is designed for engineers using model-based engineering methods. We follow the approach of human-centred design for interactive systems [ISO10]; therefore it is essential to develop a good understanding of the typical users and their activities. To gather the requirements of the main user group, an initial persona [Co99] has been created, based on statistics provided, e.g., by the “Verein Deutscher Ingenieure” (VDI – German Engineering Society) and on interviews with engineers. We interviewed research associates with an engineering background and experience in model-based engineering to identify the requirements of our user group. In a semi-structured open expert interview, we asked the discussion partners about collaboration tools they use for work and private tasks. We also asked them to describe their typical engineering workflow and its differences to the model-based approach. Further questions were asked to find problems they encountered while working with model-based engineering software in general. The interviews were finished with an open discussion about future trends in engineering support and the needs and requirements the discussion partners expect to see in the future. However, we will need further interviews to gain a much deeper insight into the working practices of this user group.
\textsuperscript{6} https://cacoo.com
\textsuperscript{7} https://creately.com
\textsuperscript{8} https://www.visual-paradigm.com/features/collaborative-modeling
3.2 Methodology
3.2 Methodology
In [GBR12] a model-driven method for the development of domain-independent collaborative modeling tools is proposed. Gallardo et al. focus their work on domain-independent modeling tools, supporting the work of co-located designers working in shared workspaces using a whiteboard metaphor. The method describes four types of users: first, the user who develops the collaborative modeling tool; second, the domain expert with knowledge about the domain in which the tool will be used; third, a software engineer who may participate in phases where software development tools need to be manipulated; and fourth, the user who will use the developed tool to build models in collaborative design sessions.
The method of Gallardo et al. is based on the following three frameworks:
**A methodological framework:** a series of phases that must be followed by the non-expert user who wishes to develop a collaborative modeling tool. These phases are: the identification of the domain; the modeling of the domain and the workspaces; the production of the collaborative modeling tool, which includes the model transformations and the generation of the tool itself; and the usage of the generated tool.
**A conceptual framework:** made up of the models that are used in the meta-modeling process. These models are mainly the domain and workspace meta-models.
**A technological framework:** consists of a series of plug-ins for the Eclipse platform that have been modified and extended to generate collaborative applications.
For now, we will focus our conceptual work on the methodological framework, which will form the basis for all future work.
3.3 Requirements of the System
First, there are requirements which are specific to the user group of our system or are defined in literature written by the user group itself: Bordegoni et al. outline that “there is the necessity of developing new, very friendly, very interactive interfaces, which will attract new users and support faster design iterations” [BR11]. User experience and usability research will be a major part of the design and development of the system. We focus on creating a system that adapts to the user; we do not want the user to adapt to the system. Lemberg et al. outline that early artefacts in the first design phases need to have a preliminary character, so that making changes to these artefacts feels easier for the designers [LF09]. For our system this requirement means that we need to provide data structures that allow fast, easy and unrestricted editing, manipulation and augmentation of the project data and its visualisation. The data and its visualisation need to be divided into a representation layer and a data layer. Because of the adaptive design of the system, different representations and visualisations of the data can coexist in different versions. Eigner et al. describe the essential things a software support needs to do in order to improve the engineering process, which can be divided into two main parts: collaboration support and the provision of the right information at the right time [ES01]. Communication support forms the basis of our system. This basis is then extended through other modules and visualisations, allowing the representation of data suitable for the current task of a specific user group.
Second, there are requirements originating from general CSCW and innovation research: According to Herstatt et al., the stimulation of creative thinking is a prerequisite for the generation of breakthrough ideas [HK05]. According to Kyng, the most severe problems of CSCW systems originate from a lack of end-user involvement in the development of the CSCW system [Ky91]. To overcome this problem, we will use a user-centred design approach throughout the development of our system.
Third, there are the wishes and needs of our user group. One insight we gained from the interviews is that the interviewed engineers would like classical creativity techniques to be supported by a CSCW system. As an example, Method 635 by Bernd Rohrbach [Ro69] was named. There is a huge number of creativity techniques from different domains (see [HM12] for an overview) that have to be evaluated for whether they can be integrated into a CSCW system. Another insight is the need for adaptive user interfaces. Engineering projects are often interdisciplinary, and every domain uses its own working methods and tools. Therefore we need to develop a highly adaptive and customizable system to support the interdisciplinary field of engineering. With this high level of adaptivity, we are able to implement the ideas and needs of our user group. Because of the interdisciplinary nature of engineering projects, we will need to implement our system using modular approaches to support the different needs of the domains involved in an engineering project.
3.4 Basic System Architecture
Figure 1 gives an overview of the basic concept of our adaptive CSCW environment to support collaboration in engineering. The technological basis of this concept is a central model, describing the whole project the engineers are working on. The central model is shared by all domains involved in the engineering process. Based on this model we can identify the current project phase, the data and artefacts necessary for working in the current phase as well as information about the environment the engineers are working in.
This includes time, location and the devices which can be used by the engineer. Other project-related knowledge and data needed for the design process is available through an interface to a PLM system. We will have a look at engineering design grammars such as the design graph as a possible implementation of our model [AR03]. Using a graph database like Neo4j, we are able to store the model structure in a performant and persistent way. Using the transaction management of such a database, synchronization and merging problems can be solved. This is an ongoing research project with some open questions that need to be answered in future research. We will examine the methods used in software engineering as well as in database systems, and how we can adapt them to our system, in order to define a suitable architecture.
Fig. 1: Basic concept of our adaptive CSCW-Environment for engineering collaboration
To support the different views each project member can have regarding the different roles of the other members in the project, the system needs to be adaptive. There will be different forms of adaption: first, technical adaption, in that the user interface must change appropriately depending on the device the engineer uses; and second, the system needs to adapt its support capabilities to the needs arising from the current project phase or the demands of the engineers. We will have to investigate how the different views can affect each other in such a scenario. In the future we will support different CSCW use cases, like distributed collaborative design and virtual team rooms, that will be defined later on based on the requirements gathered from further interviews. For now, we will focus on the support of communication in early brainstorming phases of the product lifecycle, using a modular online collaboration platform as a basis for further work.
3.5 Example Scenario
The following example scenario will help to understand our concept and the problem space: A co-located team of engineers is working on a structural element for a car. That element is used in every car of the company, so the product engineering team in Germany decided to use a model-based engineering process, which allows them to model the part once and generate different variants of that part according to the specifications of a number of future cars. To do that, they need to model the part in their model-based design tool. To ensure the manufacturability of the part, a production planner at the production site in China is involved in the design process. Because it is a visible part of the body of the car, some Italian car exterior designers are also involved in the design process. In a monthly status meeting, the project management is informed about the current project state. Let us have a look at possible visualizations and interactions with the model for the roles of product engineer, exterior designer and project management.

Fig. 2: Illustration of the adaption of the user interface for different roles and environments
The methods and tools these different roles are used to working with are likely to differ. Product engineers may work with mind maps and hand-drawn technical sketches to communicate the technical or structural properties of the product, while the product designers are used to working with tools like mood boards to visualize the product styling. The managers are used to looking at different kinds of diagrams visualizing the project's key figures needed for managing the project. Not only the methods or tools may differ; the devices used to interact with the project data or the workspace also need to be appropriate to the user and his or her needs.
There is also a need for collaboration between the different roles in a project. The designers could come up with designs that suit their requirements but are impossible to produce, or are simply too expensive for the designated use case and thereby violate the requirements the managers are working with. So there has to be some kind of communication between these roles. With the concept of one model of truth, all information needed to make collaborative decisions is in one place, ready to be visualized for all of the roles, helping them to make a decision.
Consider, for example, a web-based conference tool supporting communication between the different roles: we could augment a presentation with visualizations customized for the different roles. Let us say that the designers initiate an online design review. They moderate the review in the online conference tool and provide 3D mockups of the different designs they came up with. This view is then shared with all of the participants. In addition to this view, the manager, for example, can see the price of the materials used for the design as well as the estimated quantity the factory would be able to produce on a daily basis. All the numbers are calculated in the background based on the design variant the designers have currently selected in the 3D model viewer.
4 Conclusion and Further Work
We introduced the idea of a collaborative working environment to support model-based systems engineering. The next steps in developing such a working environment are the following: We need to work on a catalog of support tools (for example, creativity techniques), including a mapping to the different phases in the engineering workflow. To do this we need deeper insight into the working practices of engineers. A model of the engineering process needs to be designed; this model will include project data allowing a reasoning engine to detect the current project state. A first prototype of the system supporting the communication of distributed teams will be developed based on modular software engineering methods. This prototype forms the basis of all following prototypes; therefore modular expandability is a major requirement in the development process.
References
Kanban - Introduction
Kanban is a Japanese word that literally means “visual card”. Kanban cards were originally used in Toyota to limit the amount of inventory tied up in “work in progress” on a manufacturing floor. Kanban not only reduces excess inventory waste, but also the time spent in producing it. In addition, all of the resources and time freed by the implementation of a Kanban system can be used for future expansions or new opportunities. The original author of Kanban was Taiichi Ohno.
What is Kanban?
The term Kanban came into use with the flavors of “visual card,” “signboard,” “billboard,” or “signaling system” to indicate a workflow that limits Work In Progress (WIP). Kanban has been used in Lean Production for over half a century.
The core concept of Kanban includes –
- **Visualize Workflow**
- Split the entire work into defined segments or states, visualized as named columns on a wall.
- Write each item on a card and put in a column to indicate where the item is in the workflow.
- **Limit WIP**
- Assign explicit limits to how many items can be in progress at each workflow segment / state. i.e., Work in Progress (WIP) is limited in each workflow state.
- **Measure the Lead Time**
Lead Time, also known as cycle time is the average time to complete one item. Measure the Lead Time and optimize the process to make the Lead Time as small and predictable as possible.
This concept of Kanban is a direct implementation of a Lean Pull Scheduling System. An item can move to the next segment / state only when it obtains a slot in there.
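The three core concepts above can be sketched in a few lines of code. The following is a minimal illustration, not part of any standard Kanban tool; the `KanbanBoard` class, the column names and the limits are all made up for the example:

```python
# Minimal sketch of the core Kanban concepts: named workflow columns,
# explicit WIP limits per column, and pull-based movement of cards.
# All names (KanbanBoard, column names, limits) are illustrative.

class KanbanBoard:
    def __init__(self, columns):
        # columns: list of (name, wip_limit); wip_limit=None means unlimited
        self.limits = dict(columns)
        self.columns = {name: [] for name, _ in columns}
        self.order = [name for name, _ in columns]

    def add(self, card, column=None):
        """Put a new card into a column (by default the first one)."""
        column = column or self.order[0]
        self._check_limit(column)
        self.columns[column].append(card)

    def pull(self, card, to_column):
        """A card moves to the next state only if a slot is free there."""
        self._check_limit(to_column)
        for cards in self.columns.values():
            if card in cards:
                cards.remove(card)
                break
        else:
            raise ValueError(f"unknown card: {card}")
        self.columns[to_column].append(card)

    def _check_limit(self, column):
        limit = self.limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit reached in '{column}'")


board = KanbanBoard([("To Do", None), ("Doing", 2), ("Done", None)])
for task in ["design", "code", "test", "deploy"]:
    board.add(task)
board.pull("design", "Doing")
board.pull("code", "Doing")
# board.pull("test", "Doing")  # would raise: WIP limit of 2 reached
board.pull("design", "Done")
board.pull("test", "Doing")    # a slot was freed, so this pull succeeds
print(board.columns)
```

Note that the WIP check happens on the destination column before the card moves: the downstream state refuses work it has no capacity for, which is exactly the pull-scheduling behavior described above.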
Kanban - Lean Practices
The implementation of Kanban, as well as other Lean Manufacturing Methods, such as Kaizen, can have significant benefits for almost any type of work. Kanban is more effective because it visually indicates when the production should start and stop. It is faster, more efficient, and saves significant money over most other production models. It is also far more directly responsive to customer demand.
Kanban - Benefits
Kanban has the following commonly observed benefits –
- Bottlenecks become clearly visible in real-time. This leads people to collaborate to optimize the whole value chain rather than just their part.
- Useful for situations where operations and support teams have a high rate of uncertainty and variability.
- Tends to spread throughout the organization naturally, including sales and management. This increases visibility of everything that is going on at the company.
- Reduces inventory in the range of 25%-75%, thereby reducing company costs.
Since all segments/states in the workflow are visually organized, the items required to support the tasks in the workflow are continually provided, reducing wait times and ensuring speed.
Overproduction of inventory is avoided, thereby saving resources and time as well. This is termed as eliminating waste.
Alignment with Agile
In agile, if values are combined with Kanban characteristics, the outcome would be Agile Kanban. This practice is gaining popularity in Software Development wherein the Agile iteration approach and Kanban value stream focus are combined.
Kanban - Characteristics
In this chapter, we will learn the characteristics of Kanban.
Flexibility in Planning
Kanban provides improvements in the workflow. With a visual representation of the workflow, the time spent moving from one task to another is reduced. This is accomplished through the creation of clearly marked flow lanes, Kanban cards, and clearly marked columns indicating where each item is in the workflow. If a task needs a longer duration, it is allowed to execute without hindrance, while, at the same time, the tasks that are completed flow on to the next state.
This allows –
- Sufficient duration for longer tasks that cannot be broken down logically.
- Preservation of value of such longer tasks.
- Effort required by each role to be expended.
- Continuous flow of the tasks that are completed without wait time.
Hence, planning is flexible and not time-boxed.
Limits Work-In-Progress (WIP)
Explicit limits are assigned to number of items that can be in progress at each workflow state, indicated by a column.
This allows –
- Reducing wait time.
- Avoiding stress on resources at a workflow state.
- Identifying bottlenecks that cause an item to stay in a workflow state longer than the anticipated time (usually the average cycle time) immediately.
- Resolving bottlenecks with the collaboration of the entire team.
- Decreasing dependencies in completing a task by splitting it into sub-tasks, so that each sub-task is tracked independently.
**Pull Approach**
When you have two teams and the first one is performing better than the second one, it is likely that it pushes more work than the other can actually handle. This often creates friction between the teams. A solution to this is the Pull approach.
In Pull Approach, the next team pulls work only when it is ready for it. Pull Approach is implemented by adding a buffer with limited capacity between the two teams.
The benefits of Pull Approach are −
• Avoids piling-up of work.
• Reduces wait time.
• Facilitates a team to maintain constant pace and focus on quality.
• Provides resource balancing.
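The buffer of limited capacity between the two teams can be sketched with a bounded queue. This is an illustrative model only; the buffer size and task names are made up:

```python
import queue

# Sketch of the Pull Approach: a buffer of limited capacity sits between
# an upstream team and a downstream team. The upstream team can hand over
# work only while the buffer has a free slot; otherwise it must wait,
# which prevents work from piling up.

buffer = queue.Queue(maxsize=2)  # illustrative capacity

def upstream_handover(task):
    """Upstream tries to push finished work into the buffer."""
    try:
        buffer.put_nowait(task)
        return True           # handover accepted
    except queue.Full:
        return False          # downstream not ready: upstream must wait

def downstream_pull():
    """Downstream pulls a task only when it is ready for it."""
    try:
        return buffer.get_nowait()
    except queue.Empty:
        return None           # nothing to work on yet

print(upstream_handover("task-1"))  # True
print(upstream_handover("task-2"))  # True
print(upstream_handover("task-3"))  # False: buffer full, no piling up
print(downstream_pull())            # task-1 — a slot is freed
print(upstream_handover("task-3"))  # True
```

The rejected handover is the backpressure that keeps both teams at a constant, sustainable pace.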
**Minimize Cycle Time**
The cycle time for each task is measured and the process is optimized to reduce the cycle times.
• The bottlenecks are identified immediately and resolved collaboratively by the entire team.
• The correction loops are considered to reduce rework.
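Measuring the cycle time amounts to recording, per item, the date work started and the date it finished, and averaging the differences. The task names and dates below are made-up sample data:

```python
from datetime import date

# Sketch of cycle-time measurement: (start, end) per completed item,
# average of the differences = average cycle time. Sample data only.

tasks = {
    "login form": (date(2024, 3, 1), date(2024, 3, 4)),
    "search api": (date(2024, 3, 2), date(2024, 3, 8)),
    "export pdf": (date(2024, 3, 5), date(2024, 3, 7)),
}

cycle_times = {name: (end - start).days for name, (start, end) in tasks.items()}
average = sum(cycle_times.values()) / len(cycle_times)

print(cycle_times)  # {'login form': 3, 'search api': 6, 'export pdf': 2}
print(average)      # about 3.67 days
```

Tracking this average over time shows whether process changes actually make the lead time smaller and more predictable.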
**Continuous Delivery**
Benefits of continuous delivery are −
• Short release cycles result in continuous delivery of growing product at regular intervals.
• Continuous interactions with the customer.
o To understand what customer wants.
o Not to produce anything that the customer does not need.
o Feedback on delivered modules.
• Limited requirements in each release cycle.
o Developers are not overloaded with requests. This enables them to focus on the delivery.
o There is no partially completed work.
• Focus is on finishing work than on starting work.
o This enables focus on sustaining pace and quality of the product.
o Deliver before the customer changes mind.
• Optimize flow of Work from beginning to end.
o Helps in incremental process improvements.
Visual Metrics
Visually organized workflows (on Kanban Boards) facilitate –
• Scheduling as per WIP limits on a workflow state.
• Tracking status and progress continually.
• Assigning resources dynamically based on the role requirements.
Advantages of Visual Metrics
If, each day, you mark how many tasks are in each column, you will see a mountain-like chart. This chart shows past performance and allows predicting future results.
You can gather the following information from the chart –
- Measure cycle time for each feature (or story) by marking a start date when the feature is scheduled and an end date when the feature finishes.
- Evaluate the quality of the growing product from technical, functional and user perspectives at regular time-boxes.
- Evaluate the pace of development by looking at the number of development items completed and looking at the average cycle time per development item.
- Adjust the pace of development by calculating the ratio of developer days per completed development item. You can use this ratio to estimate the completion time for the yet-to-develop items and adjust the development plan as necessary.
- Evaluate and adjust the process by using a collaborative session to identify changes that can be made to improve the quality of the product, or to improve the pace of development.
- Identify and resolve un-validated decisions by looking at the cycle time of validated decisions and focusing on the correction loops that are usually the invisible backed-up queues.
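The data behind the mountain-like chart is simply a daily count of tasks per column. A sketch, using made-up board snapshots:

```python
# Sketch of collecting the data behind the mountain-like chart
# (a cumulative flow diagram): each day, count the tasks in every
# column of the board. The snapshots below are sample data only.

snapshots = [
    {"To Do": ["a", "b", "c"], "Doing": [],         "Done": []},
    {"To Do": ["b", "c"],      "Doing": ["a"],      "Done": []},
    {"To Do": ["c"],           "Doing": ["a", "b"], "Done": []},
    {"To Do": [],              "Doing": ["c"],      "Done": ["a", "b"]},
]

counts_per_day = [
    {column: len(cards) for column, cards in snapshot.items()}
    for snapshot in snapshots
]
for day, counts in enumerate(counts_per_day, start=1):
    print(f"day {day}: {counts}")
```

Plotted as stacked areas over the days, these counts form the chart: a widening "Doing" band points at a bottleneck, and the slope of the "Done" band is the pace of development.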
**Efficiency Through Focus**
By focusing on what a customer wants, the scope becomes clear. The focus is on delivering value to the customer.
Efficiency can be achieved in the following ways −
- A customer's expectations can be made realistic and focused with continuous interactions with the customer.
- Focus on the tasks is ensured with a limit on work-in-progress (WIP).
- The Pull approach enables resources to complete the tasks at hand before a new task is taken up.
- Optimizing lead-time (cycle time) results in faster delivery.
- Visualization of the workflow with Kanban board draws immediate attention to any bottlenecks that can be resolved immediately.
- Empowerment of the team makes the team accountable for the success.
**Kanban - Project Management**
Kanban is adapted to software development as a project management approach. Kanban in software development supports a continuous workflow, termed as Value Stream.
**Value Stream**
The Value Stream consists of all actions required to bring a project from creation to completion.
The actions can −
- Add Value to the project
- Add no Value, but unavoidable
- Add no Value, avoidable (termed as waste)
**Elimination of Waste**
Anything that does not add any value to the project is known as Waste. Kanban facilitates elimination of waste.
In software development, there are three types of waste −
- Waste in code development
- Waste in project management
- Waste in team potential
**Waste in Code Development**
Waste in code development is due to the following reasons −
• **Partially completed work** – The partially completed work can become outdated and unusable. It can be eliminated with iterative cycles and with modular code that completes within the iteration.
• **Defects** – In developing a code, correction and retesting requires time and resources. It can be eliminated with up-to-date test suite, completing testing within the iteration and continuous customer feedback.
**Waste in Project Management**
Waste in project management is due to the following reasons –
• **Extra Processes** – Unnecessary documentation that requires time and resources. It can be eliminated with –
o Pre-planning of what processes are relevant and necessary.
o Documentation review, that ensures relevant and necessary processes are followed.
• **Code Handoffs** – means passing the work from one person or team to another, after the first person’s work is complete. It may give rise to lack of knowledge. It can be eliminated by keeping the flowcharts and wireframes visible and clear.
• **Extra Functions** – These are features that are not required by the customer. Effort and time are wasted in developing the functions required to implement the features that the customer does not want. It can be eliminated with continuous interaction with customer and testers involving in the requirements gathering as they can better visualize the scenarios and expected behavior of the system.
**Waste in Team Potential**
Waste in team potential is due to the following reasons –
• **Task Switching** – It leads to the danger of multi-tasking, which is a waste. It can be eliminated with focus on a task with every release. Large process steps are segmented into tasks to –
o Improve visibility
o Reduce dependencies
o Enable easy flow of work
o Focus on the cycle-time of delivered work
o Give a way to detect and resolve bottlenecks
• **Waiting** – Time spent waiting for instructions or information. The team sits idle if decisions are not made by the team itself, or if the required information is not provided to the team (developers, testers, etc.). It can be eliminated by allowing the team members (developers, testers, etc.) to –
o Take decisions so that they do not have to wait for instructions
o Have access to information so that it can be used as and when required
Kanban - Agile
Agile Kanban is Agile Software Development with Kanban approach. In Agile Kanban, the Kanban board is used to visualize the workflow. The Kanban board is normally put up on a wall in the project room. The status and progress of the story development tasks is tracked visually on the Kanban board with flowing Kanban cards.
Kanban Board
Kanban board is used to depict the flow of tasks across the value stream. The Kanban board –
• Provides easy access to everyone involved in the project.
• Facilitates communication as and when necessary.
• Progress of the tasks are visually displayed.
• Bottlenecks are visible as soon as they occur.
Advantages of Kanban board
The major advantages of using a Kanban board are –
• Empowerment of Team – This means –
o Team is allowed to take decisions as and when required.
o Team collaboratively resolves the bottlenecks.
o Team has access to the relevant information.
o Team continually communicates with customer.
• Continuous Delivery – This means –
o Focus on work completion.
o Limited requirements at any point of time.
o Focus on delivering value to the customer.
o Emphasis on the whole project.
The tasks and stories are represented by Kanban cards. The current status of each task is known by displaying the cards in separate columns on the board. The columns are labeled as **To Do**, **Doing**, and **Done**. Each task moves from **To Do** to **Doing** and then to **Done**.
Kanban Board is updated on a daily basis as the team progresses through the development.
**WIP Limit**
The label in the Doing column also contains a number, which represents the maximum number of tasks that can be in that column at any point of time. i.e., the number associated with the **Doing** column is the WIP (Work-In-Progress) Limit.
**Pull Approach**
Pull approach is used as and when a task is completed in the Doing column. Another card is pulled from the To Do column.
**Self-directing**
In Agile Development, the team is responsible for planning, tracking, reporting and communicating in the project. Team is allowed to make decisions and is accountable for the completion of the development and product quality. This is aligned to the characteristic of empowerment of the team in Kanban.
**Continuous Flow**
In Agile development, there is no gate approach and the work flows across the different functions without wait time. This contributes to minimizing cycle time, a key characteristic of Kanban.
**Visual Metrics**
In Agile Kanban, the metrics are tracked visually using –
- Kanban Board
- Burndown Chart
**Uses of Kanban board**
Kanban Board is used to –
- Measure cycle times, which can be used to optimize the average cycle time.
- Track WIP limit to eliminate waste.
- Track resource utilization to eliminate waste.
Uses of Burndown chart
Burndown chart is used to capture –
- The current status of the tasks and stories.
- The rate of progress of completing the remaining tasks.
As Kanban Board is updated daily, it contains all the information that is required by the Burndown charts.
Kanban - Lean and Agile
In Agile Kanban, the user stories are broken into tasks and Kanban cards are used to track the tasks on the Kanban board. Agile Kanban has a concept of iteration that is not present in plain Kanban. Further, no explicit processes are considered.
Kanban in Value Stream
Kanban is defined to be executed in value stream with focus on delivery of value. Kanban in software development can be visualized as the features flowing across the value stream. All the Kanban characteristics (Refer Chapter - Characteristics of Kanban in this Tutorial) are met in the Kanban approach for software development.
Feature Kanban Board
Feature Kanban Board is used to track the Feature Driven Development with Kanban Approach. Each Feature is assigned to a particular release. The columns in the Kanban board represent releases. Hence, each column contains all the features assigned to the release represented by it.
Each feature is broken into stories. Each release is broken into iterations. The iteration is executed in an Agile Development approach. This can be treated as a sub-stream in the value stream, with the stories to be completed within that iteration assigned to it.
Agile Kanban in Sub-stream
Agile Kanban approach is followed within each sub-stream that is implemented as an iteration. Each story is broken into tasks in the iteration. Task Kanban board is used to track the status and progress of the story development tasks. The current status of each task is known by displaying the cards in separate columns on the board. The columns are labeled as To Do, Doing, and Done. Each task moves from To Do to Doing and then to Done.
Continuous Delivery
Continuous delivery to the customer is ensured with features tracked on feature Kanban board and stories representing features tracked on task Kanban board.
Delivery through a release is accomplished by −
- Continuous tracking
- Constant communication with the customer
- Adjusting development plan as required
- Focusing on delivery of value to the customer
Agile development as well as Kanban maintain team collaboration. This, in turn, helps in identifying and resolving bottlenecks immediately, as required by Kanban. This results in the accomplishment of all the needed tasks within the iteration, delivering a quality product that meets customer expectations.
Continuous Process Improvement
Kanban supports process improvements to enhance the delivery approach continuously.
Consider a requirement that is a change or addition to the product. In such a case, Kanban cards can be used to visualize the requirement passing through the processes of analysis, design, development, product integration and testing. This is different from the Waterfall approach in the sense that it does not require completion of one process for all the requirements to flow to the next process in the sequence.
Such an implementation of Kanban in product maintenance allows maintainability, reliability and integrity of the product. The required process improvements are gathered at regular intervals and implemented on a continuous basis.
Kanban - Scrum
In this chapter, we will learn the similarities and differences between Kanban and Scrum. These similarities and differences will help you in choosing the correct method for your project.
Kanban and Scrum - Similarities
Similarities between Kanban and Scrum are −
- Both are Agile.
- Both use pull scheduling.
- Both limit WIP, Kanban at task level and Scrum at sprint level.
- Both use transparency across the development.
- Both focus on delivering releasable software early.
- Both are based on self-organizing teams.
- Both require breaking the work into pieces.
- In both the methods, the release plan is continuously optimized based on empirical data (Scrum – Velocity, Kanban - Lead Time/Cycle Time).
**Kanban and Scrum - Differences**
The differences between Kanban and Scrum are as follows –
<table>
<thead>
<tr>
<th>S.No</th>
<th>Scrum</th>
<th>Kanban</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Scrum prescribes roles.</td>
<td>In Kanban, roles are optional.</td>
</tr>
<tr>
<td>2</td>
<td>Product backlog is to be prioritized.</td>
<td>Prioritization is optional.</td>
</tr>
<tr>
<td>3</td>
<td>Sprints are to be time-boxed. You can choose the length of the sprint, but once chosen, the same length is to be maintained for all the sprints.</td>
<td>Time-boxed iterations are optional.</td>
</tr>
<tr>
<td>4</td>
<td>Scrum team needs to commit to a particular amount of work for the sprint.</td>
<td>Commitment is optional.</td>
</tr>
<tr>
<td>5</td>
<td>Cross-functional teams are prescribed.</td>
<td>Cross-functional teams are optional. Specialist teams are allowed.</td>
</tr>
<tr>
<td>6</td>
<td>Uses velocity as default metric for planning and process improvement.</td>
<td>Uses lead time (cycle time) as default metric for planning and process improvement.</td>
</tr>
<tr>
<td>7</td>
<td>Items such as stories, tests must be broken down so that they can be completed within one sprint.</td>
<td>No particular item size is prescribed.</td>
</tr>
<tr>
<td>8</td>
<td>Sprint backlog shows what tasks are to be executed during the current sprint. These tasks are displayed on the Scrum board. Scope of the sprint is fixed; WIP is limited per unit of time (the WIP limit is the velocity).</td>
<td>Tasks are defined at workflow level; WIP is limited per workflow state.</td>
</tr>
<tr>
<td>9</td>
<td>Additions/changes cannot be done within a sprint.</td>
<td>Additions/changes can be done if the WIP limit is not crossed.</td>
</tr>
<tr>
<td>10</td>
<td>A new Scrum board is set up at the beginning of every sprint.</td>
<td>The Kanban board is persistent.</td>
</tr>
<tr>
<td>11</td>
<td>Daily meetings need to be conducted.</td>
<td>Daily meetings are optional.</td>
</tr>
<tr>
<td>12</td>
<td>Burn-down charts are prescribed.</td>
<td>No particular chart is prescribed.</td>
</tr>
</tbody>
</table>
### Kanban vs. Scrum
The following considerations can help you choose between Kanban and Scrum –
- Choose Kanban if you already have working processes that you want to improve without disturbing the whole system, whereas choose Scrum if you want to introduce a new process in the organization.
- You can use Kanban in the product development with Feature Driven Development to track the workflows in the value stream whereas you can use Scrum for the development in each iteration.
- You need to define the WIP limits in Kanban explicitly, whereas in Scrum the sprint length imposes WIP limits implicitly.
- Both Kanban and Scrum are adaptive but Scrum is more prescriptive than Kanban.
- Kanban imposes only two rules: visualize the workflow and limit WIP, whereas Scrum imposes more constraints, such as time-boxed sprints.
- Kanban leads to organizational process improvements, both in management and development. Kanban also supports maintenance activities. Scrum leads to high throughput in small development teams. It does not contribute to product development and maintenance workflows that are longer in duration, with unpredictability in the size of work units and changes. Scrum does not emphasize optimizing management activities.
- In Kanban, you can choose when to do planning, process improvement, and release. You can choose to do these activities on a regular basis or on-demand. Scrum iteration is one single time-boxed Sprint combining three different activities: planning, process improvement, and release (if required).
Thus, Kanban and Scrum are effective tools in their specific contexts. You can combine Kanban and Scrum to derive maximum benefits from both.
Adapting Kanban and Scrum Together
You can use Kanban and Scrum together by implementing those characteristics that will suit your needs. The constraints of both need to be considered before adapting them. For instance, Scrum requires Time-boxed Sprints and if you do away with those, you cannot say that you have implemented Scrum. Both give you a basic set of constraints to drive your own process improvement.
Kanban - Tools 1
Several project management tools that follow Kanban approach are available. In this chapter, you can have an overview of the following Kanban Tools –
- Kanban Tool
- Kanbanery
- LeanKit
- JIRA Software
- Earliz
- Targetprocess
You can get more information on these tools at the respective sites. A comparison of these tools and some more can be found at https://www.getapp.com/project-management-planning-software/.
Kanban Tool
Kanban tool is a visual project management tool. Use Kanban cards, colors, swim-lanes, tags and due dates to compose work on Kanban board. Analyze and constantly improve your process to increase business efficiency.
Following are the important features of the Kanban tool –
- Online Kanban Boards
- Insightful analytics
- Visual Project Management
- Online Documents
- Drag & Drop Tasks
- To-Do Lists
For more information, visit the site http://kanbantool.com/
**Kanbanery**
Kanbanery is a visual project management tool that helps you work more effectively, alone and together, by visualizing work.
Features of Kanbanery include −
- GitHub integration
- Create or copy task boards easily with templates
- iPhone and iPad apps
- API and several third party apps
- Advanced reporting
- Content-rich tasks
- Work with existing systems
- Real-time updates
For more information, visit the site, https://kanbanery.com/
**LeanKit**
LeanKit supports Kanban-based visual management. It can be used in distributed environment with access to the CEO of a company, to all employees, customers, and partners.
Features of LeanKit include −
- Visualize workflow using virtual Kanban boards.
- Plan and track work using the workflow and calendar views.
- Effective virtual and visual team collaboration.
- Stay connected on-the-go with a browser or mobile device.
- Align strategic initiatives with team-level execution using visual tiered board approach.
- Measure effectiveness using powerful reporting and analytics.
- Real-time updates and automated reports and notifications.
- Cloud-hosted and supports calendar and workflow views.
- Improve flow of work with Kanban capabilities such as policies, class of service, and WIP limits.
- Role-based security controls.
- Integrate with other systems such as Microsoft Project Server, TFS, VS Online, GitHub, JIRA, Buildmaster, Oracle Primavera, and so on. Zapier offers hundreds of pre-built integrations between LeanKit and web apps, such as Google, Salesforce and Zendesk.
For more information, visit the site [http://leankit.com/](http://leankit.com/)
### JIRA Software
JIRA Software is an Agile project management tool designed for teams of every shape and size.
Features of JIRA software include –
- **Plan** – Flexible planning using Scrum or Kanban or a mixed methodology.
- **Accurate Estimations** – Estimations that help the team become more accurate and efficient. JIRA supports user story points, hours, t-shirt sizes, or any other estimation technique.
- **Value-driven prioritization** – JIRA allows prioritization of user stories, issues, and bugs in the product backlog with a simple drag and drop. This facilitates ensuring that the user stories of high customer value to be on the top.
- **Track** – Team’s work in full context is maintained with complete visibility irrespective of the geographic locations.
- **Release** – Ship with confidence and sanity knowing that the information available is always updated.
- **Report** – Improve team performance with actions based on real-time, visual data that gives the team critical insight into their agile process.
- **Workflow** – Choose a workflow that matches the way the team works or that is an out-of-the-box workflow.
- **Add-ons** – Enhance JIRA with add-ons such as Portfolio for JIRA, Tempo Timesheets, Zephyr, and over 800 other add-ons that can help to get the most out of JIRA software.
- **Integrate workflow with other tools** – Upgrade your workflow with Confluence, Bitbucket, Bamboo, HipChat, and hundreds of other developer tools.
For more information, visit the site [https://www.atlassian.com/software/jira](https://www.atlassian.com/software/jira)
**Earliz**
Earliz is an online project management and monitoring software that supports smart project management and collaboration.
Features of Earliz include –
- **Gantt / Agile** – For each new project, choose between a Gantt (steps) or Agile (sprints) interface. You can change this project method any time during your project.
- **Task Management** – Structure your project by listing the different steps (or stories) of your project by dividing them into tasks.
- **Board** – Manage your project daily using the board. Based on the Kanban method, the board shows the status of all the tasks and their assignment to the project participants.
- **Synchronization** – The content of your project is automatically synced between all connected members.
- **Notifications** – Notifications alert you of project updates.
- **Project Progression** – Monitor daily the progress of your projects, the velocity of the team, and know at any time whether commitments are fulfilled.
- **Team Workload** – Visualize workloads of team members for each project and time period.
- **Time Spent** – Track and analyze participant timesheets for each project.
- **Custom Indicators** – Create indicators tailored to your needs and share them easily with stakeholders.
- **Access Right Management** – For each report, you can easily specify which members of your workspace are allowed to access it.
- **Newsfeed** – Follow all the news of your workspace, contacts, and projects.
- **Dashboard** – Get an immediate summary of what you planned for the day: meetings, tasks, and project deadlines.
- **Chats and Discussion Forums** – Debate topics linked to your projects or workspace in discussion forums and chats.
- **Document Sharing** – Store your documents in the Earliz workspace and make them available to team members.
- **Planning** – Create teams, assign them to projects and manage the planning of each participant.
Targetprocess
Targetprocess is a software tool to visualize and manage Agile projects with full and natural support for Scrum, Kanban or a customized Agile method. With enhanced visualization functionality, Targetprocess gives the visibility you need across the teams, projects, and the entire organization.
Features of Targetprocess include –
- iOS and Android apps
- High-level planning and tracking across the entire portfolio
- Burndown, CFD, custom graphical reports
- Release planning and Sprint planning
- REST API
- Backlog story map view
- Kanban, Scrum, SAFe
- Graphical reports and dashboards
- Custom views, cards, reports, dashboards
- QA, bug tracking, test case management
- Ideal for Agile testing and quality centered teams
- Visibility of progress across multiple projects and teams
- Visualization of project data
For more information, visit the site http://agile-project-management-tool.targetprocess.com/
Kanban - Tools 2
Several project management tools that follow Kanban approach are available. In this chapter, you can have an overview of the following Kanban Tools –
- Projectplace
- Wrike
- smartQ
- Accelo Projects
- Trello
**Projectplace**
Projectplace is a no-installation project management tool that provides a comprehensive solution allowing teams and organizations to plan, visualize, and keep track of their projects in real time.
Features of Projectplace include –
• Securely store, share, version manage, discuss, review files.
• Keep track of goals and scheduled work and set priorities.
• Manage all issues on a Kanban board.
• Share screen with up to 100 people regardless of location.
• Available in 8 languages.
• The Enterprise plan allows unlimited number of projects.
• Create plans, organize work and track personal tasks.
• Complete overview of how all your projects are performing.
• All project management tools in one place.
• Customized or predefined templates, e.g. Prince2.
• Visibility of commitments and resource availability.
• Simple user account provisioning.
• Utilize lessons learned with project templates.
• Compare actual time spent with original estimations.
• Execute on your plan together with your team.
• Project planning tools.
• Kanban boards.
• Task management.
• Issue management.
• Gantt tool.
• Document management.
• Desktop add-ons for document management.
• Project management app for Android and iOS.
• Project overview.
• Communication tools.
• Online meeting tool.
• Meeting management.
• Project management templates.
• Project portfolio management.
• Resource management tool.
• Time management.
• Report management.
• Single Sign-On (SSO).
• Industry leading security.
• Customize your collaboration experience using our APIs.
For more information, visit the site https://www.projectplace.com/
Wrike
Wrike combines project management with a real-time workspace for collaboration, discussion and document sharing.
Features of Wrike include –
- Advanced task management.
- Live dashboard project overview.
- File sharing and editing.
- Create subtasks.
- Real-time activity stream.
- Progress reports.
- Task-related discussions.
- Branded workspace.
- Email-to-task syncing.
- Branded e-mail notifications.
- Automate recurring tasks and projects.
- Third party integrations with Gmail, Google Drive, Dropbox, etc.
- Project timeline view (Gantt chart).
- Workload view and scheduling.
- Calendar integrations with Outlook, Google and iCalendar.
- Time tracking.
- Android and iPhone app.
- Custom reports.
- Add-ins for Google and Apple Mail.
- Security and privacy.
- Encryption.
- Access control.
- Data policy.
For more information, visit the site https://www.wrike.com/
**smartQ**
smartQ is an agile project management tool built around a visual task board (Kanban Board). It allows you to easily distribute work, track its progress and collaborate with the team online. smartQ can track tasks, issues, tickets, i.e. it is customizable to fit any workflow.
Features of smartQ include –
- Share notes and files.
- Tickets by e-mail or form.
- iPhone app.
- Ticket form designer.
- Threaded discussions and file attachments.
- Project Performance Report.
- External access for non-registered users.
- Track tasks, issues, tickets.
- Email notifications and private notes.
- Mark tickets with three-color stars.
- Customize your ticket fields.
- Customize your workflow.
- Unified notes and files area across all the tickets.
- Board view, ticket view and list view.
- Export tickets to CSV and Excel.
- Customize project roles.
- Team roles.
- Assign people to each role.
For more information, visit the site http://www.getsmartq.com/
Accelo Projects
Accelo Projects is a Cloud Project Management Software that facilitates planning and tracking, automation and change management.
Features of Accelo Projects include –
- Project planning with Gantt charts.
- Track milestones, tasks and budgets.
- Powerful Gmail and Outlook/Office365 integrations.
- Templates and reusable project plans.
- Track time and expenses.
- Forecasts, reports and dashboards.
- Record notes, schedule meetings and calls.
- Advanced approval for time.
- Allocate time and resources.
- Create invoices for all work planned or done.
- Invoicing and payments.
- Stripe and Authorize.Net integrations.
- Custom fields and categories.
- Custom project types and business processes.
- Automatic e-mail attachment storage.
- Smart and shared client database.
- Client signoffs and approvals.
- Client portal.
- Task boards.
- Calendar and task sync with Google Apps and Microsoft.
For more information, visit the site https://www.accelo.com/products/projects/
Trello
Trello is a project management software that utilizes the concept of boards to represent projects and within boards, cards to represent tasks. Trello supports Team Collaboration enabling members to discuss a project in real-time. It keeps everybody informed through task assignments, activity log, and e-mail notifications.
Features of Trello include –
- Free (zero-pricing) basic service.
- Quick overview on front and back of cards.
- Easy organization with tags, labels and categories.
- Drag and drop functionality.
- In-line editing.
- Checklists, with progress meter.
- Easy uploading of files and attachments.
- Data filtering.
- Archiving of card records (e.g. comments and changes).
- Deadline reminders.
- E-mail notifications.
- Activity log.
- Assign tasks.
- Voting feature.
- Information retrieval and back-up.
- SSL encryption of data.
- Texts and visuals fit any screen size.
- Search function.
- Mobile functionality to access boards on-the-go.
- Developer API.
Chapter 9: Virtual Memory
Zhang Jinghui
Office: Room 366, Computer Science Building
Email: jhzhang@seu.edu.cn
Homepage: http://cse.seu.edu.cn/PersonalPage/zjh/
Phone: 025-52091017
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Background
- **Virtual memory** – separation of user logical memory from physical memory.
- Only part of the program needs to be in memory for execution.
- Logical address space can therefore be much larger than physical address space.
- More programs can be run at the same time
- Less I/O be needed to load or swap
- Virtual memory can be implemented via:
- Demand paging
- Demand segmentation
Virtual Memory That is Larger Than Physical Memory
[Figure: a large virtual address space is backed by a smaller physical memory plus swap space on disk.]
Demand Paging
- Bring a page into memory only when it is needed.
- Less I/O needed
- Less memory needed
- Faster response
- More users
- Page is needed $\Rightarrow$ reference to it
- invalid reference $\Rightarrow$ abort
- not-in-memory $\Rightarrow$ bring to memory
Valid-Invalid Bit
- With each page table entry a valid–invalid bit is associated
(1 ⇒ in-memory, 0 ⇒ not-in-memory)
- Initially valid–invalid bit is set to 0 on all entries.
- During address translation, if valid–invalid bit in page table entry is 0 ⇒ page fault.
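The valid-invalid bit check can be sketched as a small simulation. This is a hypothetical Python model of address translation, not real OS code; the names (`PageTableEntry`, `translate`, `PageFault`) and the 4 KB page size are assumptions for illustration.

```python
# Hypothetical sketch of address translation with a valid-invalid bit.
PAGE_SIZE = 4096

class PageFault(Exception):
    """Raised when the valid bit is 0: trap to the OS."""

class PageTableEntry:
    def __init__(self, frame=None, valid=0):
        self.frame = frame
        self.valid = valid          # 1 => in memory, 0 => not in memory

def translate(page_table, logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    entry = page_table[page]
    if entry.valid == 0:
        raise PageFault(page)       # page fault: page is not resident
    return entry.frame * PAGE_SIZE + offset

# Page 0 is resident in frame 4; page 1 has not been brought in yet.
table = [PageTableEntry(frame=4, valid=1), PageTableEntry()]
print(translate(table, 100))        # resolves within frame 4
```

On a real fault, the OS would bring the page in, set the valid bit to 1, and restart the access — which is exactly demand paging.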
Page Table When Some Pages Are Not in Main Memory
[Figure: logical memory holds pages A–H; only pages A, C, and F are resident, mapped by the page table to frames 4, 6, and 9 with the valid bit set (v); all other page-table entries are marked invalid (i), and those pages reside on the backing store.]
Page Fault
- The first reference to a page that is not in memory traps to the OS ⇒ page fault
- The OS looks at the page table to decide:
- Invalid reference ⇒ abort.
- Just not in memory ⇒ service the fault:
- Get an empty frame.
- Swap the page into the frame.
- Reset the tables, valid bit = 1.
- Restart the instruction. Restarting needs special care for instructions such as block moves and auto-increment/decrement addressing modes.
Steps in Handling a Page Fault
1. The reference (e.g., "load M") traps to the operating system.
2. The OS finds the page on the backing store.
3. A free frame is located.
4. The missing page is swapped into the free frame.
5. The page table is reset (valid bit set to 1).
6. The instruction is restarted.
Performance of Demand Paging
- Extreme case – start a process with no pages in memory
- The OS sets the instruction pointer to the first instruction of the process which, being non-memory-resident, causes a page fault
- Every other page of the process likewise faults on its first access
- This is pure demand paging
Performance of Demand Paging
- Page Fault Rate $0 \leq p \leq 1.0$
- if $p = 0$ no page faults
- if $p = 1$, every reference is a fault
- Effective Access Time (EAT)
\[
EAT = (1 - p) \times \text{memory access time} + p \times (\text{page fault overhead} + [\text{swap page out}] + \text{swap page in} + \text{restart overhead})
\]
Performance of Demand Paging
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- EAT = \((1 - p) \times 200 + p \times 8{,}000{,}000\) nanoseconds
\[
= 200 + p \times 7{,}999{,}800
\]
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds.
- This is a slowdown by a factor of 40!!
Performance of Demand Paging
- If want performance degradation < 10 percent
\[ 220 > 200 + 7,999,800 \times p \]
\[ 20 > 7,999,800 \times p \]
\[ p < 0.0000025 \]
- < one page fault in every 400,000 memory accesses
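The numbers above can be checked with a few lines of Python, using the same figures as the slides (200 ns memory access, 8 ms page-fault service time):

```python
# Recomputing the slides' effective-access-time numbers.
MEM_ACCESS_NS = 200                  # memory access time
FAULT_SERVICE_NS = 8_000_000         # 8 ms page-fault service time, in ns

def eat(p):
    """Effective access time in nanoseconds for page-fault rate p."""
    return (1 - p) * MEM_ACCESS_NS + p * FAULT_SERVICE_NS

print(eat(1 / 1000))                 # one fault per 1,000 accesses: ~8199.8 ns (8.2 us)

# For less than 10 percent degradation we need eat(p) < 220 ns:
p_max = 20 / (FAULT_SERVICE_NS - MEM_ACCESS_NS)
print(p_max)                         # ~2.5e-6, i.e. < 1 fault per 400,000 accesses
```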
Process Creation
Virtual memory allows other benefits during process creation:
- Copy-on-Write
- Memory-Mapped Files (Later)
Copy-on-Write
- Copy-on-Write (COW) allows both parent and child processes to initially *share* the same pages in memory. (No demand paging)
- If either process modifies a shared page, only then is the page copied.
- COW allows more efficient process creation as only modified pages are copied.
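Copy-on-write can be illustrated with a toy simulation: frames carry a reference count, and a write to a shared frame first makes a private copy. This is only a sketch of the idea, not how a real MMU or kernel implements COW; all names are invented for the example.

```python
# Toy simulation of copy-on-write page sharing (illustrative, not real OS code).
class Frame:
    def __init__(self, data):
        self.data = data
        self.refcount = 1

class Process:
    def __init__(self, pages):
        self.pages = pages              # page number -> Frame

    def fork(self):
        """Child shares all frames with the parent; nothing is copied yet."""
        for frame in self.pages.values():
            frame.refcount += 1
        return Process(dict(self.pages))

    def write(self, page, data):
        frame = self.pages[page]
        if frame.refcount > 1:          # shared frame: copy on write
            frame.refcount -= 1
            frame = Frame(frame.data)
            self.pages[page] = frame
        frame.data = data

parent = Process({0: Frame("A"), 1: Frame("B"), 2: Frame("C")})
child = parent.fork()                   # pages A, B, C are shared, not copied
child.write(2, "C'")                    # only page C is copied
assert parent.pages[2].data == "C"      # parent still sees the original page C
assert parent.pages[0] is child.pages[0]  # unmodified pages remain shared
```

This mirrors the figures below: before the write both processes map the same frames; after the write only the modified page has been duplicated.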
Before Process 1 Modifies Page C
[Figure: process1 and process2 both map to the same physical pages A, B, and C; no page has been copied yet.]
After Process 1 Modifies Page C
process₁
physical memory
page A
page B
page C
Copy of page C
process₂
What happens if there is no free frame?
- Page replacement – find some page in memory, but not really in use, swap it out
- algorithm
- performance – want an algorithm which will result in minimum number of page faults
- Same page may be brought into memory several times
Page Replacement
- Prevent over-allocation of memory by modifying page-fault service routine to include page replacement.
- Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk.
- Page replacement completes separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory.
Need For Page Replacement
[Figure: two user processes, each with its own logical memory and page table, share physical memory; when user 1 executes "load M" and page M is not resident while all frames are in use, a resident page must be replaced.]
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
- If there is a free frame, use it.
- If there is no free frame, use a page replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame. Update the page and frame tables.
4. Restart the instruction.
Page Replacement
1. Swap out the victim page.
2. Change the victim's page-table entry to invalid.
3. Swap the desired page in.
4. Reset the page-table entry for the newly loaded page.
Page Replacement Algorithms
- Want lowest page-fault rate.
- Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string.
- In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Graph of Page Faults Versus The Number of Frames
The graph shows the relationship between the number of page faults and the number of frames. As the number of frames increases, the number of page faults decreases exponentially.
First-In-First-Out (FIFO) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- 3 frames (3 pages can be in memory at a time per process)
| 1 | 1 | 4 | 5 |
| 2 | 2 | 1 | 3 |
| 3 | 3 | 2 | 4 |

9 page faults
First-In-First-Out (FIFO) Algorithm
- 4 frames

| 1 | 1 | 5 | 4 |
| 2 | 2 | 1 | 5 |
| 3 | 3 | 2 |   |
| 4 | 4 | 3 |   |

10 page faults
- FIFO Replacement – Belady’s Anomaly
- more frames ⇒ more page faults
FIFO Illustrating Belady’s Anomaly
FIFO Page Replacement
reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
page frames (contents after each of the 15 page faults):

| 7 | 7 | 7 | 2 | 2 | 2 | 4 | 4 | 4 | 0 | 0 | 0 | 7 | 7 | 7 |
|   | 0 | 0 | 0 | 3 | 3 | 3 | 2 | 2 | 2 | 1 | 1 | 1 | 0 | 0 |
|   |   | 1 | 1 | 1 | 0 | 0 | 0 | 3 | 3 | 3 | 2 | 2 | 2 | 1 |

15 page faults
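The FIFO fault counts above, and Belady's anomaly, can be checked with a short simulation (a sketch of the policy, not how a kernel implements it):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))           # 15, as in the figure

belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3))         # 9 page faults
print(fifo_faults(belady, 4))         # 10: more frames, *more* faults
```

The last two calls reproduce Belady's anomaly: going from 3 to 4 frames raises the fault count from 9 to 10 on this reference string.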
Optimal Algorithm
- Replace page that will not be used for longest period of time.
- 4 frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
  - Pages 1–4 fill the four frames (4 faults). When 5 is referenced, replace 4 (the page not needed again for the longest time). When 4 is referenced again, replace any of 1, 2, 3 (none is needed again).
  - 6 page faults
- How do you know this?
- Used for measuring how well your algorithm performs.
Optimal Page Replacement
reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
page frames (contents after each of the 9 page faults):

| 7 | 7 | 7 | 2 | 2 | 2 | 2 | 2 | 7 |
|   | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 |
|   |   | 1 | 1 | 3 | 3 | 3 | 1 | 1 |

9 page faults
Least Recently Used (LRU) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Counter implementation (how?)
- Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
- When a page needs to be replaced, look at the counters to find the page with the oldest (smallest) time value.
LRU Page Replacement
reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
page frames (contents after each of the 12 page faults):

| 7 | 7 | 7 | 2 | 2 | 4 | 4 | 4 | 0 | 1 | 1 | 1 |
|   | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 3 | 0 | 0 |
|   |   | 1 | 1 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 7 |

12 page faults
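The LRU trace above can be verified with a small simulation (an illustrative sketch; real hardware only approximates LRU):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement with nframes frames."""
    frames = OrderedDict()    # least recently used page first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # page becomes most recent
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict least recently used
            frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))    # 12, as in the figure
```

The `OrderedDict` plays the role of the doubly linked stack described on the next slide: a hit moves the page to the most-recent end, and the victim is always at the least-recent end, so no search is needed.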
LRU Algorithm (Cont.)
- Stack implementation – keep a stack of page numbers in a doubly linked list: (How? Exercise 1)
- Page referenced:
- move it to the top
- requires 6 pointers to be changed
- No search for replacement
Use Of A Stack to Record The Most Recent Page References
reference string: 4 7 0 7 1 0 1 2 1 2 7 1 2
[Figure: at point (a), just before the second reference to 7, the stack holds (top to bottom): 2, 1, 0, 7, 4. After the reference, at point (b), 7 has been moved to the top: 7, 2, 1, 0, 4.]
LRU Approximation Algorithms
- Reference bit
- With each page associate a bit, initially = 0
- When page is referenced bit set to 1.
- Replace the one which is 0 (if one exists). We do not know the order, however.
- Additional-Reference-Bits Algorithm
- Keep an 8-bit byte for each page
- At regular intervals, shift the bits right by 1 and shift the reference bit into the high-order bit
- Interpret these 8-bit bytes as unsigned integers; the page with the lowest number is the LRU page
LRU Approximation Algorithms (Cont.)
- Second chance (linklist approach?)
- Need reference bit.
- Clock replacement.
- If the page to be replaced (in clock order) has reference bit = 1, then:
- set reference bit 0.
- leave page in memory.
- replace next page (in clock order), subject to same rules.
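The second-chance rules can be sketched as a victim-selection routine (hypothetical helper names; real kernels inspect hardware reference bits):

```python
def clock_replace(pages, refbits, hand):
    """Second-chance (clock) victim selection.
    pages: resident pages; refbits: their reference bits (mutated);
    hand: index of the next candidate. Returns (victim index, new hand)."""
    n = len(pages)
    while True:
        if refbits[hand] == 1:
            refbits[hand] = 0          # give this page a second chance
            hand = (hand + 1) % n      # advance to the next page
        else:
            return hand, (hand + 1) % n   # bit 0: replace this page

pages = ["A", "B", "C", "D"]
refbits = [1, 1, 0, 1]
victim, hand = clock_replace(pages, refbits, 0)
print(pages[victim])   # "C": A and B get second chances, C has bit 0
print(refbits)         # [0, 0, 0, 1]
```

If every page has reference bit 1, the hand sweeps the whole circle once, clearing all bits, and the page it started from is replaced, degenerating to FIFO.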
Second-Chance (clock) Page- Replacement Algorithm
[Figure: (a) a circular queue of pages with their reference bits; the hand points at the next victim. Pages encountered with reference bit 1 get a second chance: the bit is cleared to 0 and the hand advances. (b) the first page found with reference bit 0 becomes the victim.]
Counting Algorithms
- Keep a counter of the number of references that have been made to each page.
- LFU Algorithm: replaces page with smallest count.
- MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Allocation of Frames
- Each process needs a **minimum** number of pages. *(why? Restart instruction)*
- Example: IBM 370 – 6 pages to handle SS MOVE instruction:
- Instruction is 6 bytes, might span 2 pages.
- 2 pages to handle **from**.
- 2 pages to handle **to**.
- Two major allocation schemes.
- Fixed allocation
- Priority allocation
Fixed Allocation
- Equal allocation – e.g., if there are 100 frames and 5 processes, give each process 20 frames.
- Proportional allocation – Allocate according to the size of process.
\[
s_i = \text{size of process } p_i \\
S = \sum s_i \\
m = \text{total number of frames} \\
a_i = \text{allocation for } p_i = \frac{s_i}{S} \times m
\]
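The formula above can be computed directly (a sketch; the example sizes and frame count are illustrative, and a real allocator must also honor each process's minimum and distribute leftover frames):

```python
def proportional_allocation(sizes, m):
    """a_i = s_i / S * m, truncated to whole frames."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# Two processes of 10 and 127 pages sharing 62 free frames:
print(proportional_allocation([10, 127], 62))   # [4, 57]
```

Note that truncation leaves one frame unassigned here (4 + 57 = 61 of 62); a complete scheme would hand the remainder out by some tie-breaking rule.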
Priority Allocation
- Use a proportional allocation scheme using priorities rather than size.
- If process $P_i$ generates a page fault, either
- select for replacement one of its own frames, or
- select for replacement a frame from a process with a lower priority number.
Global vs. Local Allocation
- **Global** replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another. (benefit? Weakness?)
- **Local** replacement – each process selects from only its own set of allocated frames.
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Thrashing
- If a process does not have “enough” pages, the page-fault rate is very high. This leads to: (how it happens?)
- low CPU utilization.
- operating system thinks that it needs to increase the degree of multiprogramming.
- another process added to the system.
**Thrashing** ≡ a process is busy swapping pages in and out.
Why does paging work? (how to know the frame number to allocate to a process?)
Locality model
- Process migrates from one locality to another.
- Localities may overlap.
Why does thrashing occur?
\[ \Sigma \text{size of locality} > \text{total memory size} \]
Locality In A Memory-Reference Pattern
Working-Set Model
- $\Delta \equiv$ working-set window $\equiv$ a fixed number of page references
Example: 10,000 instructions
- $WSS_i$ (working set of Process $P_i$) = total number of pages referenced in the most recent $\Delta$ (varies in time)
- if $\Delta$ too small will not encompass entire locality.
- if $\Delta$ too large will encompass several localities.
- if $\Delta = \infty \implies$ will encompass entire program.
Working-Set Model (Cont.)
\[ D = \sum WSS_i \equiv \text{total demand frames} \]
- if \( D > m \) \( \Rightarrow \) Thrashing
- Policy if \( D > m \), then suspend one of the processes.
Working-set model
Page reference table
\[ \ldots 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 4 4 \ldots \]
\[ \Delta \]
\[ t_1 \]
\[ WS(t_1) = \{1, 2, 5, 6, 7\} \]
\[ \Delta \]
\[ t_2 \]
\[ WS(t_2) = \{3, 4\} \]
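The two working sets above follow directly from the definition: WS(t) is just the set of distinct pages in the window of the last Δ references. A minimal sketch (window positions chosen to match the figure):

```python
def working_set(refs, t, delta):
    """WS(t): pages referenced in the most recent delta references
    ending at time t (t counts references consumed so far)."""
    return set(refs[max(0, t - delta):t])

refs = [2,6,1,5,7,7,7,7,5,1,6,2,3,4,1,2,3,4,4,4,
        3,4,3,4,4,4,1,3,2,3,4,4,4,3,4,4,4]
print(working_set(refs, 10, 10))   # {1, 2, 5, 6, 7} = WS(t1)
print(working_set(refs, 26, 10))   # {3, 4} = WS(t2)
```

The first window spans a large locality (five pages); the second falls inside a tight loop over pages 3 and 4, so the working set shrinks accordingly.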
Keeping Track of the Working Set
- Approximate with interval timer + a reference bit
- Example: $\Delta = 10,000$
- Timer interrupts after every 5000 time units.
- Keep in memory 2 bits for each page.
- Whenever a timer interrupt occurs, copy the reference bits into the in-memory bits and reset all reference bits to 0.
- If one of the bits in memory = 1 $\Rightarrow$ page in working set.
Why is this not completely accurate?
- Improvement = 10 bits and interrupt every 1000 time units.
What happens when a page fault occurs?
- Establish “acceptable” page-fault rate.
- If actual rate too low, process loses frame.
- If actual rate too high, process gains frame.
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory.
- A file is initially read using demand paging. A page-sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses.
- Simplifies file access by treating file I/O through memory rather than `read()` `write()` system calls.
- Also allows several processes to map the same file allowing the pages in memory to be shared.
Memory Mapped Files
[Diagram: the virtual memories of process A and process B map the same portion of a disk file onto shared pages in physical memory.]
Memory-Mapped Shared Memory in Windows
[Diagram showing memory-mapped shared memory between two processes.]
Example: linux mmap
- Following link:
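The link above presumably pointed at an mmap example; here is a minimal sketch in Python, whose `mmap` module wraps the Unix `mmap(2)` system call. Reads and writes on the mapping are ordinary memory accesses, and with a shared mapping the changes land in the file:

```python
import mmap, os, tempfile

# Create a small file, then map it into the process's address space.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello virtual memory")
    with mmap.mmap(fd, 0) as mm:      # map the whole file, read/write
        assert mm[:5] == b"hello"     # a read: memory access, no read()
        mm[0:5] = b"HELLO"            # a write through the mapping
    with open(path, "rb") as f:       # the change is visible in the file
        print(f.read())               # b'HELLO virtual memory'
finally:
    os.close(fd)
    os.unlink(path)
```

A second process mapping the same file would see the same physical pages, which is how memory-mapped files double as shared memory.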
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Allocating Kernel Memory
- Treated differently from user memory
- Often allocated from a free-memory pool
- Kernel requests memory for structures of varying sizes, so fragmentation needs to be taken care of
- Some kernel memory needs to be contiguous
Buddy System
- Allocates memory from fixed-size segment consisting of physically-contiguous pages
- Memory allocated using **power-of-2 allocator**
- Satisfies requests in units sized as power of 2
- Request rounded up to next highest power of 2
- When smaller allocation needed than is available, current chunk split into two buddies of next-lower power of 2
- Continue until appropriate sized chunk available
Buddy System Allocator
[Figure: a 21 KB kernel memory allocation request is satisfied by repeatedly splitting larger segments in half down to a 32 KB chunk, illustrating the internal-fragmentation problem.]
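The rounding step behind the figure is easy to sketch (only the power-of-2 rounding is shown; a real buddy allocator also maintains per-order free lists and coalesces freed buddies):

```python
def buddy_chunk(request):
    """Size of the power-of-2 chunk a buddy allocator would hand out."""
    size = 1
    while size < request:
        size *= 2          # round up to the next power of 2
    return size

req = 21 * 1024                       # the 21 KB request from the figure
chunk = buddy_chunk(req)
print(chunk // 1024)                  # 32 (KB chunk actually allocated)
print((chunk - req) // 1024)          # 11 (KB of internal fragmentation)
```

A 21 KB request thus consumes a 32 KB chunk, wasting 11 KB, which is the fragmentation problem the slab allocator addresses next.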
Slab Allocator
- Alternate strategy
- **Slab** is one or more physically contiguous pages
- **Cache** consists of one or more slabs
- Single cache for each unique kernel data structure
- Each cache filled with **objects** – instantiations of the data structure
Slab Allocator (Cont)
- When cache created, filled with objects marked as **free**
- When structures stored, objects marked as **used**
- If slab is full of used objects, next object allocated from empty slab
- If no empty slabs, new slab allocated
- Benefits include no fragmentation, fast memory request satisfaction
Slab Allocation
- **Kernel objects**
- **Caches**
- **Slabs**
- 3 KB objects
- 7 KB objects
Physical contiguous pages
Slab Allocation
- For example process descriptor is of type struct task_struct
- Approx 1.7KB of memory
- New task -> allocate new struct from cache
- Will use existing free struct task_struct
- Slab can be in three possible states
- Full – all used
- Empty – all free
- Partial – mix of free and used
- Upon request, slab allocator
- Uses free struct in partial slab
- If none, takes one from empty slab
- If no empty slab, create new empty
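The slab states and allocation path can be modeled with a toy class (names are illustrative, not the Linux implementation): objects of one kernel type are preallocated, so allocation is just popping a free object.

```python
class Slab:
    """Toy slab: a fixed number of preallocated objects of one type."""
    def __init__(self, objtype, nobjs):
        self.free = [objtype() for _ in range(nobjs)]   # all objects free
        self.used = []

    def alloc(self):
        obj = self.free.pop()          # O(1): no search, no fragmentation
        self.used.append(obj)
        return obj

    def release(self, obj):
        self.used.remove(obj)
        self.free.append(obj)

    def state(self):
        if not self.used:
            return "empty"
        return "full" if not self.free else "partial"

class TaskStruct:                      # stand-in for struct task_struct
    pass

slab = Slab(TaskStruct, 2)
print(slab.state())                    # empty: all objects free
slab.alloc()
print(slab.state())                    # partial: mix of free and used
slab.alloc()
print(slab.state())                    # full: all objects used
```

Because every object in a slab has the same fixed size, there is no internal fragmentation, and allocation/release never search or split memory.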
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Other Issues -- Prepaging
Prepaging
- To reduce the large number of page faults that occur at process startup
- Prepage all or some of the pages a process will need, before they are referenced (working-set model)
- But if prepaged pages are unused, I/O and memory were wasted
Assume $s$ pages are prepaged and a fraction $\alpha$ of them is actually used
- Is the cost of the $s \cdot \alpha$ saved page faults greater or less than the cost of prepaging the $s \cdot (1 - \alpha)$ unnecessary pages?
- $\alpha$ near zero $\Rightarrow$ prepaging loses
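The cost comparison above can be written out directly (the per-fault and per-prepage costs are assumed, illustrative values, not measurements):

```python
def prepaging_pays(s, alpha, fault_cost, prepage_cost):
    """Prepaging s pages wins when the cost of the s*alpha page faults
    it avoids exceeds the cost of prepaging s*(1-alpha) unused pages."""
    saved = s * alpha * fault_cost
    wasted = s * (1 - alpha) * prepage_cost
    return saved > wasted

# With equal per-page costs, the break-even point is alpha = 0.5:
print(prepaging_pays(100, 0.9, 1.0, 1.0))   # True: most pages get used
print(prepaging_pays(100, 0.1, 1.0, 1.0))   # False: alpha near zero loses
```

In practice a cold page fault costs more than a sequential prepage read, which shifts the break-even point below 0.5 in prepaging's favor.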
Other Issues – Page Size
Page size selection must take into consideration:
- Fragmentation(?)
- Table size (?)
- I/O overhead(?)
- Locality(?)
Other Issues – TLB Reach
- TLB Reach - The amount of memory accessible from the TLB
- TLB Reach = (TLB Size) X (Page Size)
- Ideally, the working set of each process is stored in the TLB
- Otherwise there is a high degree of page faults
- Increase the Page Size
- Provide Multiple Page Sizes
- This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation
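The arithmetic behind TLB reach is a single multiplication; the entry count and page sizes below are illustrative (a 64-entry TLB with 4 KB base pages and 2 MB large pages are common configurations, assumed here):

```python
def tlb_reach(entries, page_size):
    """TLB reach = (TLB size) x (page size), in bytes."""
    return entries * page_size

print(tlb_reach(64, 4 * 1024) // 1024)              # 256 (KB reach)
print(tlb_reach(64, 2 * 1024 * 1024) // (1024**2))  # 128 (MB reach)
```

Switching the same 64 entries from 4 KB to 2 MB pages multiplies the reach by 512, which is why multiple page sizes help working sets fit in the TLB.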
Other Issues – Program Structure
- Program structure
- int A[][] = new int[1024][1024];
- Each row is stored in one page
- Program 1
```java
for (int j = 0; j < A.length; j++)      // column-major traversal:
    for (int i = 0; i < A.length; i++)  // touches a new page (row)
        A[i][j] = 0;                    // on every access
```
1024 x 1024 page faults
- Program 2
```java
for (int i = 0; i < A.length; i++)      // row-major traversal:
    for (int j = 0; j < A.length; j++)  // stays within one page
        A[i][j] = 0;                    // for an entire row
```
1024 page faults
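The two fault counts can be reproduced with a deliberately pessimistic one-resident-frame model (a simplification: each access faults exactly when it touches a different page than the previous access). A small array size is used so the sketch runs instantly:

```python
def faults_one_frame(page_trace):
    """Page faults when only one frame is resident: every change of
    page faults (worst-case model of the program-structure effect)."""
    faults, resident = 0, None
    for page in page_trace:
        if page != resident:
            faults += 1
            resident = page
    return faults

N = 8                                    # 8x8 array, one row per page
col_major = [i for j in range(N) for i in range(N)]   # j in outer loop
row_major = [i for i in range(N) for j in range(N)]   # i in outer loop
print(faults_one_frame(col_major))       # 64 = N*N faults
print(faults_one_frame(row_major))       # 8  = N faults
```

With N = 1024 the same model gives the slide's 1024 × 1024 versus 1024 faults: identical work, a factor-of-N difference in faults, purely from traversal order.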
Chapter 9: Virtual Memory
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples
Operating System Examples
- Windows XP
- Solaris
Windows XP
- Uses demand paging with clustering. Clustering brings in pages surrounding the faulting page.
- Processes are assigned working set minimum and working set maximum.
- Working set minimum is the minimum number of pages the process is guaranteed to have in memory.
Windows XP (Cont)
- A process may be assigned as many pages as its working set maximum.
- When the amount of free memory in the system falls below a threshold, **automatic working set trimming** is performed to restore the amount of free memory.
- Working set trimming removes pages from processes that have pages in excess of their working set minimum.
Solaris
- Maintains a list of free pages to assign faulting processes
- *Lotsfree* – threshold parameter (amount of free memory) to begin paging
- *Desfree* – threshold parameter to increase paging
- *Minfree* – threshold parameter to begin swapping
Solaris (Cont)
- Paging is performed by *pageout* process.
- Pageout scans pages using modified clock algorithm.
- *Scanrate* is the rate at which pages are scanned. This ranges from *slowscan* to *fastscan*.
- Pageout is called more frequently depending upon the amount of free memory available.
Solaris 2 Page Scanner
[Figure: scan rate versus amount of free memory – the scan rate increases linearly from *slowscan* (100 pages/s) at *lotsfree* up to *fastscan* (8192 pages/s) as free memory falls through *desfree* toward *minfree*.]
Institute of Electrical and Electronics Engineers Inc.
Terms of use:
The terms and conditions for the reuse of this version of the manuscript are specified in the publishing policy. For all terms of use and more information see the publisher's website.
Agent Abstractions for Engineering IoT Systems: a Case Study in Smart Healthcare
Eloisa Vargiu* and Franco Zambonelli†
* Eurecat Technology Center, eHealth Unit
Barcelona, Spain
eoloisa.vargiu@eurecat.org
† Dipartimento di Scienze e Metodi dell’Ingegneria
Università di Modena e Reggio Emilia, Italy
franco.zambonelli@unimore.it
Abstract—Despite the rapid progress in IoT research, a general, principled software engineering approach to the systematic development of IoT systems and applications is still missing. In this article, we show that agent-oriented concepts and abstractions can play a key role in the design and development of IoT systems and applications, and could represent the ground on which to shape a new IoT-oriented software engineering discipline. A case study in the area of smart healthcare is adopted as a running example to ground the discussion.
I. INTRODUCTION
Despite the great deal of worldwide research in the area of the Internet of Things (IoT) [9], the technologies to make it a systematic reality are far from being established. Early researchers in the IoT area mostly focused on communication issues and on enabling interoperability [3]. More recently, great effort has been devoted to promoting means to facilitate the integration of resources and services towards the provisioning of software-defined distributed services for the IoT; for instance, as in the “Web of Things” (WoT) vision [8], by exposing the resources of an IoT network as Web services, thus making it possible to develop distributed and coordinated IoT services using standard Web technologies.
WoT is definitely promising and will most likely represent a keystone technology in the future of the IoT. Indeed, along the WoT lines, a number of different approaches (in terms of, e.g., supporting middleware [16], [12] and programming approaches [4]) are being proposed to support the development of IoT systems and applications. Yet, a common unifying approach supporting their design and development, grounded on a shared set of abstractions, models, and methodologies, is still missing. Also, by relying on WoT concepts alone for the design and development of IoT systems, one can miss some key characteristics that will necessarily characterize many IoT services, such as goal-oriented and autonomous behaviors [10]. Overall, this limits the possibility of promoting a systematic and disciplined approach for the development of complex IoT systems, and thus limits the unfolding of the full potential of the IoT vision.
This article attempts to frame some key general characteristics related to the engineering of complex IoT systems and applications, by synthesizing the common features of existing proposals and application scenarios, and by bringing in the lessons of agent-based computing and agent-oriented software engineering. The common characteristics so identified are then used to derive some key software engineering abstractions around which the process of developing IoT systems and applications could revolve. Such abstractions – owing to the inherent presence in IoT systems and applications of autonomous and goal-oriented behaviors – exploit key concepts of agent-based computing and agent-oriented software engineering [17], and can be used to define a set of guidelines for IoT-oriented software engineering.
To exemplify the analysis, we refer to a specific case study, representative of a larger class of IoT scenarios, in the smart healthcare area: IoT-enriched houses supporting smart health monitoring and care. We assume houses are densely equipped with connected sensors and actuators: light and heat controllers, gas and smoke detectors, presence and motion sensors, door sensors (main doors, internal doors, fridge, kitchen furniture), electricity consumption sensors, shutter/curtain controllers, as well as sensorized everyday objects (e.g., cup, fork, cane). Moreover, medical devices (e.g., pulse oximeters, smart scales) may be provided to patients in order to automatically report health status information and measurements. In such a scenario, different actors (from medical doctors to patients and their family members) can contribute to setting up a variety of IoT services that support medical doctors in the monitoring and care of individuals, and that help individuals and their family members in their everyday self-managed healthcare activities.
II. BACKGROUND
The definition of general software engineering principles requires identifying the general features and issues that characterize most current approaches to IoT systems design and development.
A. Things
The “things” in the IoT vision may encompass a large number of physical objects, and may also include places and persons.
Physical objects and places can be made trackable and controllable by connecting them to low-cost wireless electronic devices. At the lower end of the spectrum, RFID tags or Bluetooth beacons, based on low-cost and short-range communication protocols, can be attached to any kind of object to enable tracking its position and status, and possibly to associate some digital information with it. More advanced devices integrating environmental or motion sensors (e.g., accelerometers) can detect the present and past activities associated with objects or with some place. In addition, one can make objects actuable – enabling remote control of their configuration/status via proper digitally controlled actuators – and possibly autonomous – delegating to them the direction of their own activities. In this perspective, autonomous robots and autonomous objects [1] are components that will increasingly populate the IoT universe.
To exemplify, in the smart healthcare scenario one can: attach RFID tags to everyday objects in houses, such as a glass, to detect the quantity of ingested water; integrate some kind of remote controller (e.g., Arduino-based) with the light in a specific room, so that it can be turned on/off via, e.g., a mobile phone; automatically open and close the shutters/curtains depending on the performed activities, the context (the hour, the day), and/or the user’s habits. Last but not least, robots for home assistance are gaining momentum (e.g., the Giraff plus [2]).
Concerning persons, other than being simply users of the technology, they can also be perceived as first-class entities of the overall IoT vision. Simply by virtue of carrying a mobile phone, they can be sensed in their activities and positions, and they can be asked to act in the environment or to supply sensing. In the smart healthcare scenario, besides continuously detecting the position and activities of people in order to be ready to manage any possible emergency situation (e.g., fall detection [13]), one can also think of involving them in self-monitoring and in supplying information to the overall health monitoring system [7].
B. Software Infrastructures
To make “things” usable and capable of serving purposes, there is a need for software infrastructures (that is, IoT middleware [14]) capable both of supporting the “gluing” of different things and of providing some means for stakeholders and users to access the IoT system and take advantage of its functionalities.
Concerning the “glue”, this involves a variety of technical issues.
There are interoperability issues, to enable a variety of very heterogeneous things to interact with each other, via a set of common name spaces, uniform communication protocols and data representation schemes; and semantic issues, because a common semantics for concepts must be defined to enable cooperation and integration of things. For both these issues, however, a large body of proposals (dating back to the early years of IoT research) exists. Thus, for our purposes in this article, we assume the existence of proper technical solutions.
Rather, the key open “gluing” issues of relevance for software engineering include discovery, group formation, and coordination. IoT system functionalities derive from the orchestrated exploitation of a variety of things, possibly involving a variety of users and stakeholders. In the smart healthcare scenario, it is desirable to automatically configure a given room (e.g., the bedroom) for a given context (e.g., time to go to sleep). This requires involving the lighting and shutter systems, and considering recommendations by caregivers and clinicians [7]. Thus, it implies discovering and establishing relations between things, and between things and humans, and coordinating their activities while also accounting for their social relations [2]. Clearly, for the above coordination mechanisms to work, context-awareness and self-adaptation are required. In fact, the inherent ephemerality, unreliability, and mobility of system components (e.g., things such as everyday objects at home may come and go, can be moved around, and can be placed in corners without wireless connection) make it impossible to anticipate which things will be available, and for how long, during their exploitation. This requires mechanisms for discovery, group formation, and coordination that are capable of dynamically self-adapting to the general context in which they act, or possibly even of self-organizing in a context-aware way [11], [18].
Concerning “access” to the functionalities and capabilities of individual things by users, the scene is currently dominated by the so-called “Web of Things” (WoT) vision [8]. The idea is to expose the services and functionalities of individual things as REST services, enabling the adoption of established Web technologies as far as the discovery of things and the provisioning of coordinated group services are concerned. Concerning middleware infrastructures, a variety of proposals to support the provisioning of IoT services and applications have appeared [16], [4], [14]. Beyond their specificities, most of these proposals rely on: some basic infrastructure to support the WoT approach (i.e., to expose things in terms of simple services); some means to support, according to a specific coordination model, the discovery of things (and of their associated services) and the coordinated activities of groups of things; and some solutions to make services and applications capable of self-adapting and self-organizing in a context-aware and unsupervised way.
C. Services and Applications
With the term “IoT system” we generally refer to the overall set of IoT devices and to the associated middleware infrastructure devoted to managing their networking and their context-aware interactions. Logically above an IoT system, specific software can be deployed to orchestrate the activities of the system so as to provide:
- A number of specific services, that is, means to enable stakeholders and users to access and exploit individual things and to direct/activate their sensing/actuating capabilities, but also coordinated services that access groups of things and coordinate their sensing/actuating capabilities. For instance, in a smart home instrumented for healthcare, besides services to access and control individual appliances, one can think of providing a coordinated service that, by accessing and directing the lighting system, the light sensors, and the window shutter system in a specific room, can modify the overall situation of that room depending on the specific needs of the person occupying it.
- A number of more general-purpose applications or suites, i.e., more comprehensive software systems intended both to regulate the overall functioning of an IoT system (or of some of its parts), so as to ensure specific overall behaviour of the system, and to provide a harmonized set of services to access the system and (possibly) its configuration. In the smart home scenario, one can think of applications to control the overall heating and lighting systems of a set of houses hosting patients with a specific health problem, giving medical doctors and/or carers access to services to change the configuration of the associated parameters.
Clearly, depending on the specific scenario, one can think of IoT systems in which services exist only within the context of some general application, but also of scenarios in which services can be deployed as stand-alone software.
III. SOFTWARE ENGINEERING ABSTRACTIONS AND THE ROLE OF AGENT-BASED COMPUTING
Based on the above overview of IoT issues, we now try to synthesize the central concepts and abstractions around which the development of IoT systems (spanning analysis, design, and implementation) should be centered, and discuss how these directly relate to concepts and abstractions developed in the context of agent-based computing [10], [17]. Figure 1 graphically frames such concepts in a logical stack.
A. Actors
The first activity in the analysis of a system-to-be concerns identifying the stakeholders and users of the system, a.k.a. the “actors”: those persons/organizations who will own, manage, and/or use the system and its functionalities, and from whom requirements should be elicited.
In the case of IoT systems, the distinction between IoT services and applications, and the presence of an IoT middleware to support them and to manage individual things, naturally leads to the identification of three main abstract classes of “actors”:
- **Global Managers**: These are the owners of an overall IoT system and infrastructure, or delegates empowered to exert control and establish policies over the configuration, structure, and overall functioning of its applications and services. In the smart healthcare scenario, the global manager corresponds to the system manager in charge of controlling the overall IoT system of the set of smart houses according to the directives of the medical doctors, e.g., for deciding heating levels or surveillance strategies.
- **Local Managers**: These are owners/delegates (whether on a permanent or temporary basis) of a limited portion of the IoT system, empowered to enforce local control and policies for that portion of the system. In the smart healthcare scenario, these could correspond to the house owners, empowered to control the IoT system in their houses and rooms, and to tune the local parameters and exploit its services according to their own specific needs.
- **Users**: These are persons or groups that have limited access to the overall configuration of the IoT applications and services, i.e., they cannot impose policies on them, but are nevertheless entitled to exploit their services. In the smart healthcare scenario, these include the patients with limited abilities, authorized to access specific services (e.g., regulating specific appliances) but not entitled to modify the overall configuration of their houses (which is in charge of the medical doctors and partly of their responsible family members).
The three identified classes of actors are of a very general nature, beyond the smart healthcare scenario. For example, in a scenario of energy management in a smart city, they could correspond, respectively, to: city managers; house/shop owners; private citizens and tourists. In the area of urban mobility, they could correspond, respectively, to: mobility managers; parking owners or car-sharing companies; private drivers.
B. Functionalities
Once the key actors are identified, the analysis preceding design and implementation cannot – for IoT systems and applications – be reduced simply to eliciting from them the functionalities (i.e., the specific services) that things or groups of things have to provide, but has to take a more comprehensive approach. In fact:
- Besides things provided with basic sensing/actuating functionalities, one should consider the presence of smarter things that can be activated to perform autonomously some long-term activities associated with their nature and with their role in the socio-physical environment in which they are situated. These can range from simple cleaning robots to more sophisticated autonomous personal assistants [1].
- IoT applications are not simply concerned with providing a suite of coordinated functionalities; they should also globally regulate the activities of the IoT system on a continuous basis, according to the policies established by its stakeholders and to their objectives.
As a consequence, besides analyzing the specific functionalities to deliver, one also has to identify the policies and goals to be associated with services and applications, i.e., the desirable “states of affairs” to strive for in the context of the socio-cyber-physical system where IoT applications and services operate.
In this perspective, the general classes of functionalities to be identified for the development of IoT applications and services include:
- **Policies** express desirable permanent configurations or states of functioning of an overall IoT system (global policies) or of portions of it (local policies), and have the aim of regulating the overall underlying IoT system. In the smart healthcare scenario, global policies can be defined, e.g., to specify the maximum sleeping hours or the maximum time for sedentary activities, and have this monitored by non-intrusive sensors in order to invite people to be more active or to go rest whenever needed. Policies are meant to be always active and actively enforced. Although, from the software engineering viewpoint, the focus is mostly on application-level policies, policies can also account for the proper configuration of the underlying hardware and network infrastructures. The definition of global and local policies is generally in charge of the global managers, although local managers can also be entitled to enforce temporary local policies on local portions of the system (provided they do not conflict with the ones imposed by the global managers).
- **Goals** express desirable situations or states of affairs that, in specific cases, can/should be achieved. The activation of a goal may rely on specific pre-conditions (i.e., the occurrence of specific events or the recognition of some specific configuration in the IoT system) or may be explicitly activated upon user action (e.g., the activation of a goal is invokable “as a service”). The typical post-condition (deactivating the pursuit of a goal) is the achievement of the goal itself. In the smart healthcare scenario, one example could be activating an evacuation procedure upon detection of fire by a smoke sensor (pre-condition), whose goal is to immediately send assistance to the home (e.g., an ambulance) and to ask the family to visit and support the patient (post-condition). To this end, the activation of the goal can trigger the activities of contacting caregivers and family members. As with policies, the definition of global and local goals is generally in charge of global, and sometimes local, managers, whereas users can sometimes be entitled to activate simple local goals (or goals associated with individual things) “as a service”.
- **Functions** define the sensing/computing/actuating capabilities of individual things or of groups of things, or the specific resources that are to be made available to managers and users in the context of specific IoT applications and services. Functions are typically made accessible in the form of services, and can sometimes involve coordinated access to the functions of a multitude of individual things. In the smart healthcare scenario, one can think of the individual functionalities of a door sensor in a fridge (e.g., to detect opening/closing), as well as of more complex functionalities that can be achieved by orchestrating things (e.g., checking the food in the fridge and updating the shopping list with needed items). Functions and the associated services are typically defined by global and possibly local managers, but are also exploited by the everyday users of the IoT system (e.g., the patient and her/his caregivers).
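The distinction between always-active policies and pre/post-condition-driven goals can be sketched in code. The following is our own hedged illustration (the scenario values and helper names are invented for the example, not prescribed by the paper):

```python
# Policies are continuously checked and enforced; goals activate on a
# pre-condition and deactivate when their post-condition (achievement) holds.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    check: Callable[[dict], bool]    # does the system state satisfy the policy?
    enforce: Callable[[dict], None]  # corrective action when violated

@dataclass
class Goal:
    name: str
    pre: Callable[[dict], bool]      # condition for autonomous activation
    post: Callable[[dict], bool]     # achievement deactivates the goal
    active: bool = False

    def step(self, state):
        if not self.active and self.pre(state):
            self.active = True
        if self.active and self.post(state):
            self.active = False

# Hypothetical example: cap sedentary hours (policy), fire response (goal).
state = {"sedentary_hours": 5, "smoke_detected": True, "assistance_sent": False}

max_sedentary = Policy(
    "max-sedentary",
    check=lambda s: s["sedentary_hours"] <= 4,
    enforce=lambda s: s.update(sedentary_hours=0),  # e.g. prompt activity
)
if not max_sedentary.check(state):
    max_sedentary.enforce(state)

evacuation = Goal("fire-response",
                  pre=lambda s: s["smoke_detected"],
                  post=lambda s: s["assistance_sent"])
evacuation.step(state)  # smoke detected -> goal becomes active
```

Note how the policy has no notion of completion, while the goal carries explicit activation state; this mirrors the "always active" versus "achieved and deactivated" distinction in the text.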
Clearly, the concepts of goals and policies are central in the research areas of agent systems and multiagent systems, and their realization will require components with autonomous and social behaviour, capable of working together towards the achievement of goals and the enforcement of policies.
C. Software Components and Their Coordination
Moving from analysis to the design of an actual system and of its components, one should consider that the “things” involved in the implementation of the identified functionalities can correspond to a variety of different objects and devices, as well as to places and humans, each relying on a plethora of different technologies and capabilities. Accordingly, from both the gluing-infrastructure and the software engineering viewpoints, it is necessary to define higher-level abstractions to practically and conceptually handle the design and development of applications and services, and to harmoniously exploit all the components of the IoT system.
Most of the proposals for programming models and middleware acknowledge this need by virtualizing individual things in some sort of software abstraction [8]. The WoT perspective abstracts things and their functionalities in terms of generic resources, to be accessed via RESTful calls, possibly associating external software HTTP “gateways” with individual things if they cannot directly support HTTP interfacing. Other approaches suggest adopting a more standard SOA or object-oriented approach. Surprisingly, only a few proposals consider associating autonomous software agents with individual things [15], despite the fact that goals to be pursued autonomously may be associated with things, a feature that service-oriented approaches can hardly accommodate.
In addition, as already stated, some “things” make no sense as individual entities as far as the provisioning of specific services and applications is concerned; they are to be considered part of a group, capable of providing their services as a coordinated whole. This applies both to cases in which a multitude of equivalent devices must be collectively exploited, abstracting from the presence of the individuals [4], and to cases in which the functionalities of the group complement each other and need to be orchestrated [15]. However, due to the dynamic and contextual nature of IoT scenarios, traditional service-oriented orchestration methods, although necessary, are not enough on their own.
With these considerations in mind, in an effort to synthesize from a variety of different proposals and to bring in agent-oriented concepts as needed, we suggest the unifying abstractions of avatars and coalitions (see Figure 2).
Avatars. Borrowing the term from [12] (to distinguish them from software agents while nevertheless borrowing several features from them), we define an avatar as the general abstraction for individual things and also for groups of things (and possibly other avatars) that contribute to define a unique functionality/service. Avatars abstract away from the specific physical/social/technological characteristics of the things they represent, and are defined by means of:
- **Identity.** An avatar has a unique identity and is addressable. An avatar representing a group does not necessarily hide the identities of its inner avatars, but it has its own identity.
- **Services.** These represent access points for exploiting the peculiar capabilities of avatars, that is, depending on the kinds of things and functionalities a service abstracts: triggering and directing sensing/computing/actuating capabilities, or accessing some managed resources.
- **Goals.** Goals, in the sense of desired states of affairs, can be associated with avatars. A goal may have a pre-condition for autonomous activation, or may be explicitly activated by a user or by another avatar.
- **Events.** Events represent specific states of affairs that can be detected by an avatar and that may be of interest to other avatars or to users. Other avatars or users can subscribe to events of interest.
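The four defining elements above can be gathered into one minimal interface. This is a hedged sketch of our own (the fridge example and all method names are illustrative assumptions, not an API from the cited works):

```python
# Minimal avatar: a unique identity, named services as access points,
# and publish/subscribe events. Goals are omitted for brevity.
class Avatar:
    def __init__(self, identity):
        self.identity = identity   # unique, addressable identity
        self.services = {}         # service name -> callable access point
        self.subscribers = {}      # event name -> list of callbacks

    def provide(self, name, fn):
        self.services[name] = fn

    def invoke(self, name, *args):
        return self.services[name](*args)

    def subscribe(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def emit(self, event, payload):
        # Notify every subscriber interested in this event.
        for cb in self.subscribers.get(event, []):
            cb(payload)

# Hypothetical fridge-door avatar from the healthcare scenario.
fridge = Avatar("fridge-door")
fridge.provide("is_open", lambda: False)
opened = []
fridge.subscribe("door-opened", opened.append)
fridge.emit("door-opened", "08:15")
print(fridge.invoke("is_open"), opened)  # False ['08:15']
```

The event mechanism is what pushes the abstraction beyond a plain REST resource: subscribers are notified without polling the avatar's services.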
Clearly, for a group of avatars, an internal orchestration scheme must be defined for coordinating the activities/functionalities of the things (or of the other avatars) it includes. In general terms – and in accordance with established service-oriented approaches – an orchestration scheme defines the internal workflow of activities among the composing things and avatars, and the constraints/conditions they are subject to. An orchestration scheme may also account for contextual information, to make the activities of the group context-aware. The need for defining orchestration schemes and constraints to rule the access to and usage of (groups of) things is generally acknowledged – with specific characteristics and terminologies – in most middleware and programming approaches for the IoT [16], [4].
The avatar abstraction is in line with, and accounts for all the typical characteristics of, most existing IoT approaches. However, the stateful concepts of goals and events make avatars go beyond RESTful approaches. Indeed, these concepts make an avatar more than simply a service provider, turning it into an autonomous entity capable of goal-oriented and situated behaviour. Although most existing approaches recognize the need to somehow incorporate similar concepts within RESTful architectures [8], only a few of them explicitly refer to agent-based computing, to which such concepts belong.
Coalitions. In this case, and without fear of borrowing the term from the area of multiagent systems [6], we define a coalition as a group of avatars that coordinate each other’s activities in order to reach specific goals or enact specific policies. Accordingly, coalitions may be of a temporary or permanent nature. Unlike avatar groups, coalitions do not necessarily have an identity and do not necessarily provide services.
To define a coalition and bring it into action, the coalition abstraction must be defined (at least) in terms of a coordination scheme that should include:
- **Rules for membership**, to specify the conditions upon which an avatar should/could enter a coalition. From the viewpoint of individual avatars, the act of entering a coalition can be represented by the activation of a specific goal based on pre-conditions that correspond to the rules for membership [5].
- **Coordination pattern**, to define the pattern (interaction protocol and shared strategy) by which the members of the coalition have to interact. The coordination pattern may include an explicit representation of the goal by which the coalition has been activated. However, such goal can also be implicit in the definition of the protocol and of the strategy.
- **Coordination law**, to express constraints that must be enforced in the way the avatars involved in the coalition should act and interact.
In addition, one can consider the possibility of subscribing to events occurring within the coalition.
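The membership rules of the coordination scheme can be expressed as a predicate over avatar descriptors. The following is a hedged sketch of ours (the descriptor fields and the night-mode scenario are assumptions for the example):

```python
# A coalition with a goal and a membership rule: avatars are admitted
# only if the rule (a predicate over their descriptors) holds.
class Coalition:
    def __init__(self, goal, membership_rule):
        self.goal = goal                      # state of affairs to reach
        self.membership_rule = membership_rule
        self.members = []

    def admit(self, avatar_descriptor):
        if self.membership_rule(avatar_descriptor):
            self.members.append(avatar_descriptor)
            return True
        return False

# Hypothetical: recruit every light-capable avatar in the bedroom.
night_mode = Coalition(
    goal="dim bedroom for sleep",
    membership_rule=lambda a: a["room"] == "bedroom" and "light" in a["caps"],
)
night_mode.admit({"id": "lamp-1", "room": "bedroom", "caps": ["light"]})
night_mode.admit({"id": "tv-1", "room": "living", "caps": ["screen"]})
print([m["id"] for m in night_mode.members])  # ['lamp-1']
```

A coordination pattern and coordination law would sit on top of this membership mechanism; they are omitted here to keep the sketch minimal.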
The view of avatar coalitions can be used to realize policies, or to aggregate groups of avatars based on similarity, so as to make them work collectively in a mission-oriented way without forcing them into a specific identity-centered orchestration scheme. This is coherent with the idea of multiagent societies and, in general, of distributed dynamic coordination [10]. It is also in line with nature-inspired approaches [18] and with approaches to aggregate programming.
D. From Design to Implementation
The identification of avatars, avatar groups, and coalitions abstracts away from implementation issues. However, the implementation of the individual avatars associated with actual “things”, and of the software supporting the orchestration schemes of avatar groups and the coordination patterns of coalitions, has eventually to follow.
In our perspective, and compared with the state of the art in the area, avatars, groups, and coalitions are abstract enough to tolerate implementation – with different efforts – on top of most existing systems and infrastructures. This article at least contributes by suggesting a deeper adoption of multiagent concepts – and, consequently, of multiagent languages and middleware infrastructures – in the development of next-generation IoT systems.
IV. CONCLUSIONS AND FUTURE WORK
Despite the large number of research works that attack specific problems related to the design and development of IoT applications and services, a general software engineering approach is still missing. This paper, by proposing and framing some key conceptual abstractions revolving around the IoT universe and showing how these relate to agent-based computing, represents a first small step towards a general discipline for engineering IoT systems and applications.
As IoT technologies mature and real-world experiences accumulate, more research in the area of software engineering for IoT systems will be needed. Such research will increasingly have to exploit cross-fertilization with the areas of agent-oriented software engineering [17] and of software engineering for self-adaptive and self-organizing systems [18], eventually leading to the identification of a widely accepted general methodology – and associated tools – for IoT-oriented software engineering.
REFERENCES
An Executable System Architecture Approach to Discrete Events System Modeling Using SysML in Conjunction with Colored Petri Net
Renzhong Wang, Cihan H Dagli
System Engineering Graduate Program, Missouri University of Science and Technology
600 W 14th Street, Rolla, MO, USA, 65409-0370, Email: rwkb4@mst.edu, dagli@mst.edu
Abstract – This paper proposes an executable system architecting paradigm for discrete-event system modeling and analysis through the integration of a set of architecting tools, executable modeling tools, analytical tools, and visualization tools. The essential step is translating SysML-based specifications into Colored Petri Nets (CPNs), which enables rigorous static and dynamic system analysis as well as formal verification of the behavior and functionality of the SysML-based design. A set of tools has been studied and integrated that enables a structured architecture design process. Some basic principles of executable system architecture for discrete-event system modeling that guide the process of executable architecture specification and analysis are discussed. This paradigm is aimed at general system design. Its feasibility was demonstrated with a C4-type network-centric system as an example. The simulation results were used to check the overall integrity and internal consistency of the architecture models, refine the architecture design, and, finally, verify the behavior and functionality of the system being modeled.
Keywords – Discrete-event system, SysML, CPN, Modeling, Executable Architecture
I. INTRODUCTION
Architecture modeling furnishes abstractions for use in managing complexities, allowing engineers to visualize the proposed system and to analyze the problem domain and describe and specify the architecture for the solution domain. However, most architecture packages still only produce static products. Static models are hard to verify and validate because in such models the collaborations between various components defined in the architecture and the information flows among them are specified in a static way. Consequently, they fail to depict the temporal relationships of those components as well as resource utilization over time and thus provide little information about how the system behaves in operational environments. For example, it is very hard, if not impossible, to explore causally chained events and possible system states given a trigger. Rigorous verification and validation of system specifications requires executable models. Simulation capability is typically integrated with executable architectures to further support dynamic analysis of system behavior, performance, and effectiveness.
The significance of executable modeling increases as systems become more complex. Many studies have been undertaken in this field, especially in the software industry. Among them, several schemes have been developed to make the Unified Modeling Language (UML) executable, such as Executable UML (xUML), Executable and Translatable UML (ET-UML), and Virtual Machines (VM). However, because their goal is automatic code generation, these approaches are all based on UML StateChart variants, which means they take an asynchronous view of the system and focus on the reactive behavior of the individual object. For the purpose of general system modeling, UML state machines lack well-defined executable semantics, do not support modeling of multiple instances of classes, and do not scale well to large systems.
An alternative approach to the executable architecture specification is to incorporate Colored Petri Nets (CPNs) as a supplement to UML diagrams. Currently, much of the work in this field is concerned with the transformation process [1, 2, 3]. Petri Nets have also been used to ascribe formal execution semantics to UML notions via a rule-based approach [4]. Some research even proposes a CPN profile for UML [5]. However, much of the work is still based on the transformation of UML state machines [1]. Only a few studies that emphasize the interactive behavior between systems components can be found in literature [6, 7]. Using only CPN to specify and simulate a system is also possible [8]. However, this method is not very common because CPN is not good at giving purely static descriptions of system architecture.
The MITRE Corporation developed an Executable Architecture Methodology for Analysis (EAMA) that translates DoDAF architectures into an executable form using a federation of business process models, communications network models, and combat simulations. Its primary application is in enterprise architecture. Still, relatively few studies [9] can be found that derive executable models for general systems from System Modeling Language (SysML) specifications. The research described in this paper contributes to this field of study.
The paper is organized as follows. Section II discusses the methodologies that support the executable architecting paradigm. Section III presents their application to the modeling and analysis of the Global Earth Observation System of Systems (GEOSS). Finally, Section IV sums up the conclusions and discusses directions for further research. The reader is assumed to be familiar with the basic ideas of SysML and CPN.
II. PROPOSED APPROACHES
A. Executable System Architecting Paradigm
Executable architecting is not yet a mature field. No single modeling tool currently available comes close to supporting the full range of capabilities needed for executable architecting (e.g., specification, presentation, simulation, and analysis of both the static structure and dynamic behavior of a system). Therefore, this paper proposes the combined use of several related tools in an effort to take immediate advantage of the best features of each. The interoperability of these tools is therefore required and studied.
For the specification of formal models, the Systems Modeling Language (SysML) is preferred because it supports the development of a broad range of systems thanks to its rich set of diagrams, rigorous syntax and semantics, and ease of interpretation. As an extension of UML, SysML is an object-oriented modeling language, so it shares the same primitives and basic concepts with many other object-oriented modeling languages, which provides a basis for model interoperability. However, SysML is weak in executable semantics, which limits its capability to analyze and verify defined specifications.
Formal specification of the executable model requires well-defined executable semantics. The choice of modeling language depends on the system to be modeled, the abstraction level to work at, and the system behavior of interest. In many modern engineering systems such as communication networks, flexible manufacturing systems, control systems, transportation systems, and C4 systems, the behavior of interest is driven only by events that occur at discrete time points. Such systems are best specified by discrete-event models. As defined in [10], discrete-event models represent the operation of a system as a chronological discrete sequence of events. Each event occurs at an instant in time and marks a change of state in the system. Therefore, an executable architecture specified in this way is a dynamic model that defines the precise event sequences, the conditions under which an event is triggered and information is produced or consumed, and the properties of producers, consumers, and other resources associated with the operation of the system. Usually, the complexity of such systems stems from the fact that the overall system behavior is determined not only by the components individually but also by their interactions. Therefore, the target system should be modeled as a collection of objects and their interconnections, information to be processed and exchanged, the order of events, and other properties.
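The discrete-event view described above — a chronological sequence of events, each marking a state change — can be illustrated with a minimal event-queue loop. This is our own sketch, not part of the paper's toolchain:

```python
# Minimal discrete-event loop: events are (time, seq, action) tuples
# processed in chronological order; each action marks a state change.
import heapq

def simulate(events, state):
    """events: list of (time, seq, action); seq breaks ties so the
    non-comparable action callables are never compared by heapq."""
    queue = list(events)
    heapq.heapify(queue)
    trace = []
    while queue:
        t, _, action = heapq.heappop(queue)
        action(state)                   # the event changes the state
        trace.append((t, dict(state)))  # snapshot after the event
    return trace

# Hypothetical two-event run: a message is sent, then received.
state = {"sent": 0, "received": 0}
trace = simulate(
    [(2.5, 1, lambda s: s.update(received=s["received"] + 1)),
     (1.0, 0, lambda s: s.update(sent=s["sent"] + 1))],
    state,
)
print(state)  # {'sent': 1, 'received': 1}
```

Note that the events are supplied out of order but processed chronologically; between events nothing happens, which is exactly the discrete-event assumption.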
A variety of executable formalisms have been developed that support the development of discrete-event systems and offer the capabilities for dynamic behavior analysis, for example, Finite-automata, StateCharts, DEVS (Discrete Event System Specification), Petri nets, and GSMP (Generalized Semi-Markov Processes). The approach proposed in this paper intends to accommodate as broad a range of systems as possible. Hence, we are interested in a modeling formalism that is sufficiently general, i.e. independent of domain and technological substance of a specific system, and easy to map to selected formal model specifications, i.e. SysML. More specifically, we want an executable modeling formalism that is based on generic dynamic systems concepts, i.e. states and transitions, supports concurrency, synchronization and resource sharing, offers hierarchical description and modularity, and facilitates analysis and verification. Based on these criteria, the Colored Petri Net (CPN) emerges as the best choice among those formalisms investigated. The basic notation for Petri Nets is a bipartite graph consisting of places and transitions that alternate on a path and are connected by directional arcs [11]. Tokens are used to mark places, which represent the state of a system. When certain conditions hold, transitions will be fired, causing a change in the placement of tokens and thus the change of system states. CPNs extend the vocabulary of basic Petri Nets allowing tokens to have an associated attribute and add new features that enable Petri Nets to scale to large system modeling. CPNs combine the strength of ordinary Petri Nets with the strength of a high-level programming language, which provides the primitives for definition of data types and manipulation of their data values [12]. Reference [12] provides an in-depth discussion of the advantages of using CPN. What also needs to be mentioned is that three characteristics distinguish CPNs from other executable formalisms. 
First, CPNs offer an advantage of combining a well-defined mathematical foundation, an interactive graphical representation and simulation, and the capabilities to carry out simulations and formal verifications. Secondly, it is possible to use the same (or at least very similar) models to check both the logical or functional correctness of a system and for performance analysis [15]. Finally, CPNs are very flexible in token definition and manipulation. Various architectural elements, e.g. components, tasks, messages, events, and even use cases can all be described by different types of tokens. This feature makes CPN modeling even more flexible and capable of modeling a large variety of systems.
Formal models specified by SysML can be transformed to executable models represented by CPNs by following well defined procedures and mappings between these two notations. This transformation supplements SysML modeling with formal dynamic semantic plus the behavioral modeling and analysis strength. CPNs have a formal, mathematical representation, which not only can unambiguously define the behavioral properties but also forms the foundation for formal analysis methods. Information about the structure and simulation of a CPN can easily be extracted and communicated with external applications and processes, which provides a means to enhance simulations and further extend CPN’s capabilities in model analysis with the aids of other analysis tools.
In summary, by integrating the above mentioned tools, we can create an executable architecture paradigm that offers a structured design process as shown in Fig. 1. This is an iterative process starting with requirements analysis and specification, through which the desired behavior of the system is captured. The executable model (represented by CPN) developed from the static model (a set of SysML diagrams) is capable of generating dynamic behavior (the behavior as modeled). Key information can be extracted from the simulation to support architecture evaluation and analysis. Based on the results, the system can be modified and another design cycle can begin. Finally, by comparing the desired behavior and the behavior as modeled, we are able to verify the system architecture being designed.
B. Transformation from SysML to CPN
Pre-conditions. The architecture specification should be formal enough to accommodate executable semantics. That is, it must capture sufficient representations of architectures and be unambiguous and consistent. An architecture will not be fully operational until all nodes and activities are properly configured and connected and are consistent in terminology, definition, and data exchange syntax.
Transformation schemes based on static view versus dynamic view: The goal of developing an executable architecture model in this paper is to facilitate the investigation of system-wide properties. Because the interactive behavior of system components is of greater interest than the reactive behavior of individual components, it is better to define the executable model around synchronous operation calls between operational nodes. For this reason, the SysML-to-CPN transformation discussed in this paper is primarily based on SysML sequence diagrams.
Basic mapping from SysML models to CPNs:
Places/Transitions. There exist two basic alternatives to map from the SysML specifications to the CPNs resulting in two different interpretations:
1. Identify actions with places in the CPN. In this case, the state of a system can be interpreted as what the system is doing. This is the approach that UML/SysML state machines usually adopt, and it is well suited to depicting the reactive behavior of a system.
2. Identify actions with transitions. In this case, the state of the system can be interpreted as a set of conditions that a system is holding and a set of effects after the system did something.
The second alternative is chosen in this paper for two reasons. First, the hierarchy (and modularity) of a CPN is achieved by means of substitution transitions. By identifying actions with transitions, we can decompose an action using a substitution transition to further elaborate on that behavior, or decompose the entity that generates the action. Secondly, modeling actions by transitions allows us to model data flow and/or control flow more clearly. This is desirable since the essence of many discrete systems is information processing.
Actors: Since, for every action, there is an actor that is responsible for that action, the active objects (objects having an action) in SysML can be bound to transitions.
Tokens: When places are used to model conditions and effects, tokens can be used to model resources, control signals, and input/output message or other entities to be processed or exchanged during a transition (action).
Transformation procedures. The transformation from SysML specifications to CPNs must be faithful for the simulation of the executable model to be used to verify and validate the SysML model. Accordingly, an unambiguous mapping between the elements of various SysML diagrams and CPNs must be established. Fig. 2 outlines the procedure used in this paper for synthesizing a CPN model from a SysML model. Note that, in order to facilitate simulation and performance analysis, some extra CPN constructs such as simulation monitors, which are not converted directly from the SysML model, are allowed to be added to the original CPN model, provided that the logic of the system is not impacted.
Step 0: Augment the sequence diagram(s). For each object in the sequence diagram(s), add the operation description to the appropriate position on the lifeline in between the input and output message/event. The operations should have been defined in block definition diagram(s).
Step 1: Create a transition for each operation in the sequence diagram(s) (preferably also list the object description next to the operation description).
Step 2: Create a substitution transition for each nested sequence diagram.
Step 3: Create a place for each message/event between lifelines. Assign the appropriate color set and create the corresponding declaration in the index.
Step 4: Create arcs between transitions and places according to the sequence diagrams. There should be a one-to-one match between the number of messages/events in the sequence diagrams and the number of places between transitions in the CPN model.
Step 5: Add Arc inscriptions, guard functions, or code segments derived from the rules associated with each operation.
Step 6: Create a sub-page for each substitution transition.
6.1. Follow steps 0 to 5 to create all the related transitions, places, and arcs.
6.2. Assign Input, Output, and I/O port places.
Step 7: Assign socket places and connect all substitution transitions and their sub-pages.
Step 8: Specify initial markings for each related place.
Fig. 2. Translation Schemes from SysML to CPN
Based on the procedure outlined in Fig. 2, the basic mappings between elements in SysML diagrams and elements in a CPN model are generated and presented in Table 1, which
also establishes a concordance between various entities within a set of SysML diagrams.
Table 1. Mapping between Elements in a SysML Model and a CPN Model
<table>
<thead>
<tr>
<th>System Entities</th>
<th>Elements in SysML Diagrams</th>
<th>Elements in CPN Model</th>
</tr>
</thead>
<tbody>
<tr>
<td>Active object</td>
<td>Interacting Object</td>
<td>(Substitution) Transition</td>
</tr>
<tr>
<td>Passive / connector object</td>
<td>N/A</td>
<td>Place</td>
</tr>
<tr>
<td>Transient information / event</td>
<td>Information on the message line between lifelines</td>
<td>Token</td>
</tr>
<tr>
<td>Operation Call</td>
<td>Information on the message line and/or description line</td>
<td>Interface specification</td>
</tr>
<tr>
<td>Flow</td>
<td>Message line between lifelines</td>
<td>Arc</td>
</tr>
<tr>
<td>Module</td>
<td>Nested sequence diagram</td>
<td>Sub-page</td>
</tr>
</tbody>
</table>
C. Integration of CPN with Supporting Tools
The CPN modeling language is supported by CPN Tools, a graphical software tool for creating, editing, simulating, and analyzing CPN models. CPN Tools provides Comms/CPN, a CPN ML library that allows CPN Tools to communicate over TCP/IP with external applications and processes. This provides a means of extending CPN's simulation capabilities, e.g. extraction of useful information, a Graphical User Interface (GUI), instant feedback, and interactive control of the simulation process. For example, two tools often used with a CPN are the BRITNeY Suite [13] and Graphviz. The BRITNeY Suite is a Java application that runs on top of CPNs; during a simulation, users can control the simulation execution through its GUI, and a variety of graphical outputs such as Message Sequence Charts (MSCs) and State Space Graphs can be generated afterwards. These are important means for analyzing the behavior of the system being modeled. Graphviz is another option for generating graphical outputs from a CPN simulation, e.g. State Space Graphs. More software tools supporting the analysis of CPNs can be found in [14].
D. Architecture Analysis and Evaluation
CPN models and simulations contain detailed quantitative information about the performance of a system, such as throughput, processing time, queue lengths, and resource utilization. This information can be extracted to support the investigation and discovery of structural and dynamic system properties that correctly reflect the behavior of the system in reality.
Three forms of architecture evaluation, logical, behavioral, and performance, are described in [7]. The logic is examined by testing each step of the execution to ensure that the model follows the desired logic. The behavior of the system can be observed directly from the simulation. However, it is often beyond the capability of human beings to observe the details of a simulation by watching the CPN and its markings. A number of alternative behavioral analysis methods are provided in [14], such as simulation reports, report places, business charts, Message Sequence Charts (MSCs), state space reports (dead transitions, liveness, home properties, deadlock, conservation properties, etc.), and state space graphs. Reference [15] provides an overview of some new performance analysis facilities supported by the latest version of CPN Tools.
Behavior and Functionality Verification. When the whole set of conditions and events of a system are specified correctly in a CPN model, the model should be able to undergo appropriate sequences of state transitions. Therefore, we can verify the system design by comparing the behavior as modeled and the desired behavior. The former can be obtained from the Message Sequence Charts (MSCs) while the latter are captured by the sequence diagrams. If the comparison shows a match, the model can be verified and validated. If the match is insufficient, then either the architecture model needs to be modified in order to better represent the system architecture or the system architecture needs to be reconfigured in order to better satisfy the requirements.
Identification of Missing Specifications and Missing Requirements. Missing specifications can be identified in the process of both executable model synthesis and simulation because an incomplete model is not executable. Simulation runs can also reveal missing requirements, which, in this context, are functions or capabilities that the system must support in order to generate the required behavior or performance but have not been specified yet.
III. APPLYING EXECUTABLE ARCHITECTURE PARADIGM TO GEOSS
The Global Earth Observation System of Systems (GEOSS) is a system of networked sensors, communication devices, storage devices, computers, and other resources used in concert to observe the Earth. In this paper, GEOSS was modeled as a distributed multi-task concurrent information processing system with high interoperability, maintainability, and
expandability. The challenge is to model the management, retrieval, and processing of the observation datasets and information products in a distributed and heterogeneous computational environment that links distributed centers, users, data, applications, computer networks, and storage resources.
A system design that is resilient to change is highly desirable. Hence, the Model Driven Architecture (MDA) approach [16] was employed to guide the architecture development process. The MDA approach enables the same model specifying business processes or application functionality to be realized on multiple platforms. The benefit is great improvement in portability, interoperability, reusability, and maintainability. In general system design, MDA can be achieved through an iterative refinement process, which is driven by introducing domain information such as structural, behavioral interoperability, and interfacing requirements of system components. The resulting system architecture was a layered architecture rather than the typical federated one. This style of organizing the components standardizes the architecture while greatly leveraging flexibility. Fig. 3 is a SysML block definition diagram showing the relationships of various components within GEOSS. The system activities are realized as five layers and a cross-cutting section based on their roles in data and information processing. Lower layers provide service to upper layers and upper layers are logically closer to end users.
Fig. 3. Internal Block Diagram - GEOSS Internal Connections
Layer 1, the User Interface, comprises components that interact directly with end users and end-user tools. Layer 2, Applications and Tools, comprises common applications and tools that provide services to user applications. Layer 3, Configuration and Execution Management, comprises “service modules” that manage distributed resources such as application environment configuration, distributed computational resources coordination, and application input and output, archives and workflow management. Layer 4, Resource Access, provides the data transmission service and the standard protocols for accessing raw services. Layer 5, Resources, represents all the physical raw resources such as distributed database and storage, computational hardware and software, sensors, and data collection centers. Some cross-cutting components providing functionality that spans multiple layers are identified and grouped into a package called Common Services.
The behavior of the system is specified using SysML activity diagrams and sequence diagrams. Since the SysML-to-CPN transformation method developed in this paper is primarily based on the latter, the example shown here only includes sequence diagrams. Fig. 4 depicts the sub-activity of collecting observation data.
The MDA principle fosters modularity which can be reflected through nested sequence diagrams. These modules, achieved through CPN substitution transitions, can be developed and tested in isolation. Fig. 5 depicts the CPN module derived from the sequence diagram presented in Fig. 4 using the transformation rules defined in Section II.
Three animation tools supported by BRITNeY have been used in this paper:
1. Interactive Control, which includes accepting inputs from outside users and providing graphical feedback,
2. Message Sequence Charts (MSCs), and
By comparing the above SysML sequence diagram and the corresponding MSC, we can conclude that the behavior as modeled (reflected by the MSC) conforms to the desired behavior (captured in the sequence diagrams). Thus, the system architecture can be verified.
IV. CONCLUSION AND FUTURE WORK
This paper introduced an executable system architecting solution based on SysML-CPN transformation. The approach proposed here models interactive behavior between various system components using states and transitions, as well as conditions and events, as the core semantics. To achieve this framework, a set of methodologies including a formal transformation procedure, a well-defined mapping between these formalisms, and some architecture analysis technologies were developed. This paradigm facilitates the investigation of system design before implementation starts. The benefit is an improved understanding of key design issues, fewer design errors, and faster system development. The feasibility of this paradigm has been demonstrated using an information system as an example. The methodology should generalize to a broad range of discrete-event-driven concurrent system designs.
In this paper, the modeling activities emphasize the functional aspects of the system. Non-functional performance, such as timing, resource usage, optimization, scheduling, security, and reliability, is sometimes of greater interest. Quite often, non-functional performance is emergent behavior, so simulation plays an even more important role in performance forecasting, evaluation, and verification. Further study needs to be carried out in this area. Non-functional concerns are often coupled; for example, resource constraints may impact processing time and cause task scheduling and prioritization problems. Non-functional requirements can also impose constraints on the functional behavior. For example, security requirements may require the system to provide registration, subscription, authorization, or authentication services, while accessibility may require resource control and prioritization capabilities. In order to simulate and measure non-functional performance, some mathematical methods, computational intelligence tools, and other external simulation environments may need to be integrated into the executable model. More analysis techniques should be studied and integrated.
REFERENCE
Topics
- Heap allocation
- Manual heap allocation
- Automatic memory reclamation (GC)
Limitations of Stack Frames
• A local variable of P cannot be stored in the activation record of P if its duration exceeds the duration of P
• Example: Dynamic allocation
int * f() { return (int *) malloc(sizeof(int)); }
Currying Functions
```c
/* Nested functions are a GNU C extension; standard C has no
   closures.  Returning g is unsafe: the captured x lives in
   f's stack frame, which is deallocated when f returns. */
int (*f(int x))(int)
{
    int g(int y)
    {
        return x + y;
    }
    return g;
}

int (*h)(int) = f(3);
int (*j)(int) = f(4);
int z = h(5);   /* intended: 8  */
int w = j(7);   /* intended: 11 */
```
Program Runtime State
- Code segment
- Stack segment
- Data Segment
- Machine Registers
- Heap
Data Allocation Methods
• Explicit deallocation
• Automatic deallocation
Explicit Deallocation
- Pascal, C, C++
- Two basic mechanisms
- void * malloc(size_t size)
- void free(void *ptr)
- Part of the language runtime
- Expensive
- Error prone
- Different implementations
Memory Structure used by `malloc()`/`free()`
Simple Implementation
SET the polymorphic chunk pointer First_chunk pointer TO
Beginning of available memory;
SET the polymorphic chunk pointer One past available memory TO
Beginning of available memory + Size of available memory;
SET First_chunk pointer .size TO Size of available memory;
SET First_chunk pointer .free TO True;
FUNCTION Malloc (Block size) RETURNING a polymorphic block pointer:
SET Pointer TO Pointer to free block of size (Block size);
IF Pointer /= Null pointer: RETURN Pointer;
Coalesce free chunks;
SET Pointer TO Pointer to free block of size (Block size);
IF Pointer /= Null pointer: RETURN Pointer;
RETURN Solution to out of memory condition (Block size); // may call GC
PROCEDURE Free (Block pointer):
SET Chunk pointer TO Block pointer - Administration size;
SET Chunk pointer .free TO True;
FUNCTION Pointer to free block of size (Block size)
RETURNING a polymorphic block pointer:
// Note that this is not a pure function
SET Chunk pointer TO First_chunk pointer;
SET Requested chunk size TO Administration size + Block size;
WHILE Chunk pointer /= One past available memory:
IF Chunk pointer .free:
IF Chunk pointer .size - Requested chunk size >= 0:
// large enough chunk found:
Split chunk (Chunk pointer, Requested chunk size);
SET Chunk pointer .free TO False;
RETURN Chunk pointer + Administration size;
// try next chunk:
SET Chunk pointer TO Chunk pointer + Chunk pointer .size;
RETURN Null pointer;
PROCEDURE Split chunk (Chunk pointer, Requested chunk size):
SET Left_over size TO Chunk pointer .size - Requested chunk size;
IF Left_over size > Administration size:
// there is a non-empty left-over chunk
SET Chunk pointer .size TO Requested chunk size;
SET Left_over chunk pointer TO
Chunk pointer + Requested chunk size;
SET Left_over chunk pointer .size TO Left_over size;
SET Left_over chunk pointer .free TO True;
Coalescing Chunks
PROCEDURE Coalesce free chunks:
SET Chunk pointer TO First_chunk pointer;
WHILE Chunk pointer /= One past available memory:
IF Chunk pointer .free:
Coalesce with all following free chunks (Chunk pointer);
SET Chunk pointer TO Chunk pointer + Chunk pointer .size;
PROCEDURE Coalesce with all following free chunks (Chunk pointer):
SET Next_chunk pointer TO Chunk pointer + Chunk pointer .size;
WHILE Next_chunk pointer /= One past available memory
AND Next_chunk pointer .free:
// Coalesce them:
SET Chunk pointer .size TO
Chunk pointer .size + Next_chunk pointer .size;
SET Next_chunk pointer TO Chunk pointer + Chunk pointer .size;
Fragmentation
• **External**
– Too many small chunks
• **Internal**
– A use of too big chunk without splitting the chunk
• Freelist may be implemented as an array of lists
Garbage Collection
ROOT SET
Stack + Registers
Garbage Collection
ROOT SET
a
b
c
d
e
f
HEAP
What is garbage collection
• The runtime environment reuses chunks that were allocated but are not subsequently used
• garbage chunks
– not live
• Finding the garbage chunks exactly is undecidable:
– Decidability of liveness
– Decidability of type information
• conservative collection
– every live chunk is identified
– some garbage chunks are not identified
• Find the reachable chunks via pointer chains
• Often done in the allocation function
```c
typedef struct list {struct list *link; int key;} *List;
typedef struct tree {int key;
                     struct tree *left;
                     struct tree *right;} *Tree;

void foo() { List x = cons(NULL, 7);
             List y = cons(x, 9);
             x->link = y;
}

void main() {
    Tree p, r; int q;
    foo();
    p = maketree(); r = p->right;
    q = r->key;
    showtree(r);
}
```
(Diagram: the stack holds p, q, r and foo's locals x, y; the heap holds the two list cells {link, key = 7} and {link, key = 9}, which become garbage once foo returns.)
Outline
• Why is it needed?
• Why is it taught?
• Reference Counts
• Mark-and-Sweep Collection
• Copying Collection
• Generational Collection
• Incremental Collection
• Interfaces to the Compiler
Tracing
A Pathological C Program
```c
a = malloc(...);
b = a;
free(a);
c = malloc(...);
if (b == c) printf("unexpected equality");
```
Garbage Collection vs. Explicit Memory Deallocation
- Faster program development
- Less error prone
- Can lead to faster programs
- Can improve locality of references
- Support very general programming styles, e.g. higher order and OO programming
- Standard in ML, Java, C#
- Supported in C and C++ via separate libraries
- May require more space
- Needs a large memory
- Can lead to long pauses
- Can change locality of references
- Effectiveness depends on programming language and style
- Hides documentation
- More trusted code
Interesting Aspects of Garbage Collection
• Data structures
• Non constant time costs
• Amortized algorithms
• Constant factors matter
• Interfaces between compilers and runtime environments
• Interfaces between compilers and virtual memory management
Reference Counts
• Maintain a counter per chunk
• The compiler generates code to update counter
• Constant overhead per instruction
• Cannot reclaim cyclic elements
Another Example
Another Example ($x \rightarrow b=\text{NULL}$)
Code for p := q
IF Points into the heap (q):
Increment q .reference count;
IF Points into the heap (p):
Decrement p .reference count;
IF p .reference count = 0:
Free recursively depending on reference counts (p);
SET p TO q;
Recursive Free
PROCEDURE Free recursively depending on reference counts(Pointer);
WHILE Pointer /= No chunk:
IF NOT Points into the heap (Pointer): RETURN;
IF NOT Pointer .reference count = 0: RETURN;
FOR EACH Index IN 1 .. Pointer .number of pointers - 1:
Free recursively depending on reference counts
(Pointer .pointer [Index]);
SET Aux pointer TO Pointer;
IF Pointer .number of pointers = 0:
SET Pointer TO No chunk;
ELSE Pointer .number of pointers > 0:
SET Pointer TO
Pointer .pointer [Pointer .number of pointers];
Free chunk(Aux pointer); // the actual freeing operation
Lazy Reference Counters
- Free one element
- Free more elements when required
- Constant time overhead
- But may require more space
Reference Counts (Summary)
- Fixed but big constant overhead
- Fragmentation
- Cyclic Data Structures
- Compiler optimizations can help
- Can delay updating reference counters from the stack
- Implemented in libraries and file systems
- No language support
- But not currently popular
- Will it be popular for large heaps?
Mark-and-Sweep(Scan) Collection
- **Mark** the chunks reachable from the roots (stack, static variables and machine registers)
- **Sweep** the heap space by moving unreachable chunks to the freelist (Scan)
The Mark Phase
for each root v
DFS(v)
function DFS(x)
if x is a pointer and chunk x is not marked
mark x
for each reference field f_i of chunk x
DFS(x.f_i)
The Sweep Phase
p := first address in heap
while p < last address in the heap
if chunk p is marked
unmark p
else let $f_1$ be the first pointer reference field in p
p.$f_1$ := freelist
freelist := p
p := p + size of chunk p
Sweep
Diagram of a linked list structure.
Cost of GC
• The cost of a single garbage collection can be linear in the size of the store
– may cause quadratic program slowdown
• Amortized cost
– collection-time/storage reclaimed
– Cost of one garbage collection
• $c_1 R + c_2 H$
– H - R reclaimed chunks (H = heap size, R = reachable)
– Cost per reclaimed chunk
• $(c_1 R + c_2 H)/(H - R)$
– If $R/H > 0.5$
• increase H
– if $R/H < 0.5$
• cost per reclaimed word is $c_1 + 2c_2 \approx 16$
– There is no lower bound
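For concreteness, a worked instance of the amortized-cost formula above (the split into $c_1 = 10$ and $c_2 = 3$ is an illustrative assumption consistent with the $\approx 16$ on the slide, not a quoted measurement):

```latex
\frac{c_1 R + c_2 H}{H - R}
\;\xrightarrow{\;R = H/2\;}\;
\frac{c_1 \tfrac{H}{2} + c_2 H}{\tfrac{H}{2}} = c_1 + 2 c_2,
\qquad\text{e.g. } c_1 = 10,\; c_2 = 3 \;\Rightarrow\; 16.
```

Growing $H$ when $R/H > 0.5$ keeps the denominator $H - R$ large, which is why the per-chunk cost has no lower bound as $H \to \infty$.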
Efficient implementation of Mark(DFS)
- Explicit stack
- Parent pointers
- Pointer reversal
- Other data structures
Adding Parent Pointer
Avoiding Parent Pointers (Deutsch-Schorr-Waite)
• Depth first search can be implemented without recursion or stack
• Maintain a counter of visited children
• Observation:
– The pointer link from a parent to a child is not needed when it is visited
– Temporary store pointer to the parent (instead of the field)
– Restore when the visit of child is finished
Arriving at C
Visiting the n-th pointer field of C (leading to D)
SET old parent pointer TO parent pointer ;
SET Parent pointer TO chunk pointer ;
SET Chunk pointer TO n-th pointer field of C;
SET n-th pointer field in C TO old parent pointer;
About to return from D
SET old parent pointer TO Parent pointer;
SET Parent pointer TO n-th pointer field of C;
SET n-th pointer field of C TO chunk pointer;
SET chunk pointer TO old parent pointer;
Compaction
• The sweep phase can compact adjacent chunks
• Reduce fragmentation
Copying Collection
- Maintains two separate heaps
- \texttt{from-space}
- \texttt{to-space}
- \texttt{next} pointer to the next free chunk in \texttt{from-space}
- A pointer \texttt{limit} to the last chunk in \texttt{from-space}
- If \texttt{next} = \texttt{limit} copy the reachable chunks from \texttt{from-space} into \texttt{to-space}
- set \texttt{next} and \texttt{limit}
- Swap the roles of \texttt{from-space} and \texttt{to-space}
- Requires type information
Breadth-first Copying Garbage Collection
next := beginning of to-space
scan := next
for each root r
r := Forward(r)
while scan < next
for each reference field $f_i$ of chunk at scan
scan.$f_i$ := Forward(scan.$f_i$)
scan := scan + size of chunk at scan
The Forwarding Procedure
function Forward(p)
if p points to from-space
then if p.f_1 points to to-space
return p.f_1
else for each reference field f_i of p
next.f_i := p.f_i
p.f_1 := next
next := next + size of chunk p
return p.f_1
else return p
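The scan loop and the forwarding procedure combine into a small runnable collector. The sketch below assumes fixed-size chunks with two reference fields, so `f[0]` doubles as the forwarding pointer exactly as `p.f_1` does above; in a real system chunk sizes and field counts would come from runtime descriptors:

```c
#include <stddef.h>

#define SEMI 64                    /* chunks per semi-space (illustrative) */

typedef struct Chunk {
    struct Chunk *f[2];            /* f[0] doubles as the forwarding pointer */
    int data;
} Chunk;

static Chunk from_space[SEMI], to_space[SEMI];
static Chunk *next_free, *scan;

static int in_from_space(Chunk *p) {
    return p != NULL && p >= from_space && p < from_space + SEMI;
}
static int in_to_space(Chunk *p) {
    return p != NULL && p >= to_space && p < to_space + SEMI;
}

/* Forward: copy p into to-space once; later calls find the forwarding
   pointer left behind in p->f[0] and just return it. */
static Chunk *forward(Chunk *p)
{
    if (!in_from_space(p))
        return p;                  /* NULL or already a to-space pointer */
    if (in_to_space(p->f[0]))
        return p->f[0];            /* already copied: p.f_1 is the forward */
    Chunk *q = next_free++;        /* allocate in to-space */
    *q = *p;                       /* copy all fields verbatim */
    p->f[0] = q;                   /* install forwarding pointer */
    return q;
}

/* Cheney's breadth-first copying collection over an array of roots. */
static void collect(Chunk **roots, size_t nroots)
{
    next_free = to_space;
    scan = to_space;
    for (size_t i = 0; i < nroots; i++)
        roots[i] = forward(roots[i]);
    while (scan < next_free) {     /* the grey region is [scan, next_free) */
        for (int i = 0; i < 2; i++)
            scan->f[i] = forward(scan->f[i]);
        scan++;                    /* fixed-size chunks: advance by one */
    }
}
```

The region between `scan` and `next_free` acts as the breadth-first queue: chunks are copied at `next_free` and have their fields fixed up when `scan` reaches them, with no extra data structure.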
A Simple Example
```c
struct DL {
    int data;         /* data field               */
    struct DL *f;     /* forward reference field  */
    struct DL *b;     /* backward reference field */
};
```
[Figure: successive heap snapshots. Initially from-space holds a doubly-linked list of two chunks, f400 = {17, f: f800, b: 0} and f800 = {13, f: 0, b: f400}; the stack root points at f400, and to-space is empty with scan and next both at t600. The snapshots step through Forward(f400), Forward(f800), Forward(0) (twice), and a final Forward(f400) that simply returns the already-installed forwarding pointer.]
[Figure: final state after scanning completes. From-space: f400 = {17, t600, 0} and f800 = {13, t612, f400}, each chunk's first reference field now holding its forwarding pointer. Stack root: t600. To-space: t600 = {17, f: t612, b: 0} and t612 = {13, f: 0, b: t600}, with scan and next both past the end of the copied data.]
[Figure: breadth-first copying of a binary search tree (nodes 12, 7, 59, 9, 20, 15, 37 with left/right fields); from-space originals hold forwarding links while scan and next advance through to-space in level order.]
Amortized Cost of Copy Collection
\[ \frac{c_3 R}{H/2 - R} \]
Locality of references
• Copy collection does not create fragmentation
• Cheney's algorithm may lead to subfields that point to far away chunks
– poor virtual memory and cache performance
• DFS normally yields better locality but is harder to implement
• DFS may also be bad for locality for chunks with more than one pointer field
• A compromise is a hybrid breadth-first search that descends two levels (semi-depth-first forwarding)
• Results can be improved using dynamic information
The New Forwarding Procedure
function Forward(p)
if p points to from-space
then if p.f₁ points to to-space
return p.f₁
else Chase(p); return p.f₁
else return p
function Chase(p)
repeat
q := next
next := next + size of chunk p
r := null
for each reference field fᵢ of p
q.fᵢ := p.fᵢ
if q.fᵢ points to from-space and
q.fᵢ.f₁ does not point to to-space
then r := q.fᵢ
p.f₁ := q
p := r
until p = null
Generational Garbage Collection
• Newly created objects contain a higher percentage of garbage
• Partition the heap into generations \( G_1 \) and \( G_2 \)
• First garbage collect the \( G_1 \) heap
– only the chunks which are still reachable survive
• After two or three collections chunks are promoted to \( G_2 \)
• Once in a while garbage collect \( G_2 \)
• Can be generalized to more than two heaps
• But how can we garbage collect in \( G_1 \)?
Scanning roots from older generations
• **remembered list**
– The compiler generates code after each destructive update: \[ b.f_i := a \]
to put \( b \) into a vector of updated objects scanned by the garbage collector
• **remembered set**
– remembered-list + “set-bit”
• **Card marking**
– Divide the memory into cards of size \( 2^k \)
• **Page marking**
– card size \( 2^k \) = page size
– the virtual memory system catches updates to old generations using the dirty-bit
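A remembered list with a set-bit can be sketched as a compiler-inserted write barrier. The names below (`write_barrier`, `in_remembered_set`) are illustrative, not taken from any particular runtime:

```c
#include <stddef.h>

typedef struct Obj {
    int in_remembered_set;         /* the "set-bit" */
    struct Obj *field;             /* the reference field being updated */
} Obj;

#define MAX_REMEMBERED 1024
static Obj *remembered[MAX_REMEMBERED];
static size_t n_remembered;

/* Compiler-inserted barrier for the destructive update b.f_i := a:
   besides performing the store, it records b so a later minor
   collection can scan it as an extra root into G1. */
static void write_barrier(Obj *b, Obj *a)
{
    b->field = a;
    if (!b->in_remembered_set && n_remembered < MAX_REMEMBERED) {
        b->in_remembered_set = 1;  /* set-bit keeps the list duplicate-free */
        remembered[n_remembered++] = b;
    }
}
```

The set-bit is what turns the remembered *list* into a remembered *set*: repeated updates to the same object cost one extra test instead of one extra list entry.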
Incremental Collection
• Even the most efficient garbage collection can interrupt the program for quite a while
• Under certain conditions the collector can run concurrently with the program (mutator)
• Need to guarantee that mutator leaves the chunks in consistent state, e.g., may need to restart collection
• Two solutions
– compile-time
• Generate extra instructions at store/load
– virtual-memory
• Mark certain pages as read(write)-only
• a write into (read from) such a page by the program traps into the collector before the mutator restarts
Tricolor marking
• Generalized GC
• Three kinds of chunks
– White
• Not visited (not marked or not copied)
– Grey
• Marked or copied but children have not been examined
– Black
• Marked, and all their children have been examined
Basic Tricolor marking
while there are any grey objects
select a grey chunk $p$
for each reference field $f_i$ of chunk $p$
if chunk $p.f_i$ is white
color chunk $p.f_i$ grey
color chunk $p$ black
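The loop above, with the grey set kept on an explicit stack, looks like this in C (an illustrative sketch using a hypothetical two-field object type):

```c
#include <stddef.h>

typedef enum { WHITE, GREY, BLACK } Color;

typedef struct Obj {
    Color color;                   /* every chunk starts WHITE */
    struct Obj *f[2];
} Obj;

#define MAX_GREY 1024
static Obj *grey_stack[MAX_GREY];  /* the collector's grey data structure */
static size_t n_grey;

/* Shade: a white chunk becomes grey and joins the work list. */
static void shade(Obj *p)
{
    if (p != NULL && p->color == WHITE) {
        p->color = GREY;
        grey_stack[n_grey++] = p;
    }
}

/* Basic tricolor marking: roots are shaded, then grey chunks are
   processed until none remain; processed chunks turn black. */
static void tricolor_mark(Obj **roots, size_t nroots)
{
    for (size_t i = 0; i < nroots; i++)
        shade(roots[i]);
    while (n_grey > 0) {           /* "while there are any grey objects" */
        Obj *p = grey_stack[--n_grey];
        for (int i = 0; i < 2; i++)
            shade(p->f[i]);        /* white children become grey */
        p->color = BLACK;          /* color chunk p black */
    }
}
```

Mark-sweep's mark stack and Cheney's scan/next region are both instances of this grey data structure.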
Invariants
• No black chunk points to a white chunk
• Every grey chunk is on the collector's data structure (stack or queue)
Establishing the invariants
- Dijkstra, Lamport, et al
- Mutator stores a white pointer $a$ into a black pointer $b$
- color $a$ grey (compile-time)
- Steele
- Mutator stores a white pointer $a$ into a black pointer $b$
- color $b$ grey (compile-time)
- Boehm, Demers, Shenker
- All black pages are marked read-only
- A store into a black page marks all the objects on this page grey (virtual memory system)
- Baker
- Whenever the mutator fetches a pointer $b$ to a grey or white object
- color $b$ grey (compile-time)
- Appel, Ellis, Li
- Whenever the mutator fetches a pointer $b$ from a page containing a non-black object
- color every object on this page black and children grey (virtual memory system)
Interfaces to the Compiler
• The semantic analysis identifies chunk fields which are pointers and their size
• Generate runtime descriptors at the beginning of the chunks
– Can employ different allocation/deallocation functions
• Pass the descriptors to the allocation function
• The compiler also passes pointer-map
– the set of live pointer locals, temporaries, and registers
• Recorded at ?-time for every procedure
Summary
- Garbage collection is an effective technique
- Leads to more secure programs
- Tolerable cost
- But is not used in certain applications
- Real-time systems
- Generational garbage collection works fast
- Emulates stack
- But high synchronization costs
- Compiler can allocate data on stack sometimes
- Escape analysis
- May be improved
A tutorial for \textbf{blockcluster} R package
Version 1.01
Parmeet Singh Bhatia
INRIA-Lille, parmeet.bhatia@inria.fr
Contents
1 Introduction
2 Package details
2.1 cocluster function
2.2 cocluststrategy function
2.2.1 Understanding various input parameters
2.3 Example using simulated Binary dataset
3 Examples with real datasets
3.1 Image segmentation
3.2 Document clustering
4 Remarks
Abstract
\textbf{blockcluster} is a newly developed R package for co-clustering of binary, contingency and continuous data. The core library is written in C++ and the \textbf{blockcluster} API acts as a bridge between the C++ core library and the R statistical computing environment. The package is based on the recently proposed latent block models [2], [3], [4] for simultaneous clustering of rows and columns. This tutorial is based on package version 1.01.
1 Introduction
Cluster analysis is an important tool in a variety of scientific areas such as pattern recognition, information retrieval, micro-arrays, data mining, and so forth. Although many clustering procedures, such as hierarchical clustering, k-means or self-organizing maps, aim to construct an optimal partition of objects or, sometimes, of variables, there are other methods, called block clustering methods, which consider the two sets simultaneously and organize the data into homogeneous blocks. Let $\mathbf{x}$ denote a $n \times d$ data matrix defined by $\mathbf{x} = \{(x_{ij}); i \in I \text{ and } j \in J\}$, where $I$ is a set of $n$ objects (rows, observations, cases, etc.) and $J$ is a set of $d$ variables (columns, attributes, etc.). The basic idea of these methods consists in making permutations of objects and variables in order to draw a correspondence structure on $I \times J$. For illustration, consider Figure 1, where a binary data set defined on a set of $n = 10$ individuals $I = \{A, B, C, D, E, F, G, H, I, J\}$ and a set of $d = 7$ binary variables $J = \{1, 2, 3, 4, 5, 6, 7\}$ is re-organized into a set of $3 \times 3$ clusters by permuting the rows and columns.
Owing to the ever-increasing importance of co-clustering in a variety of scientific areas, we have recently developed an R package for this purpose called blockcluster. The R package blockcluster allows the user to estimate the parameters of the co-clustering models [1] for binary, contingency and continuous data. This package is unique from the point of view of the generative models it implements (latent block models) and the algorithms used (BEM, BCEM); apart from that, special attention has been given to designing the library to handle very large data sets in reasonable time. The R package is already available on CRAN at http://cran.r-project.org/web/packages/blockcluster/index.html. In this tutorial, I will elaborate on the usage of our R package blockcluster.
2 Package details
This package contains two main functions, namely cocluster and cocluststrategy, to perform co-clustering and to set various input parameters respectively. The package also contains two helper functions, namely summary and plot, to get a summary of the estimated model parameters and to plot the results respectively. I will first give details of the two main functions. The helper functions are self-explanatory and I will use them in various examples for better understanding.
2.1 cocluster function
This is the main function of blockcluster package that performs Co-clustering for binary, contingency and continuous data. The prototype of the function is as follows:
cocluster(data, datatype, model, nbcocluster, strategy = cocluststrategy())
The various inputs of cocluster functions are as follows:
- **data**: Input data as a matrix (or a list containing the data matrix, a numeric vector of row effects and a numeric vector of column effects, in case of contingency data with known row and column effects).
- **datatype**: This is the type of data, which can be "binary", "contingency" or "continuous".
- **model**: This is the name of the model. The various models that are available in the package are given in Table 1.
• **nbcocluster**: Integer vector specifying the number of row and column clusters respectively.
• **strategy**: This input can be used to control various input parameters. It can be created using the function `cocluststrategy` as explained in Section 2.2.
The only mandatory inputs to the function `cocluster` are `data`, `datatype` and `nbcocluster`. The default model for each data-type is the most general model, with free row and column proportions and unequal dispersion/variance for each block. Furthermore, the default set of input parameters works well in most cases; these are explained in further detail in Section 2.2.
<table>
<thead>
<tr>
<th>Model</th>
<th>Datatype</th>
<th>Proportions</th>
<th>Dispersion/Variance</th>
<th>Initialization</th>
</tr>
</thead>
<tbody>
<tr>
<td>pik_rhol_epsilonkl</td>
<td>binary</td>
<td>unequal</td>
<td>unequal</td>
<td>CEM</td>
</tr>
<tr>
<td>pik_rhol_2epsilon</td>
<td>binary</td>
<td>unequal</td>
<td>equal</td>
<td>CEM</td>
</tr>
<tr>
<td>pi_rhol_epsilonkl</td>
<td>binary</td>
<td>equal</td>
<td>unequal</td>
<td>CEM</td>
</tr>
<tr>
<td>pi_rhol_2epsilon</td>
<td>binary</td>
<td>equal</td>
<td>equal</td>
<td>CEM</td>
</tr>
<tr>
<td>pik_rhol_sigma2kl</td>
<td>continuous</td>
<td>unequal</td>
<td>unequal</td>
<td>CEM</td>
</tr>
<tr>
<td>pik_rhol_s2sigma</td>
<td>continuous</td>
<td>unequal</td>
<td>equal</td>
<td>CEM</td>
</tr>
<tr>
<td>pi_rhol_sigma2kl</td>
<td>continuous</td>
<td>equal</td>
<td>unequal</td>
<td>CEM</td>
</tr>
<tr>
<td>pi_rhol_s2sigma</td>
<td>continuous</td>
<td>equal</td>
<td>equal</td>
<td>CEM</td>
</tr>
<tr>
<td>pik_rhol_unknown</td>
<td>contingency</td>
<td>unequal</td>
<td>N.A</td>
<td>CEM</td>
</tr>
<tr>
<td>pi_rho_unknown</td>
<td>contingency</td>
<td>equal</td>
<td>N.A</td>
<td>CEM</td>
</tr>
<tr>
<td>pik_rho_known</td>
<td>contingency</td>
<td>unequal</td>
<td>N.A</td>
<td>Random</td>
</tr>
<tr>
<td>pi_rho_known</td>
<td>contingency</td>
<td>equal</td>
<td>N.A</td>
<td>Random</td>
</tr>
</tbody>
</table>
Table 1: Various models available in package `blockcluster`.
### 2.2 `cocluststrategy` function
In the package `blockcluster`, we have a function called `cocluststrategy` which can be used to set the values of various input parameters. In the following example, we call the function `cocluststrategy` without any arguments and then we called the overloaded function `summary` to see default values of various input parameters.
```r
R > defaultstrategy <- cocluststrategy()
R > summary(defaultstrategy)
```
******************************************************************
Algorithm: XEMStrategy
Initialization method(There is no default value):
Stopping Criteria: Parameter
Various Iterations
Number of global iterations while running initialization: 10
Number of iterations for internal E-step: 5
Number of EM iterations used during xem: 50
Number of EM iterations used during XEM: 500
Number of xem iterations: 5
Number of tries: 2
Various epsilons
**************
Tolerance value used while initialization: 0.01
Tolerance value for internal E-step: 0.01
Tolerance value used during xem: 1e-04
Tolerance value used during XEM: 1e-10
******************************************************************
One thing which is worth noting in the summary output (above) is that there is no default value for initialization method. It will be set automatically depending on the type of input model. To set these input parameters, we have to pass appropriate arguments to function `cocluststrategy` as shown in example below where I set `nbtry`, `nbxem` and `algo` parameters.
```r
R > newstrategy <- cocluststrategy(nbtry = 5, nbxem = 10, algo = "XCEMStrategy")
```
The `newstrategy` object can then be passed to function `cocluster` to perform Co-clustering using the newly set input parameters. The various input arguments for the function `cocluststrategy` are as follows:
- **algo**: The valid values for this parameter are "XEMStrategy" (default) and "XCEMStrategy". This parameter sets the algorithm/strategy used to run the model. The algorithms used are BEM (Block EM algorithm) for "XEMStrategy" and BCEM (Block classification EM algorithm) for "XCEMStrategy".
- **stopcriteria**: It specifies the stopping criterion. It can be based on either the relative change in parameter values (preferred) or the relative change in log-likelihood. Valid criterion values are "Parameter" and "Likelihood". The default criterion is "Parameter".
- **initmethod**: Method used to initialize model parameters. The valid values are "CEMInit", "FuzzyCEMInit" and "RandomInit". For now only one kind of initialization exists for every model currently available in the package, hence the default value for initialization is set according to the model.
- **nbinititerations**: Number of Global iterations used in initialization step. Default value is 10.
- **initepsilon**: Tolerance value used inside initialization. Default value is 1e-2.
- **nbiterations_int**: Number of iterations for internal E step. Default value is 5.
- **epsilon_int**: Tolerance value for relative change in Parameter/likelihood for internal E-step. Default value is 1e-2.
- **nbtry**: Number of tries (XEM steps). Default value is 2.
- **nbxem**: Number of xem steps. Default value is 5.
- **nbiterationsxem**: Number of EM iterations used during xem step. Default value is 50.
- **nbiterationsXEM**: Number of EM iterations used during the XEM step. Default value is 500.
- **epsilonxem**: Tolerance value used during xem step. Default value is 1e-4.
- **epsilonXEM**: Tolerance value used during XEM step. Default value is 1e-10.
To understand many of the above input parameters, we need to have some basic idea about the algorithms and the way they are run inside the package `blockcluster`, which is why a separate section (2.2.1) is dedicated to exactly that.
2.2.1 Understanding various input parameters
You might be wondering why there are so many types of iterations and tolerances inside the package. To get some basic understanding of the various input parameters, it is important to know a bit about the algorithms. I am not going to provide the full-fledged theory of these algorithms here, but shall give enough details to make the meaning of all the input parameters clear. If you go through the papers on latent block models, you will see that the algorithms are called the Block EM (BEM) algorithm and the Block CEM (BCEM) algorithm. From now on I will explain everything using BEM, but it applies in the same way to the BCEM algorithm. The BEM algorithm can be stated as follows in layman's terms.
1. Run EM algorithm on rows.
2. Run EM algorithm on columns.
3. Iterate between above two steps until convergence.
We use the following strategy to run the above algorithm.
1. Run the BEM algorithm 'nbxem' times (with high tolerance and a low number of iterations) and keep the best model parameters (based on likelihood) among these runs. We call this the 'xem' step.
2. Starting with the best model parameters, run the algorithm again, but this time with a low value of epsilon (low tolerance) and a high number of iterations. We call this the 'XEM' step.
3. Repeat the above two steps 'nbtry' times and keep the best model estimation.
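The control flow of this strategy can be sketched independently of the actual model fitting. In the sketch below, `run_bem` is a toy stand-in (not blockcluster's implementation) that merely returns a log-likelihood-like score which improves with more iterations and tighter tolerance; the point is to show how `nbtry`, `nbxem` and the two epsilon/iteration pairs interact:

```c
#include <stddef.h>

/* Toy stand-in for one BEM run: starts from score `init` and keeps
   improving until the gain falls below `epsilon` or `max_iter`
   iterations are used up. Higher result = better fit. */
static double run_bem(double init, double epsilon, int max_iter)
{
    double ll = init;
    for (int i = 0; i < max_iter; i++) {
        double next = ll + (1.0 - ll) * 0.5;   /* monotone toy improvement */
        if (next - ll < epsilon)
            break;
        ll = next;
    }
    return ll;
}

/* The nbtry/nbxem strategy: nbxem coarse 'xem' runs (loose epsilonxem,
   few iterations) keep the best start; one tight 'XEM' run refines it;
   the whole pair is repeated nbtry times and the best result wins. */
static double coclust_strategy(int nbtry, int nbxem,
                               double epsilonxem, int nbiterationsxem,
                               double epsilonXEM, int nbiterationsXEM)
{
    double best = -1e300;
    for (int t = 0; t < nbtry; t++) {
        double best_xem = -1e300;
        for (int x = 0; x < nbxem; x++) {      /* 'xem' step */
            double ll = run_bem(0.01 * (x + 1), epsilonxem, nbiterationsxem);
            if (ll > best_xem)
                best_xem = ll;
        }
        /* 'XEM' step: restart from the best coarse parameters */
        double ll = run_bem(best_xem, epsilonXEM, nbiterationsXEM);
        if (ll > best)
            best = ll;
    }
    return best;
}
```

Only the single 'XEM' run per try pays the cost of the tight tolerance, which is why increasing 'nbxem' is comparatively cheap.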
With this background, I will explain various input parameters.
- **nbxem, nbtry:** As explained above, these numbers represent the number of times we run the 'xem' step and the 'xem'+'XEM' steps respectively. The values of 'nbxem' and 'nbtry' need to be tuned intuitively, and they can have a substantial effect on the final results. A good way to set these values is to run co-clustering a few times and check whether the final log-likelihood is stable. If not, one may need to increase these values. In practice, it is better to increase 'nbxem', as it can lead to better (more stable) results without compromising the running time too much.
- **nbiterationsxem, nbiterationsXEM:** These are the numbers of iterations for the BEM algorithm, i.e., the number of times we run EM on rows and EM on columns. As the names suggest, they apply to the 'xem' and 'XEM' steps respectively.
- **nbiterations_int:** This is the number of iterations for EM algorithm on rows/columns.
- **epsilonxem, epsilonXEM:** These are tolerance values for BEM algorithm during 'xem' and 'XEM' step respectively.
- **epsilon_int:** This is the tolerance value for EM algorithm on rows/columns.
- **initepsilon, nbinititerations:** These are the tolerance value and number of iterations respectively used during initialization of model parameters.
2.3 Example using simulated Binary dataset
I have simulated a binary data-set with the parameters given in Table 2. The class mean and dispersion are represented by $a$ and $\epsilon$ respectively, whereas $\pi$ and $\rho$ represent the row and column proportions respectively. The data consist of 1000 rows (samples) and 100 columns (variables) with two clusters on rows and three clusters on columns. The following R commands show how to load the library, process the data and visualize/summarize the results using `blockcluster`.
\[
(a, \epsilon) =
\begin{array}{ccc}
(0,\ 0.1) & (0,\ 0.3) & (1,\ 0.1) \\
(1,\ 0.3) & (1,\ 0.2) & (0,\ 0.1) \\
\end{array}
\]
\[
\pi = (0.6,\ 0.4), \qquad \rho = (0.3,\ 0.3,\ 0.4)
\]
Table 2: Parameters for simulation of binary data.

Figure 2: Original and co-clustered binary data (a), and distributions for each block along with various mixture densities (b).
```r
R > library("blockcluster")
R > data("binarydata")
R > out <- cocluster(binarydata, datatype = "binary", nbcocluster = c(2, 3))
R > summary(out)
```
**Model Family : Bernoulli Latent block model**
**Model Name : pik_rhol_epsilonkl**
Model Parameters..
Class Mean:
\[
\begin{array}{cccc}
 & [,1] & [,2] & [,3] \\
{[1,]} & 0 & 0 & 1 \\
{[2,]} & 0 & 1 & 0 \\
\end{array}
\]
Class Dispersion:
\[
\begin{array}{cccc}
 & [,1] & [,2] & [,3] \\
{[1,]} & 0.09798013 & 0.3022391 & 0.1011803 \\
\end{array}
\]
The following R command is used to plot the original and co-clustered data (Figure 2(a)) with aspect ratio set to false (it is true by default). When `asp` is set to false, R graphics will optimize the output figure for the display, hence the original aspect ratio may not be conserved.
`R > plot(out, asp = 0)`
To Plot various block distributions (Figure 2(b)), the following R command is used with `type` argument of overloaded `plot` function set to 'distribution' (type is 'cocluster' by default which plots the original and Co-clustered data as shown in (Figure 2(a))).
`R > plot(out, type = 'distribution')`
### 3 Examples with real datasets
To arouse your curiosity, I will demonstrate the applicability of the package on real data. In the following sections, I give two examples: one for image segmentation and the other for document (co-)clustering.
#### 3.1 Image segmentation
Automatic image segmentation is an important technique and has numerous applications, especially in the field of medical imaging. Here I present an interesting application of co-clustering (as a pre-processing step) for segmenting object(s) in an image. I assume that the object pixels follow a Gaussian distribution. Hence I run the `blockcluster` package with the Gaussian family model `pik_rhol_sigma2kl` on the image shown in Figure 3. It can be clearly seen that the image is nicely segmented into snake and insect in two different blocks.

3.2 Document clustering
Document clustering is yet another data mining technique where co-clustering seems to be very useful. Here we run our package on one of the datasets used in [1], which is publicly available at [ftp://ftp.cs.cornell.edu/pub/smart](ftp://ftp.cs.cornell.edu/pub/smart). We mix Medline (1033 medical abstracts) and Cranfield (1398 aeronautical abstracts), making a total of 2431 documents. Furthermore, we use all the words (excluding stop words) as features, making a total of 9275 unique words. The data matrix consists of words on the rows and documents on the columns, with each entry giving the term frequency, that is, the number of occurrences of the corresponding word in the corresponding document. I assume that the term frequency follows a Poisson distribution. Hence we can apply the model \( \texttt{pik\_rhol\_unknown} \), available in our package for contingency (Poisson family) datasets with unknown row and column effects. Table 3 shows the confusion matrix and compares our results with the classical bipartite spectral graph partitioning algorithm of [1]; we obtain 100 percent correct classification. Figure 4 depicts the \(2 \times 2\) checkerboard pattern in the data matrix, confirming the more frequent occurrence of a particular set of words in one document class and vice versa. Please note that the data matrix images are extremely sparse (data points almost invisible) and have been processed using simple image processing tools for visualization purposes only.

<table>
<thead>
<tr>
<th></th>
<th>Medline</th>
<th>Cranfield</th>
</tr>
</thead>
<tbody>
<tr>
<td>Medline</td>
<td>1026</td>
<td>0</td>
</tr>
<tr>
<td>Cranfield</td>
<td>7</td>
<td>1400</td>
</tr>
</tbody>
</table>
(a)
<table>
<thead>
<tr>
<th></th>
<th>Medline</th>
<th>Cranfield</th>
</tr>
</thead>
<tbody>
<tr>
<td>Medline</td>
<td>1033</td>
<td>0</td>
</tr>
<tr>
<td>Cranfield</td>
<td>0</td>
<td>1398</td>
</tr>
</tbody>
</table>
(b)
Table 3: Confusion matrix: results reported in [1] (a), and results using blockcluster (b). The difference in the number of Cranfield documents is because we made use of the already available data extracted from the documents, which contains two fewer documents.

Figure 4: Original data matrix with words on rows and documents on columns (a), and checkerboard pattern in words by documents matrix obtained after performing co-clustering (b).
4 Remarks
In this tutorial, I have given a brief introduction to the blockcluster R package. I have demonstrated the use of the package on a binary data-set, but the package can be used in a similar fashion for other types of data. Please note that this tutorial is based on version 1.01. For the future release, we have already included new functionalities in the package, which I shall explain in the next tutorial once we release the new package on CRAN. In the meantime, if you have any questions or suggestions, do not hesitate to contact me personally or to post them on the public forum at https://gforge.inria.fr/forum/forum.php?forum_id=11190&group_id=3679.
References
XEP-0470: Pubsub Attachments
Jérôme Poisson
mailto:goffi@goffi.org
xmpp:goffi@jabber.fr
2022-08-25
Version 0.2.0
<table>
<thead>
<tr>
<th>Status</th>
<th>Type</th>
<th>Short Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Experimental</td>
<td>Standards Track</td>
<td>pubsub-attachments</td>
</tr>
</tbody>
</table>
This specification provides a way to attach elements to a pubsub item.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the “Specification”), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
1 Introduction
It is nowadays common to attach information to messages or other items in various social networks: famous examples are the "like" (or "favourite") and "reactions" features. While there are ways to attach information to <message/> stanzas with extensions such as Message Fastening (XEP-0422) ¹ or Message Reactions (XEP-0444) ², this is not the case for pubsub items.
Some software uses comments as a work-around for Microblogging Over XMPP (XEP-0277) ³, posting a single "□" character to signal a "like". This has the advantage of working out of the box even without any specific implementation, but it has several disadvantages:
- it only works with Microblogging Over XMPP (XEP-0277) ⁴; it is not possible to like other kinds of items;
- it pollutes comments with information which should be kept separate;
- it doesn’t handle uniqueness: a "like" should be doable only once per entity, but by using comments one can like thousands of times, and it is the receiving client which must ignore duplicates;
- it doesn’t scale: if thousands of people like a blog post, all comments must be retrieved and counted;
- it mixes metadata with content intended for human users;
- this behaviour is found in the wild, but is not standardized anywhere.
This XEP proposes an alternative and generic solution, which can work with any kind of pubsub item.
2 Requirements
The design goals of this XEP are:
- work with any kind of pubsub item, not only Microblogging Over XMPP (XEP-0277) ⁵
- handle uniqueness of attachment per JID
- have an extensible mechanism for future use
4 Use Cases
4.1 Basic Usage
Romeo wants to indicate to Juliet that he has noticed her post about the balcony restoration. This Microblogging Over XMPP (XEP-0277) item has been published on Juliet's PEP service at juliet@capulet.lit on the node 'urn:xmpp:microblog:0', and the item has the ID 'balcony-restoration-afd1'.
To do so he publishes the following item to the suitable attachment node:
Listing 1: Romeo Indicates To Juliet That He Has Noticed Her Publication
```xml
<iq from='romeo@montague.lit/123'
id='attachment_1'
to='juliet@capulet.lit'
type='set'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='urn:xmpp:pubsub-attachments:1/xmpp:juliet@capulet.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=balcony-restoration-afd1'>
<item id='romeo@montague.lit'>
<attachments>
<noticed timestamp="2022-07-11T12:07:24Z"/>
</attachments>
</item>
</publish>
</pubsub>
</iq>
```
A few seconds later, Romeo reacts with some emojis. He does so with the following item, and his client takes care of keeping the `<noticed>` element from above:
Listing 2: Romeo Adds Reactions To Juliet's Publication
```xml
<iq from='romeo@montague.lit/123'
id='attachment_2'
to='juliet@capulet.lit'
type='set'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='urn:xmpp:pubsub-attachments:1/xmpp:juliet@capulet.lit?node=urn%3Axmpp%3Amicroblog%3A0;item=balcony-restoration-afd1'>
<item id='romeo@montague.lit'>
<attachments>
<noticed timestamp="2022-07-11T12:07:24Z"/>
<reactions timestamp="2022-07-11T12:07:48Z">
<reaction>
❤
</reaction>
<reaction>
❤
</reaction>
</reactions>
</attachments>
</item>
</publish>
</pubsub>
</iq>
```
4.1.1 Explanations
To attach metadata to a pubsub item, an "attachment node" MAY be created, either by the publisher of the target item, or by the pubsub service if it is fully-compliant with this XEP (see the Full-Compliance section below).
This node name is generated by merging the following strings:
- the namespace 'urn:xmpp:pubsub-attachments:1'
- a slash '/'
- the XMPP URI of the target item as explained at XEP-0060 § Pubsub URIs
Thus, in the example above, the node name to use for the item "balcony-restoration-afd1" of the node "urn:xmpp:microblog:0" located at PEP service "juliet@capulet.lit" is: "urn:xmpp:pubsub-attachments:1/xmpp:juliet@capulet.lit?;node=urn%3Axmpp%3Amicroblog%3A0;item=balcony-restoration-afd1"
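The string-merging rule above can be sketched in a few lines. This is an illustrative helper, not normative: the function name is an assumption, and the URI query form follows the one used in Listing 1.

```python
from urllib.parse import quote

def attachment_node(service_jid, target_node, item_id):
    # Attachment node name: the namespace, a slash, then the XMPP URI
    # of the target item (XEP-0060 "Pubsub URIs"), with the node and
    # item values percent-encoded.
    uri = "xmpp:{}?node={};item={}".format(
        service_jid, quote(target_node, safe=""), quote(item_id, safe=""))
    return "urn:xmpp:pubsub-attachments:1/" + uri

print(attachment_node("juliet@capulet.lit",
                      "urn:xmpp:microblog:0",
                      "balcony-restoration-afd1"))
```

Running this prints the same node name as the one used in Listing 1.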
This node SHOULD have the same access model as the target node.
To publish to this node, an entity MUST use its own bare JID as the ID of the item. This both preserves the uniqueness of the item per JID and makes retrieving the attachments of a particular entity easy.
An entity willing to publish an attachment tries to publish directly to the above-mentioned node. If the node doesn’t exist (and is not created on the fly by the pubsub service, see below), the pubsub service SHOULD answer with an <item-not-found/> error as explained in XEP-0060 §7.1.3.3 Node Does Not Exist. If the node doesn’t exist, it is not possible to attach metadata to the target item; the entity willing to publish the attachment MUST NOT try to create the node itself (that would result in wrong ownership of the node).
An attachment payload is built with a top-level <attachments> element which has zero, one or more child elements. This specification defines two child elements, <noticed> and <reactions>, but future XEPs may add their own elements qualified by their own namespaces to extend the functionality. Each child element MAY have an optional 'timestamp' attribute indicating when the element has been attached. The value of this attribute is a DateTime as specified in XMPP Date and Time Profiles (XEP-0082) ⁷.
Because there is one item per JID, to update, add or remove attachments an entity simply re-publishes an item on the same node with its bare JID as ID. It is the responsibility of the publishing entity to republish all previously existing attachments (except those which need to be removed). If an XMPP client doesn’t know a specific attachment, it MUST keep it and republish it when updating attachments.
All attachments of a specific JID can be deleted at once by retracting the item as specified at XEP-0060 §7.2 Delete an Item from a Node. A client SHOULD NOT retract an attachment item if there are attachments it doesn’t know; instead it SHOULD publish a new attachment item without the attachments which must be removed, and with the unknown attachments left in place.
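The republish-and-keep-unknown rule can be illustrated with a short sketch. The helper name is hypothetical, and the `<future>` element stands in for an attachment this client does not recognize:

```python
import xml.etree.ElementTree as ET

KNOWN = {"noticed", "reactions"}  # local names this client understands

def republish(previous_xml, new_elements):
    # Keep every attachment element we don't understand; replace the
    # ones we manage with their new versions.
    merged = ET.Element("attachments")
    for child in ET.fromstring(previous_xml):
        local = child.tag.rsplit("}", 1)[-1]  # strip the '{ns}' prefix
        if local not in KNOWN:
            merged.append(child)  # unknown: MUST be kept and republished
    merged.extend(new_elements)
    return merged

prev = ("<attachments><noticed timestamp='2022-07-11T12:07:24Z'/>"
        "<future xmlns='urn:example:future'/></attachments>")
new = [ET.Element("noticed"),
       ET.Element("reactions", {"timestamp": "2022-07-11T12:07:48Z"})]
out = republish(prev, new)
print(sorted(c.tag.rsplit("}", 1)[-1] for c in out))
```

The resulting item carries the updated `<noticed>` and `<reactions>` elements plus the untouched unknown element, as the rule requires.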
### 4.2 Full-Compliance
The previous section describes the basic usage of pubsub attachments, which works with a generic pubsub service. However, even though it works out of the box, it relies on the goodwill of entities: an attacker or simply a buggy implementation could publish an item with a wrong ID or somebody else's bare JID, an item publisher's client could miss the creation of the attachment node or give it a wrong access model, the access models of the attachment node and the target node can become out of sync, etc.
To avoid these flaws, a pubsub service SHOULD implement the features described in this and the following sections. If a pubsub service does so, it is said to be fully-compliant with pubsub attachments, and then and only then can it advertise the feature with Service Discovery (XEP-0030).
To be fully compliant, a PEP or pubsub service MUST implement the following features, which are explained in detail below:
- auto-create the attachment node, and keep its publish_model and access_model synchronized
- forbid manual creation of attachment or summary nodes
- check the validity of items published to the attachment node, notably the item ID
- create and maintain a summary node
- handle <noticed> and <reactions> attachments
### 4.3 Automatic Node Creation
When an attachments item is published to a fully-compliant pubsub service and the attachment node doesn’t exist, the service MUST automatically create the node as explained at XEP-0060 §7.1.4 Automatic Node Creation, except that instead of applying the default configuration, it MUST apply the same access_model and publish_model as for the target node. The service MAY also copy other configuration options if they differ from the default; it is up to the implementation to decide which other options are relevant to copy.
If the <iq/> stanza of the publishing client includes publishing options as explained in XEP-0060 § 7.1.5 Publishing Options, they are ignored.
If the target node configuration is later updated and either access_model or publish_model is modified, the fully-compliant service MUST also update the attachment node's publish_model and access_model accordingly.
### 4.4 Manual Node Creation Rejection
If any user, including the owner of the target node or the publisher of the target item, tries to manually create an attachment node or a summary node, a fully-compliant service MUST reject it by returning a <not-allowed/> error.
A client can check whether manual node creation is necessary by using Service Discovery (XEP-0030): the presence of the 'urn:xmpp:pubsub-attachments:1' feature in disco#info means that the service is fully-compliant and that manual node creation MUST NOT be done.
### 4.5 Checking Validity of Attachments Items
When an entity publishes an item with attachments to an attachment node, a fully-compliant service MUST check that the item is valid by:
1. Verifying that the item ID is equal to the bare JID of the item publisher
2. Verifying that the root element of the payload is an <attachments> element qualified by the 'urn:xmpp:pubsub-attachments:1' namespace
If any of these checks fails, the service MUST reject the item by returning a <bad-request/> error.
In addition to these two mandatory checks, a pubsub service MAY add implementation-specific checks.
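The two mandatory checks can be sketched as follows. This is an illustrative function, not normative; the namespace comparison uses ElementTree's Clark notation ('{namespace}localname') as one possible representation:

```python
def validate_attachment_item(item_id, publisher_jid, payload_root_tag):
    # Check 1: the item ID must equal the publisher's bare JID.
    bare = publisher_jid.split("/", 1)[0]
    if item_id != bare:
        return "bad-request"
    # Check 2: the payload root must be <attachments> qualified by the
    # 'urn:xmpp:pubsub-attachments:1' namespace.
    if payload_root_tag != "{urn:xmpp:pubsub-attachments:1}attachments":
        return "bad-request"
    return "ok"

print(validate_attachment_item(
    "romeo@montague.lit", "romeo@montague.lit/123",
    "{urn:xmpp:pubsub-attachments:1}attachments"))   # ok
print(validate_attachment_item(
    "mallory@evil.lit", "romeo@montague.lit/123",
    "{urn:xmpp:pubsub-attachments:1}attachments"))   # bad-request
```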
### 4.6 Summary Node
As soon as a first attachment is received, a fully-compliant pubsub service MUST create a ”summary node”. A summary node is a node maintained by the service which groups all attachments of a kind, allowing clients to get a good overview of the data without needing to individually retrieve all items of the attachment nodes of all target items.
A summary node has the same access_model as the attachment node, but nobody is allowed to publish directly to it. The summary node is linked to the target node, and its name is made by joining the following elements:
1. the 'urn:xmpp:pubsub-attachments:summary:1' prefix
2. a slash "/"
3. the name of the target node
Thus in the initial example, for the blog of Juliet, the summary node name would be 'urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:microblog:0' and it would be located at the PEP service juliet@capulet.lit.
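Unlike the attachment node name, the summary node name involves no encoding; it is a plain concatenation. A one-line sketch (the function name is illustrative):

```python
def summary_node(target_node):
    # Prefix + slash + target node name, verbatim (no percent-encoding).
    return "urn:xmpp:pubsub-attachments:summary:1/" + target_node

print(summary_node("urn:xmpp:microblog:0"))
# urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:microblog:0
```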
For each item of the target node which has attachments, the summary node MUST contain an item with the same ID. This item contains a <summary> element qualified by the namespace 'urn:xmpp:pubsub-attachments:summary:1'. The <summary> element has child elements whose names match the attachment element names, carrying summary data which depends on the attachment. This specification explains below how to summarize <noticed> and <reactions> attachments; it is up to other XEPs specifying other features to explain how to summarize their own attachments. If a service doesn’t know how to summarize an attachment, it SHOULD ignore it.
If a target item has no attachments at all, or if all attachments have been removed, the node MAY either return an <item-not-found/> error or an empty <summary> element, whichever is simpler for the service implementation.
Summary node subscriptions work as for normal pubsub nodes: when a new attachment is published, resulting in the corresponding summary item being updated, an event with the new item is sent to every subscriber.
Listing 3: Romeo Checks the Summary of Attachments of Juliet's Blog
```xml
<iq from='romeo@montague.lit/123'
id='attachment_3'
to='juliet@capulet.lit'
type='get'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<items node='urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:microblog:0'/>
</pubsub>
</iq>
```
Listing 4: Fully-Compliant Pubsub Service Returns Summary Items
```xml
<iq from='juliet@capulet.lit'
id='attachment_3'
to='romeo@montague.lit/123'
type='result'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<items node='urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:microblog:0'>
<item id='balcony-restoration-afd1'>
<summary xmlns='urn:xmpp:pubsub-attachments:summary:1'>
<noticed count="5"/>
<reactions>
<reaction count="2"></reaction>
<reaction></reaction>
<reaction></reaction>
<reaction></reaction>
</reactions>
</summary>
</item>
</items>
</pubsub>
</iq>
```
4.7 Noticed Attachment
4.7.1 Foreword: "noticed" instead of "like" or "favourite"
The <noticed> feature described here is similar to what is most often known as "like", and sometimes "favourite". It has been decided to use the word "noticed" to highlight a different spirit from its ancestors.
The "like" feature was invented in the mid-2000s on commercial social networks. Over the years, this functionality has proven to be a borderline toxic problem. Among the known issues, we can mention:
- It may cause addictive behaviour, with people feeling the need to get more "likes".
- Conversely, the lack of likes on a publication may lead to feelings of depression.
- It is used as a marketing tool to spy on users' tastes and interests. It can even be used to discover political orientation, sexual preferences or religious beliefs, which can be dangerous in some countries/locations.
- It tends to diminish the quality of content, by favoring metrics over the content itself.
- In some social networks, more likes mean more visibility and a better image, resulting in some people/organizations/companies buying fake likes.
- The word "like" is ill-suited to bad news or dramatic events, when someone simply wants to show their support or empathy.
For all these reasons, it has been decided to use the word "noticed", which better reflects the way it is used by some people (notably observed on some social networks built on top of the ActivityPub protocol): it is then used as a way to say "I have seen" or "I've taken that into account".
However, for compatibility reasons with other protocols (especially to have the tools to build gateways), the summary feature of the <noticed> attachment does count the number of elements. After reading this note, it is up to the various implementations to decide whether to show this number prominently, inconspicuously, or not at all.
4.7.2 Attachment Overview
The <noticed> element is attached by an entity to say that it has seen or taken something into account. On the client UI side, it is often published when the user pushes a simple button or icon, and the attachment is often made visible with the same icon displayed on the noticed item. If an icon is used, it is recommended to use something as neutral as possible; a heart icon SHOULD NOT be used, to avoid misunderstanding between various implementations (also see the foreword above). As for any attachment, an optional "timestamp" attribute MAY be set with the DateTime of the latest publication as specified in XMPP Date and Time Profiles (XEP-0082)\(^\text{10}\).
4.7.3 Summarizing
To summarize <noticed> attachments, a fully-compliant pubsub service simply sums up the total number of <noticed> elements found for the item and puts this number in the "count" attribute of the summary <noticed> element. In the example below, an item has been noticed 25 times.
Listing 5: Example of Noticed Attachment Summary
```xml
<iq from='pubsub.example.net'
id='attachment_4'
to='juliet@capulet.lit/123'
type='result'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<items node='urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:example:0'>
<item id='ball-event-able'>
<summary xmlns='urn:xmpp:pubsub-attachments:summary:1'>
<noticed count="25"/>
</summary>
</item>
</items>
</pubsub>
</iq>
```
4.8 Reactions Attachment
4.8.1 Attachment Overview
The <reactions> element lets an entity attach various emojis to an item. Each emoji is put as the content of a single <reaction> element, and a client SHOULD ensure that any <reaction> element appears at most once. As for any attachment, a "timestamp" attribute MAY be set on the root <reactions> element with the DateTime of the latest publication. The protocol is similar to Message Reactions (XEP-0444)\(^\text{11}\), which is used for <message/> stanzas.
4.8.2 Summarizing
To summarize <reactions> attachments, a fully-compliant pubsub service counts how many times each emoji is attached, ignoring duplicates from the same JID if any. If an emoji appears multiple times (from distinct bare JIDs), a 'count' attribute MUST be added to the <reaction> element with the number of times this reaction appears across all reactions as its value (if the same reaction appears several times for a single bare JID, it MUST be counted only once).
In the following example, all emojis are attached only once to the item, except the woman dancing one, which appears 22 times, and the ballet shoes one, which appears twice.
Listing 6: Example of reactions Attachment Summary
```xml
<iq from='pubsub.example.net'
id='attachment_5'
to='juliet@capulet.lit/123'
type='result'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<items node='urn:xmpp:pubsub-attachments:summary:1/urn:xmpp:example:0'>
<item id='ball-event-able'>
<summary xmlns='urn:xmpp:pubsub-attachments:summary:1'>
<reactions>
<reaction count="22"> </reaction>
<reaction count="2"> </reaction>
<reaction> </reaction>
<reaction> </reaction>
<reaction> </reaction>
</reactions>
</summary>
</item>
</items>
</pubsub>
</iq>
```
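The counting rule above — one vote per bare JID per emoji — can be sketched like this. The input data is hypothetical and the helper name is an assumption:

```python
from collections import Counter

def summarize_reactions(reactions_by_jid):
    # reactions_by_jid maps each publisher's bare JID to the emojis in
    # its <reactions> attachment. Duplicates within one JID count once.
    totals = Counter()
    for emojis in reactions_by_jid.values():
        totals.update(set(emojis))   # per-JID deduplication first
    return dict(totals)

votes = {
    "romeo@montague.lit":    ["💃", "🩰", "🩰"],  # duplicate: counts once
    "mercutio@montague.lit": ["💃"],
    "benvolio@montague.lit": ["💃", "🎭"],
}
print(summarize_reactions(votes))
```

The resulting counts would fill the 'count' attributes of the summary <reaction> elements.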
5 Business Rules
- Similarly to "like" in commercial software, the "noticed" attachment can be used to
analyse user’s tastes, political view, religious beliefs, sexual orientation, etc. It is recom-
mended that implementers post a prominent notice warning users of potential abuses.
- Emoji pictures may differ widely on various platforms where they are displayed. This
has already led to misunderstanding of reactions, as a slightly different picture can be
interpreted in a completely different way from what the reactions author meant. Here
again, a prominent notice in implementations warning user is recommended.
- As "reactions" attachment is similar to Message Reactions (XEP-0444)\(^\text{12}\) which is used
for <message/> stanza, non <message/> related business rules from there apply for this
attachment too. Notably: A <reaction> element SHOULD only contain Unicode codepoints
that can be displayed as a single emoji, as specified in the latest revision of the Unicode Technical
Standard #51\(^\text{13}\). Receiving entities MAY ignore <reaction> elements that do not comply with this
---
\(^\text{13}\)Unicode Technical Standard #51 <http://www.unicode.org/reports/tr51/>.
8 IANA CONSIDERATIONS
specification.
6 Discovering Support
If and only if a PEP or pubsub service is fully-compliant with the "Pubsub Attachments" protocol (as explained in the Full-Compliance section), it MUST advertise that fact by including the "urn:xmpp:pubsub-attachments:1" discovery feature in response to a Service Discovery (XEP-0030) ¹⁴ information request:
Listing 7: Service Discovery Information Request
```xml
<iq from='example.org'
id='disco1'
to='example.com'
type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Listing 8: Service Discovery Information Response
```xml
<iq from='example.com'
id='disco1'
to='example.org'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>...
<feature var='urn:xmpp:pubsub-attachments:1'/>...
</query>
</iq>
```
7 Security Considerations
TODO
8 IANA Considerations
TODO
9 XMPP Registrar Considerations
TODO
10 XML Schema
TODO
Ontology Re-Engineering: A Case Study from the Automotive Industry
Nestor Rychtyckyj and Venkatesh Raman
Ford Motor Company
Dearborn, MI, USA
Baskaran Sankaranarayanan,
P Sreenivasa Kumar, Deepak Khemani
Indian Institute of Technology Madras
Chennai, TN, India
Abstract
For over twenty five years Ford has been utilizing an AI-based system to manage process planning for vehicle assembly at our assembly plants around the world. The scope of the AI system, known originally as the Direct Labor Management System and now as the Global Study Process Allocation System (GSPAS), has increased over the years to include additional functionality on Ergonomics and Powertrain Assembly (Engines and Transmission plants). The knowledge about Ford’s manufacturing processes is contained in an ontology originally developed using the KL-ONE representation language and methodology. To preserve the viability of the GSPAS ontology and to make it easily usable for other applications within Ford, we needed to re-engineer and convert the KL-ONE ontology into a semantic web OWL/RDF format. In this paper, we will discuss the process by which we re-engineered the existing GSPAS KL-ONE ontology and deployed Semantic Web technology in our application.
1 Introduction
The Direct Labor Management System (DLMS) (Rychtyckyj 1999) was initially developed and deployed in Ford’s North American assembly plants back in the early 1990s. It was recognized that an ontology and a reasoner were required to represent the complex knowledge in the manufacturing process. This was done by creating an implementation of the KL-ONE language using the LISP programming language and developing a classifier that could reason with the ontology. This implementation turned out to be extremely successful and became the production version as the system was expanded to assembly plants first in Europe and then the rest of the world. Throughout this the KL-ONE architecture remained in place as the ontology was expanded and maintained through thousands of updates.
As the Semantic Web architecture and standards were developed, it became obvious that the GSPAS KL-ONE ontology would be much more usable and of better value to Ford if it could be rewritten into OWL/RDF. A semantic web ontology would be much easier to maintain and could be extended and utilized for other applications in the company. The main issue was in terms of time and resources: GSPAS was a production system with high value to the business customers and it was impossible to spare the people to redo the ontology and keep the existing system in production. An alternative solution was needed and we found it by partnering with the Indian Institute of Technology Madras (IITM) in Chennai, India. We elected to partner with IITM for several reasons. The university has an excellent reputation with a strong background in Artificial Intelligence. In addition, Ford already has significant operations in Chennai and we wanted to develop a strong relationship with the university.
The result of this project was very successful. The IITM team delivered a re-engineered OWL/RDF ontology that contained the knowledge in the existing KL-ONE ontology. The Ford team validated and updated the ontology to meet Ford’s requirements and has deployed the lexical ontology into the GSPAS application. The rest of the paper is organized as follows: Section-2 will describe the structure and usage of the existing KL-ONE ontology. Section-3 describes the conversion approach and the conversion process, while Section-4 describes how the ontology was validated and then deployed into the GSPAS application. In this paper, we will refer to the GSPAS KL-ONE ontology as the GSPAS ontology or KL-ONE ontology, and refer to the new GSPAS OWL ontology as the new ontology or OWL ontology.
2 The existing KL-ONE ontology
We adapted the KL-ONE knowledge representation system during our initial development of DLMS. There were no KL-ONE tools or editors available so we built both a KL-ONE editor as well as the code for classification and reasoning (Rychtyckyj 1994). The Knowledge Base Update (KBU) module was a graphical user interface that allowed us to maintain the KL-ONE knowledge base and also performed error checking as part of the update process.
The KL-ONE knowledge representation system (Brachman and Schmolze 1985) was first developed in the late 1970’s. KL-ONE was selected for use on the DLMS project
because of its adaptability as well as the power of the KL-ONE classification algorithm.
The KL-ONE knowledge base as used in DLMS can be described as a network of concepts with the general concepts being closer to the root of the tree and the more specific concepts being the leaves of the tree. A concept in a KL-ONE knowledge base inherits attributes from the nodes that subsume it. The power of the KL-ONE system lies in the classification scheme. The system will place a new concept into its appropriate place in the taxonomy by utilizing the subsumption relation on the concept’s attributes. A detailed description of the KL-ONE classification scheme can be found in (Schmolze and Lipkis 1983).
The existing KL-ONE ontology proved to be very robust and flexible as we made hundreds of changes to it on an annual basis. Both the business and the technology changed dramatically, but we managed to keep the system fully functional as its scope increased dramatically. However, it also became obvious that the KL-ONE framework was limiting the usefulness of the GSPAS ontology. It was difficult to extract and share knowledge with other applications because custom code was needed. The graphical user interface was rewritten several times as the application was migrated to new platforms and maintaining it was time-consuming. In the meantime semantic web technology had matured to a point where it was certainly feasible to move into this space.
3 Re-Engineering KL-ONE into OWL
The goal of this project is not only ontology translation but also redesign and restructuring, where the scope is limited to GSPAS and OWL frameworks. The GSPAS to OWL translation follows the 4-layered approach (with lexical, syntactic, semantic and pragmatic layers) from (Corcho and Gómez-Pérez 2005; Euzenat 2001). The lexical and syntactic layers, respectively, deal with character-set and KR-language syntax translation. The semantic and pragmatic layers, respectively, deal with interpretation and choice-of-modeling.
Our approach to re-engineering (redesign and translation) is shown in Fig. 1. We follow a spiral development model and make several iterations through the various phases. The Framework-Mapping phase covers the semantic and pragmatic aspects of ontology translation. The lexical and syntactic transformations are implemented in the translator; we do not present the details due to space limitations. The remainder of this section describes our re-engineering approach.
### 3.1 Study Phase
Here, the goal is to understand the GSPAS and OWL (Bechhofer et al. 2004) frameworks, their similarities and differences, and understand the use-cases, design and organization of the GSPAS ontology, and identify areas for improvement.
To accomplish this goal, the IITM team studied the GSPAS, KL-ONE, DL and OWL frameworks, and with the help of the Ford team analyzed the GSPAS ontology. Then the IITM team developed a document that presented their understanding of (i) the KR frameworks, (ii) a potential mapping between GSPAS and OWL, (iii) the design, organization and use-cases of GSPAS ontology, and (iv) a high-level approach to GSPAS ontology re-engineering.
The Ford team then reviewed the understanding-document and worked with the IITM team to validate their understanding of the ontology, address open questions, and fill in the blanks where needed. There was a significant amount of “obsolete” knowledge in the ontology that was no longer needed; ontology “cleanup” was never a high priority due to limited time and resources, so concepts like carburetors or tape decks still exist in the ontology.
### 3.2 Framework Mapping
Here we aim to establish a correspondence between GSPAS (a subset of KL-ONE) and OWL frameworks. So we study and compare the three elements of the frameworks, namely, vocabulary, representation and reasoning.
**Vocabulary:** GSPAS, KL-ONE, DL and OWL (though related) use different names to refer to a given idea or conceptualization. We document the various vocabularies and their correspondences in Table-1, which also shows the GSPAS features that are (un)supported in other frameworks.
**Representation:** we study the KR primitives in GSPAS and determine how GSPAS ontology can be losslessly encoded in OWL. GSPAS implements only a subset of the KL-ONE framework. For each GSPAS KR primitive we find a representation in OWL, such that the subsumption relation is preserved after translation. Table-2 shows the KR primitives and their OWL translation. Here, a primitive concept is represented as a subclass axiom, a defined concept as an equivalence axiom, a classifiable attribute as an object property, a simple attribute as an annotation property, and a value restriction as an existential restriction.
It is known that structural subsumption is sound but incomplete with respect to logical subsumption (Baader et al. 2003), i.e., for a knowledge base, logical subsumption will find all inferences that structural subsumption can find and possibly more; let us denote this as property $P_1$.
Now, the profile of GSPAS (a subset of KL-ONE) is:
$$A \sqsubseteq C; \quad A \equiv C; \quad \text{axioms (1a)}$$
$$C \rightarrow A \mid \forall R.A \sqcap \exists R \mid C_1 \sqcap C_2; \quad \text{constructors (1b)}$$
The profile of the new (translated) ontology is:
$$A \sqsubseteq C; \quad A \equiv C; \quad R \sqsubseteq S; \quad \text{axioms (2a)}$$
$$\text{domain}(R) \sqsubseteq C; \quad \text{range}(R) \sqsubseteq C; \quad \text{axioms (2b)}$$
$$C \rightarrow A \mid \exists R.A \mid C_1 \sqcap C_2; \quad \text{constructors (2c)}$$
where, $A$ is concept name; $C, C_1, C_2$ are concept expressions; $R, S$ are role names.
Observe that models of Eqn-1, when value restriction is translated into existential restriction, are models of Eqn-2. From property $P_1$ and $(\forall R.A \sqcap \exists R \sqsubseteq \exists R.A)$, we can conclude that the subsumptions in GSPAS ontology will be preserved in the new ontology. Furthermore, Eqn-2 forms a sub-language of DL called $\mathcal{EL}^{++}$ (Baader, Brandt, and Lutz 2008; Motik et al. 2012) which runs in polynomial time for common reasoning tasks. Thereby, the new ontology stays well within the OWL-DL subset.
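To see the inclusion used above in a concrete instance, consider a single GSPAS definition and its translated counterpart (the concept and role names below are illustrative, not from the actual ontology):

```latex
% GSPAS-style definition (Eqn-1 profile): value restriction plus existence
\mathit{Bolt} \equiv \mathit{Part} \sqcap \forall \mathit{hasMaterial}.\mathit{Metal} \sqcap \exists \mathit{hasMaterial}
% Translated definition (Eqn-2 profile): existential restriction
\mathit{Bolt} \equiv \mathit{Part} \sqcap \exists \mathit{hasMaterial}.\mathit{Metal}
% Every model of the first axiom satisfies the second, since
\forall R.A \sqcap \exists R \;\sqsubseteq\; \exists R.A
```

Together with property $P_1$, this is why the translated axioms entail every subsumption the GSPAS classifier derived.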
We experimented with other DL profiles and selected $\mathcal{EL}^{++}$ because it provides a good balance between expressiveness and performance for the current ontology requirements.
### 3.3 Ontology Design and Organization
The existing GSPAS ontology was designed to support two use cases, namely, to parse vehicle-build-instructions written in standard language (Rychtyckyj 2006), and to interpret (assign meaning to) parsed instructions. As a result of this, there are two sets of terminology in the ontology—one that
Table 1: Vocabulary Mapping. Only terms relevant to the GSPAS framework are listed here.

| # | GSPAS | KL-ONE | DL | OWL |
|---|---|---|---|---|
| 1 | Concept | Concept | Concept | Class |
| 2 | Primitive Concept | Primitive Concept | Atomic Inclusion | Partial Concept |
| 3 | Generic Concept | Defined Concept | Definition | Complete Concept |
| 4 | Individual | Individual Concept | Individual | Object |
| 5 | Classifiable Attribute | Role | Role | Object Property |
| 6 | Attribute | Non-definition Role | n/a | Annotation Property |
| 7 | Inheritable Attribute | n/a | n/a | n/a |
| 8 | Role Restriction | Role Restriction | Role Restriction | Property Restriction |
| 9 | Value Restriction | Value Restriction | Value Restriction | Value Restriction |
| 10 | Number Restriction | Number Restriction | Number Restriction | Cardinality Restriction |
| 11 | Classifier | Classifier | Reasoner | Reasoner |
Table 2: GSPAS KR primitives and their OWL translation.

| # | GSPAS | KL-ONE | DL | OWL |
|---|---|---|---|---|
| 1 | Primitive Concept | Primitive Concept | $A \sqsubseteq C$ | rdfs:subClassOf |
| 2 | Generic Concept | Defined Concept | $A \equiv C$ | owl:equivalentClass |
| 3 | Classifiable Attribute | Role | Role | owl:ObjectProperty |
| 4 | Attribute | Non-definition Role | n/a | owl:AnnotationProperty |
| 5 | Inheritable Attribute | n/a | n/a | n/a |
| 6 | Value Restriction | Value Restriction | $\exists R.A$ | owl:someValuesFrom |
| 7 | Conjunction | Conjunction | $C_1 \sqcap C_2$ | owl:intersectionOf |
| 8 | Sub Role | Sub Role | $R \sqsubseteq S$ | rdfs:subPropertyOf |
describes words in the standard language and the other that describes build-instructions, parts, tools, etc.
The new design aims to organize the ontology in a modular fashion, i.e., keep related terms together and unrelated terms apart. Accordingly, the new ontology is broadly divided into language and manufacturing terms (Fig-2). Each of these is further divided into smaller areas (like verbs, parts, tools, etc.), and so on to arbitrary depth.
This solves the homonym problem and enforces the “one-term one-meaning” principle: homonyms now have matching labels but different IRIs, and will not cause spurious inferences.
**Synonyms:** In the GSPAS ontology, name variations (like synonyms, acronyms, abbreviations, misspellings, regional variations, names given by external source, etc.) are treated as synonyms (we call them GSPAS-synonyms). GSPAS synonyms are stored as data-values in the associated term and so the classifier does not process them. The same approach is used in the new design where GSPAS synonyms are stored in a multi-valued OWL annotation property. Below, we provide an alternate design and give reasons for rejecting it.
GSPAS synonyms for classes and objects can be modeled using the predefined properties `owl:equivalentClass` and `owl:sameAs`, respectively. GSPAS synonyms then become logical terms and the classifier will process them. But this has a few side effects. First, we cannot tell apart a term and its synonym because both become first-class terms, so the synonym relation has no explicit representation. This is not wrong, but the synonym relationship goes out of sight. Second, the GSPAS synonym relation is neither symmetric nor transitive, but class-equivalence and same-as are both symmetric and transitive, and so will induce spurious synonym relationships. Third, the GSPAS synonyms become new terms in the namespace and may cause a homonym problem. This can be solved, but at the expense of introducing spurious homonyms. For these reasons we reject this approach and treat synonyms as data-values.
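The second objection can be made concrete with a small sketch: closing a directed synonym relation under symmetry and transitivity (as a reasoner would for `owl:sameAs`) asserts pairs that GSPAS never did. The word pairs below are hypothetical:

```python
# Sketch: GSPAS's synonym relation is directed (term -> synonym), but
# owl:sameAs is symmetric and transitive, so a reasoner closes it.
# The word pairs below are illustrative, not from the actual ontology.

def symmetric_transitive_closure(pairs):
    """Close a set of pairs under symmetry and transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            if (b, a) not in closure:               # symmetry
                closure.add((b, a)); changed = True
            for (c, d) in list(closure):            # transitivity
                if b == c and a != d and (a, d) not in closure:
                    closure.add((a, d)); changed = True
    return closure

# GSPAS asserts only: "bonnet" is a synonym of "hood", "hood" of "cover".
asserted = {("hood", "bonnet"), ("cover", "hood")}
closed = symmetric_transitive_closure(asserted)

# The closure now also equates "cover" and "bonnet" -- a spurious synonym.
spurious = closed - asserted - {(b, a) for (a, b) in asserted}
print(sorted(spurious))  # [('bonnet', 'cover'), ('cover', 'bonnet')]
```

Storing synonyms as annotation data-values avoids this closure entirely, since the reasoner never sees them.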
**Part-of-speech information:** In GSPAS ontology, part-of-speech (POS) information is modeled in two ways: POS tags (like noun, verb, etc.) appear as concepts in the taxonomy (so words in standard language can specialize them), and POS tags are stored as data-values in non-definition attributes. In the new design, we model POS tags as concepts in the taxonomy. The POS tags stored in the attributes are remodeled into the taxonomy by creating suitable POS concepts and subsumption links.
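The remodeling of attribute-stored POS tags into subsumption links can be sketched as follows (the words, tags and data structures are illustrative):

```python
# Sketch: remodel POS tags stored as data-values into taxonomy links.
# Words, tags and structures are illustrative, not from the real ontology.

taxonomy = {"Noun": set(), "Verb": set()}   # POS concepts already in the taxonomy
lexicon = {"torque": {"pos": "Noun"}, "install": {"pos": "Verb"}}

for word, attrs in lexicon.items():
    pos = attrs.pop("pos")      # drop the data-value form of the tag
    taxonomy[pos].add(word)     # add a subsumption link to the POS concept instead

print(taxonomy)
```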
### 3.4 Ontology Conversion
Conceptually, ontology conversion takes a GSPAS term and creates one or more new terms from it, and in the process it resolves homonyms and implements the various design choices. For example, a GSPAS term description like
\[
C = A \sqcap B \sqcap \forall R.U \sqcap \exists R \sqcap \forall S.V \sqcap \exists S
\]
may result in new term descriptions like
\[
C_1 = A_1 \sqcap \exists R_1.U_1; \quad C_2 = B_2 \sqcap \exists S_2.V_2
\]
where letters are term names, subscripts are namespaces, each new term gets one namespace, the left side of a term description is a name, and the right side is an expression that refers to terms defined elsewhere in the ontology.
Technically, ontology conversion reduces to the problem of assigning namespace(s) to each name in a term description and then extracting new descriptions from that term description. For example, term C after namespace assignment is shown below; from this, the new descriptions C₁ and C₂ (Eqn-4) are extracted after resolving namespace ambiguities.
\[ C_{1,2} = A_{1,2} \sqcap B_{2,3} \sqcap \forall R_1.U_{1,3} \sqcap \exists R_1 \sqcap \forall S_2.V_{2,3} \sqcap \exists S_2 \]
In the presence of namespace ambiguity, ontology conversion becomes an inverse problem. There are several solutions for C₁ and C₂: one is Eqn-4, another is given below, and many others exist.
\[ C_1 = A_2 \sqcap \exists R_1.U_3; \quad C_2 = B_3 \sqcap \exists S_2.V_3 \]
We need a set of rules to select the correct solution from the possible set of solutions. The choice of solution depends on the choice of namespaces, the particular instance of GSPAS ontology, and homonyms in the ontology.
Below, we describe the conversion process with the help of Fig-3, two term-mapping functions and three choice functions. In Fig-3, parent refers to a named primitive concept, role refers to the role participating in a role restriction, and filler refers to the concept in a value restriction. For example, in Eqn-3, the parents of C are \{A, B\}, the roles of C are \{R, S\}, and the fillers of R in C are \{U\}.
The term-mapping functions are used to track the relationship between GSPAS terms (sources) and new terms (targets). Here, sof maps a target to its source (each target has exactly one source), and tof (the inverse of sof) maps a source to a set of targets. For example,
\[ \text{sof}(C_1) = C; \quad \text{sof}(C_2) = C; \quad \text{tof}(C) = \{C_1, C_2\}. \]
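The sof/tof bookkeeping amounts to a pair of dictionaries; a minimal sketch with illustrative names:

```python
# Sketch of the source-of / targets-of bookkeeping used during conversion.
# "C" stands for a GSPAS term; "C@lang" and "C@mfg" for the namespaced
# new terms derived from it. All names are illustrative.

sof = {}  # new term -> its single GSPAS source
tof = {}  # GSPAS term -> set of new terms derived from it

def create_term(ns, term):
    new_term = f"{term}@{ns}"
    sof[new_term] = term
    tof.setdefault(term, set()).add(new_term)
    return new_term

c1 = create_term("lang", "C")
c2 = create_term("mfg", "C")

# Each target has exactly one source; a source may have several targets.
print(sof[c1], sof[c2], sorted(tof["C"]))
```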
The choice functions are used to disambiguate homonyms and to choose admissible terms. Given a context and a set of candidate new-terms, a choice function returns the set of new-terms valid in that context. Here, chooseP returns the valid parents for a given concept; chooseR returns the valid roles for a given concept; chooseF returns the valid fillers for a given concept-role pair. For example,
\[ \text{chooseP}(C_1, \{A_1, A_2\}) = \{A_1\} \]
\[ \text{chooseR}(C_1, \{R_1, S_2\}) = \{R_1\} \]
\[ \text{chooseF}(C_1, R_1, \{U_1, U_3\}) = \{U_1\} \]
Ontology conversion creates new terms from GSPAS terms in four steps: (A) create new terms, with empty descriptions, from GSPAS terms; (B) add parents to the newly created terms; (C) then roles; (D) and role fillers (value restrictions). These steps are elaborated below.
³ The corresponding forward problem is to recover the GSPAS ontology from the new ontology, i.e., drop namespaces and merge descriptions. GSPAS conversion is lossless if we can recover the GSPAS ontology from the new ontology.
---
**Figure 3:** Conversion work flow. Shows GSPAS term and a new term, and the conversion steps A to D. Numbers indicate flow sequence. Double arrow indicates set-valued input/output, otherwise it is scalar input/output. Items to be computed are in dashed-lines.
(A.) Create new-terms (concepts, roles and attributes). First, determine the choice of namespaces for the new ontology, then, for each namespace, identify the terms that belong to it. Homonyms will show up in multiple namespaces. Next, create new-terms out of GSPAS-terms and namespaces, and track the association using sof and tof functions.
```
1 for each ns in Namespaces
2 ns-terms = identify all terms that belong to ns
3 for each term in ns-terms
4 new-term = Create-Term(ns, term)
5 add term to sof(new-term)
6 add new-term to tof(term)
```
At this point we have new-terms with empty descriptions, each new-term will map to exactly one GSPAS-term, and each GSPAS-term will map to one or more new-terms.
(B.) Populate parents (follow the path 1-2-3-4 in Fig-3). For each new-concept fetch its GSPAS-parents. For each GSPAS-parent fetch the candidate set. If a parent is a homonym it will return multiple candidates. Now, use the choice function to select valid parents from the candidate set. Add selected parents to the new-concept.
```
7 for each new-concept
8 concept = sof(new-concept)
9 for each parent of concept
10 candidates = tof(parent)
11        new-parents = chooseP(new-concept, candidates)
12 add new-parents to new-concept
```
(C.) Populate roles (follow the path 1-5-6-7 in Fig-3). For each new-concept fetch its GSPAS-roles. For each GSPAS-role fetch the candidate set. Each role has only one meaning in the GSPAS ontology, so GSPAS-role and candidate-role are in 1-to-1 correspondence and the candidate set is a singleton. Use the choice function to select the valid role from the candidate set. Add the selected role to the new-concept.
```
13 for each new-concept
14 concept = sof(new-concept)
15 for each role of concept
16 candidates = tof(role)
17        new-role = chooseR(new-concept, candidates)
18 add new-role to new-concept
19 // code to populate fillers is given below
```
Now, populate attributes in a similar manner.
(D.) Populate role fillers (continue from the previous step and follow the path 8-9-10 in Fig-3). For each GSPAS-role fetch its fillers. For each GSPAS-filler fetch the candidate set. Use the choice function to select the valid fillers from the candidate set. Add the selected fillers to the new-concept.
```
19 // code to populate fillers is given below
20 for each filler of role
21 candidates = tof(filler)
22            new-fillers = chooseF(new-concept, new-role, candidates)
23 add new-fillers to new-concept
```
Now, populate attribute fillers in a similar manner.
At the end of step D, all term descriptions are complete and we have a re-engineered namespace-aware ontology which can be serialized in OWL/RDF format.
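Steps A–D can be sketched end to end. The toy ontology, the namespace tables, and the single choice rule below are illustrative stand-ins for the real GSPAS data and the cascading rules described next:

```python
# End-to-end sketch of conversion steps A-D on a toy ontology.
# Terms, namespaces and the choice rule are illustrative, not GSPAS's.

# GSPAS-style descriptions: name -> (parents, {role: fillers})
gspas = {
    "A": ((), {}), "B": ((), {}), "U": ((), {}), "V": ((), {}),
    "C": (("A", "B"), {"R": ["U"], "S": ["V"]}),
}
# Namespace(s) each term belongs to (homonyms appear in several).
ns_of = {"A": ["lang"], "B": ["mfg"], "U": ["lang"], "V": ["mfg"],
         "C": ["lang", "mfg"]}
role_ns = {"R": "lang", "S": "mfg"}   # each role is usable in one namespace

sof, tof = {}, {}
new_terms = {}  # new name -> {"parents": [...], "roles": {role: fillers}}

# (A) create empty new terms, one per (term, namespace) pair
for term, spaces in ns_of.items():
    for ns in spaces:
        nt = f"{term}@{ns}"
        sof[nt] = term
        tof.setdefault(term, set()).add(nt)
        new_terms[nt] = {"parents": [], "roles": {}}

def choose(context_ns, candidates):
    # Toy rule: keep only candidates from the context namespace.
    # (The real chooseP/chooseR/chooseF cascade through further rules.)
    return [c for c in candidates if c.endswith("@" + context_ns)]

for nt, desc in new_terms.items():
    term, ns = sof[nt], nt.split("@")[1]
    parents, roles = gspas[term]
    for p in parents:                       # (B) parents
        desc["parents"] += choose(ns, tof[p])
    for r, fillers in roles.items():        # (C) roles usable in this namespace
        if role_ns[r] != ns:
            continue
        # (D) fillers for the kept role
        desc["roles"][r] = [f for fl in fillers for f in choose(ns, tof[fl])]

print(new_terms["C@lang"])  # mirrors C1 in Eqn-4: parent A, role R, filler U
print(new_terms["C@mfg"])   # mirrors C2 in Eqn-4: parent B, role S, filler V
```

On this toy input the two new descriptions of the homonym C come out exactly as in Eqn-4, each confined to its own namespace.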
In the conversion process, the choice functions are the workhorse; the rest is routine processing. A choice function uses a set of cascading rules to disambiguate terms. For example, given a term and a set of candidate parents, chooseP returns the parents from the term’s namespace; otherwise it returns the parents that prefer children from the term’s namespace; otherwise it returns the candidate set unchanged.
For each role, its namespace and the namespaces in which it can be used are predetermined during the design phase, as are its domain and range. Given a term and a candidate role, chooseR returns the role if it is usable in the term’s namespace, otherwise none.
Given a term, a role and a set of candidate fillers, chooseF filters the candidate list progressively until only one candidate is left. First, it selects fillers that are subtypes of the role’s range, next fillers from the term’s namespace, and finally fillers from the role range’s namespace.
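The chooseF cascade can be sketched as successive narrowing passes; the subtype table, role range, and names below are hypothetical:

```python
# Sketch of the chooseF cascade: narrow the candidates until one is left.
# The subtype table, role range and names are illustrative.

subtypes = {"Metal": {"Steel", "Aluminium"}}     # range -> its subtypes
role_range = {"hasMaterial": ("Metal", "mfg")}   # role -> (range, range ns)

def namespace(term):        # terms are written "Name@ns"
    return term.split("@")[1]

def choose_f(term, role, candidates):
    rng, rng_ns = role_range[role]
    # 1. keep fillers whose base name is a subtype of the role's range
    step1 = [c for c in candidates if c.split("@")[0] in subtypes[rng]]
    # 2. prefer fillers from the term's namespace (fall through if none)
    step2 = [c for c in step1 if namespace(c) == namespace(term)] or step1
    # 3. finally prefer fillers from the range's namespace
    step3 = [c for c in step2 if namespace(c) == rng_ns] or step2
    return step3

cands = ["Steel@lang", "Steel@mfg", "Rubber@mfg"]
print(choose_f("Bolt@mfg", "hasMaterial", cands))   # ['Steel@mfg']
```

The `or` fallbacks encode the “otherwise” steps of the cascade: a pass that eliminates every candidate is skipped rather than returning nothing.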
The choice functions and their rules were discovered by profiling the GSPAS ontology and by experimentation. The rules are specific to GSPAS ontology, its design-and-organization, the choice of namespaces, homonyms, etc. The rules are tuned to the particular ontology instance that was used for conversion and testing.
### 3.5 Verification
Verification is done at three levels: framework level, ontology level and application level.
At the framework level, (i) we verify the validity of the mapping between KR primitives (Table-2) by comparing the asserted hierarchies of the new and GSPAS ontologies, and then comparing the respective inferred hierarchies. The asserted hierarchy in the new ontology had 4 missing subsumption links out of 12,600+ direct links; these were fixed manually. Next, we manually compared the inferred hierarchies; most of the hierarchy matched, but there were about 20 cases where a sub-concept became equivalent to its parent. These cases were manually corrected in the new ontology. (ii) Further, we verify the profile of the new ontology. We used the Pellet info tool to compute the OWL and DL profiles of the new ontology; it turned out to be OWL2EL and $\mathcal{EL}^{++}$ (see Table-4), as expected.
At the ontology level, (i) we verify that every GSPAS term has a representation in the new ontology and that every new term description is part of some GSPAS term description. This is done by recreating the GSPAS ontology from the new ontology by dropping the namespaces and merging the terms. We manually compared the two versions of the GSPAS ontology and found no significant differences. This verification by itself does not establish the validity of the new ontology, but it checks whether the conversion is lossless; it is a good first line of defense and helps in accounting for terms in the new ontology. (ii) Further, we checked for cases of punning using the Pellet lint tool, and found one violation, which was fixed manually.
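The losslessness check (drop namespaces, merge descriptions, compare) can be sketched as follows; the toy descriptions are illustrative:

```python
# Sketch of the ontology-level losslessness check: strip namespaces from
# the new ontology, merge the per-source descriptions, and compare the
# result against the original GSPAS descriptions. Names are illustrative.

def base(term):
    return term.split("@")[0]   # "A@lang" -> "A"

def recover(new_ontology):
    """Merge namespaced descriptions back into GSPAS-style ones."""
    merged = {}
    for nt, (parents, roles) in new_ontology.items():
        entry = merged.setdefault(base(nt), (set(), {}))
        entry[0].update(base(p) for p in parents)
        for r, fillers in roles.items():
            entry[1].setdefault(base(r), set()).update(base(f) for f in fillers)
    return merged

gspas = {"C": ({"A", "B"}, {"R": {"U"}, "S": {"V"}})}
new = {
    "C@lang": (["A@lang"], {"R@lang": ["U@lang"]}),
    "C@mfg":  (["B@mfg"],  {"S@mfg":  ["V@mfg"]}),
}

assert recover(new) == gspas   # conversion is lossless on this toy input
print("round-trip OK")
```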
The application level verification provides the final validation of the new ontology. It is described in sec-4.
### 3.6 Performance Testing
In the GSPAS ontology all terms are modeled as concepts; there are no individuals. But primitive concepts that occur as leaves in the taxonomy and carry no role restrictions qualify as individuals. To explore alternate models of the GSPAS ontology, qualifying concepts in the part-of-speech hierarchy and the object hierarchy are remodeled as individuals.
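Identifying the qualifying concepts can be sketched as a simple filter over the taxonomy (toy data, illustrative names):

```python
# Sketch: find primitive leaf concepts without role restrictions, which
# qualify to be remodeled as individuals. Toy data, illustrative names.

# concept -> (children, has_role_restrictions)
ontology = {
    "Part": (["Bolt", "Nut"], False),
    "Bolt": ([], True),    # leaf, but has a role restriction -> stays a concept
    "Nut":  ([], False),   # leaf with no restrictions -> candidate individual
}

candidates = [c for c, (children, has_rr) in ontology.items()
              if not children and not has_rr]
print(candidates)   # ['Nut']
```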
We created five OWL ontologies from the GSPAS ontology (see Table-3), each differing in the number of individuals it contains. The first four were created for performance testing; the last one resulted from performance tuning. We tested three reasoners on the five ontologies on an Intel i7-4770 with 16GB RAM running 64-bit Ubuntu 12.04; the details are reported in Table-4.
We make the following observations. (i) Of the reasoners, FaCT++ has the best overall performance, followed by HermiT and Pellet. (ii) Of the ontologies, LEX-1 has the best overall performance, with a 1:21 class-to-individual ratio; ONT-1, which has no individuals, also performs well. (iii) The performance, though within acceptable limits, begins to degrade for ONT-2 and ONT-3; HermiT and Pellet are up to 2 orders of magnitude slower than FaCT++ on these ontologies.
To understand where the reasoner was spending time we profiled ONT-3 using Pellet⁴ and computed the classification time for each concept. From this we prepared a Pareto chart (term-count vs classification-time), which showed that …
---
⁴In Pellet, concept classification is done by a series of subsumption tests. Pellet reports the execution time for each test, we sum up these times to compute the classification time for a concept.
4 Validation and Deployment

The delivered OWL ontology was loaded into an Allegrograph server. We (Ford) verified the completeness of the new OWL ontology by developing a tool to compare it to the KL-ONE ontology, and ran the entire suite of regression tests and compared the results with the baseline. As with the manual tests we found a number of differences that needed to be analyzed and addressed. These differences fell into the following categories:
- The OWL representation differed from KL-ONE, but the difference was part of the re-engineering process. In this case we adjusted the regression tests to reflect how the knowledge was represented in OWL.
- Discrepancies were caused by formatting, punctuation, special characters and related syntax errors. In these cases, we wrote a routine that fixes these errors as part of the OWL retrieval process, but our intention is to go back and fix them in OWL.
- In some cases, the OWL representation was not what we wanted. In this case we went back to OWL and made the appropriate fixes.
At this point we were confident that the lexical ontology was fairly complete and would be usable after the changes made above were completed.
The next step was to build an image using the new OWL ontology and deploy it for user acceptance testing. This testing pointed out some performance issues that were addressed by rewriting the code to make the OWL interface work more efficiently. After these performance issues were fixed, the new AI system with the OWL ontology was deployed into the testing environment. No other major issues were discovered during the user acceptance testing phase and the application with the embedded lexical OWL ontology was deployed for use.
We were able to take advantage of the extensibility of the OWL ontology by developing a script that can load a class of parts known as wire assemblies directly from an external database. This allows us to add additional knowledge into OWL much more quickly. Another main advantage of using OWL was the ability to use standard tools for ontology maintenance. In our case, we use the TopBraid Composer tool to maintain our OWL ontology, which provides much additional capability and allowed us to retire our own tool.
After deployment of the lexical OWL ontology our next goal is to deploy the manufacturing ontology. The OWL ontology is also available for use through Allegrograph and is being utilized by other applications that need the information. Figure-4 shows the structure of our semantic web architecture.
5 Conclusions and Future Steps
In this paper we described a project where Ford collaborated with the Indian Institute of Technology Madras to re-engineer and convert an existing ontology into a semantic web OWL/RDF architecture. After a thorough validation procedure the lexical ontology has been deployed as part of the GSPAS system at Ford. The manufacturing ontology will also undergo the same rigorous validation before deployment into production.
There were a number of compelling reasons that motivated the re-engineering of the ontology from KL-ONE to OWL, the most important of which concerned maintainability and extensibility. The original software was written before any software tools for ontology maintenance were available, so the KL-ONE ontology could only be maintained using a specialized tool. This tool had to be re-written several times as operating systems and hardware were upgraded, and it was becoming a bottleneck for future ontology development. The KL-ONE ontology was not usable outside the application without writing custom code to extract specific knowledge. In the meantime, business requirements for the ontology were rapidly increasing and the existing architecture could not support them. The conversion of the ontology to OWL was a critical requirement for the future usage of the AI application. Our experience was somewhat unique in that we had been using KL-ONE since the 1990s, and much of the work on the semantic web took place after we had a deployed application.
The conversion from KL-ONE to OWL required a significant amount of work, but the advantages of moving to a semantic web architecture made it a worthwhile investment. It enables us to take advantage of existing tools and processes and to make our ontology reusable and extensible using existing standards. Queries that allow other applications to access our ontology can easily be developed using SPARQL. The semantic web infrastructure also gives us the ability to link to other ontologies and take advantage of the linked open data world. The ROI on this project is therefore based on increased functionality with OWL versus KL-ONE and reduced maintenance costs.
Our future work will include the deployment of our manufacturing ontology.
References
Guide for Applicants
1st Open Call
Extension of the Virtual Object Stack (VOStack) and development of Virtual Objects (VOs), composite Virtual Objects (cVOs) and Digital Twins (DTs)
Submission starts on 1 November 2023 at 9:00 (CET, Brussels Time)
Deadline is 10 January 2024 at 17:00 (CET, Brussels Time)
V1, 27/10/2023
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the European Commission can be held responsible for them. NEPHELE has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070487.
Table of Contents
Terms and abbreviations
1. Basic info about NEPHELE
2. What do we offer?
3. Admissibility and Eligibility Criteria
3.1 Who are we looking for?
3.2 What types of activities can be funded?
3.3 How to apply?
4. How will we evaluate your proposal?
4.1 First Admissibility and Eligibility Check
4.2 In/Out of scope screening
4.3 External Evaluation
4.4 Consensus Meeting
4.5 What's next? Subgrant Agreement Preparation and Signature
5. Our Support Programme and payment arrangements
Milestone review process
Payment schedule
6. Contact us
How can we help you?
Complaints
7. Last but not least - final provisions
8. Extra hints before you submit your proposal
Annex 1: Software repository and documentation
Annex 2: Focus areas
Development of VOs and optionally cVOs along with a set of generic or device-specific functions based on VOStack.
Development of Digital Twins based on the cVO concept.
Development of VO network and/or security management mechanisms for the VOStack.
Development of semantic interoperability mechanisms (e.g., from W3C WoT to NGSI-LD, from W3C WoT to OMA LwM2M and vice-versa) for the VOStack.
Annex 3: Project's use cases
## Terms and abbreviations
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>EC</td>
<td>European Commission</td>
</tr>
<tr>
<td>FSTP</td>
<td>Financial Support to Third Parties</td>
</tr>
<tr>
<td>VO</td>
<td>Virtual Object</td>
</tr>
<tr>
<td>cVO</td>
<td>Composite Virtual Object</td>
</tr>
<tr>
<td>DT</td>
<td>Digital Twin</td>
</tr>
<tr>
<td>IoT</td>
<td>Internet of Things</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>TRL</th>
<th>As per European Commission <a href="#">definition</a>, there are 9 possible levels of technology readiness.</th>
</tr>
</thead>
<tbody>
<tr>
<td>TRL 1</td>
<td>basic principles observed</td>
</tr>
<tr>
<td>TRL 2</td>
<td>technology concept formulated</td>
</tr>
<tr>
<td>TRL 3</td>
<td>experimental proof of concept</td>
</tr>
<tr>
<td>TRL 4</td>
<td>technology validated in a lab</td>
</tr>
<tr>
<td>TRL 5</td>
<td>technology validated in a relevant environment (industrially relevant environment in the case of key enabling technologies)</td>
</tr>
<tr>
<td>TRL 6</td>
<td>technology demonstrated in a relevant environment (industrially relevant environment in the case of key enabling technologies)</td>
</tr>
<tr>
<td>TRL 7</td>
<td>system prototype demonstration in an operational environment</td>
</tr>
<tr>
<td>TRL 8</td>
<td>system complete and qualified</td>
</tr>
<tr>
<td>TRL 9</td>
<td>actual system proven in an operational environment (competitive manufacturing in the case of key enabling technologies; or in space)</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>FAQ</th>
<th>Frequently Asked Questions</th>
</tr>
</thead>
</table>
### 1. Basic info about NEPHELE
NEPHELE is a Research and Innovation Action project funded by the Horizon Europe programme under the topic "Future European platforms for the Edge: Meta Operating Systems". The project's vision is to enable the efficient, reliable and secure end-to-end orchestration of hyper-distributed applications over programmable infrastructure spanning the compute continuum from Cloud to Edge to IoT, removing existing openness and interoperability barriers between IoT technologies and cloud and edge computing orchestration platforms, and introducing automation and decentralised intelligence mechanisms powered by 5G and distributed AI technologies.
The main goal of NEPHELE is to design and build a virtual object software stack (VOStack) that will tackle IoT interoperability and convergence challenges in the compute continuum and develop synergetic orchestration mechanisms that can manage distributed applications in the continuum. See more info about the project here: https://nephele-project.eu/
In order to achieve the project's goals, NEPHELE will distribute up to €1.49M among SMEs/Mid-caps that will be selected through 2 Open Calls:
- **1st Open Call** - Extension of the Virtual Object Stack (VOStack) and development of Virtual Objects (VOs), composite Virtual Objects (cVOs) and Digital Twins (DTs) - targeting up to 8 SMEs/Mid-caps to develop Virtual Objects, composite Virtual Objects (including the Digital Twin concept), and generic IoT enablers in accordance with the provided software stack.
- **2nd Open Call** - Use Cases development and NEPHELE approach validation - targeting up to 8 SMEs/Mid-caps to develop use cases and validate the NEPHELE architectural approach & synergetic meta-orchestration framework.
This Guide for Applicants contains relevant information to understand how to successfully take part in the 1st Open Call.
### 2. What do we offer?
The 1st Open Call of NEPHELE will distribute **up to €608 000** among **up to 8 SMEs/Mid-caps to develop VOs, cVOs (including the concept of DTs) and Generic/Backend functions that can be considered as extensions of the provided software stack (VOStack).**
The selected SMEs/Mid-caps will benefit from the 1st Open Call in a number of ways, namely by:
1. Participating in a 6-month-long support programme;
2. Receiving a fixed lump sum of up to €76 000 per entity;
3. Receiving technical mentoring during the support programme.
Applications are welcome from the **1st of November 2023 at 9:00 (CET)** until the **10th of January 2024 at 17:00 (CET)** at https://nephele-1st-open-call.fundingbox.com/.
### 3. Admissibility and Eligibility Criteria
We will check the admissibility and eligibility of all proposals submitted before the deadline (the 10th of January 2024 at 17:00 CET) via our online application form: https://nephele-1st-open-call.fundingbox.com/. All the admissibility and eligibility criteria are listed in this section of this Guide for Applicants. The projects that do not comply with those criteria will be excluded and marked as ineligible.
We will check the admissibility and eligibility criteria based on the information provided in your application **during the whole evaluation process**.
#### 3.1 Who are we looking for?
We are looking for single Industrial¹ SMEs² (including startups) and Mid-caps³ registered as legal entities no later than the end date of the NEPHELE 1st Open Call (10 January 2024) in the following countries:
---
1. An industrial entity is considered to be any type of SME or Mid-cap that covers the expertise and experience criteria described in Section 3.2.
2. An SME will be considered as such if it complies with the European Commission’s Recommendation 2003/361/EC. As a summary, the criteria defining an SME are:
- Headcount in Annual Work Unit (AWU) less than 250;
- Annual turnover less or equal to €50 million OR annual balance sheet total less or equal to €43 million.
Note that the figures of partners and linked enterprises should also be considered as stated in the SME user guide. For detailed information check EU recommendation: https://ec.europa.eu/growth/smes/business-friendly-environment/sme-definition_en
3. A ‘Middle-capitalization company’ or ‘Midcap’ means an enterprise that is not an SME and that has up to 3 000 employees, the staff headcount being calculated in accordance with Articles 3 to 6 of the Annex to Commission Recommendation 2003/361/EC.
The SMEs and Mid-caps must demonstrate expertise and experience in at least one of the technical areas described in Section 3.2.
The applicants who are subject to EU restrictive measures under Article 29 of the Treaty on the European Union (TEU) and Article 215 of the Treaty on the Functioning of the EU (TFEU) are not eligible to participate in this open call.
The NEPHELE Partners are not eligible to act as applicants and CANNOT be involved in the grantees' projects; neither can their affiliates, employees or permanent collaborators.
#### 3.2 What types of activities can be funded?
Participants have to address the scope of the 1st Open Call, that is to develop a set of Virtual Objects (VOs), composite Virtual Objects (cVOs) and Digital Twins (DTs), as well as extensions in the software stack (VOStack) in the form of generic Functions. The aim is to integrate these components into the NEPHELE ecosystem and adopt the NEPHELE architecture (see Annex 1). The provided software and its documentation have to be made available as open source and offered as part of the NEPHELE ecosystem.
The proposals must address at least one of the focus areas listed below (see Annex 2):
- Development of VOs and optionally cVOs along with a set of generic or device-specific functions based on VOStack.
- Development of Digital Twins based on the cVO concept.
- Development of VO network and/or security management mechanisms for the VOStack.
- Development of semantic interoperability mechanisms (e.g., from W3C WoT to NGSI-LD, from W3C WoT to OMA LwM2M and vice-versa) for the VOStack.
---
4 Following the Council Implementing Decision (EU) 2022/2506, as of 16th December 2022, no legal commitments can be signed with Hungarian public interest trusts established under Hungarian Act IX of 2021 or any entity they maintain. Affected entities may continue to apply to calls for proposals. However, in case the Council measures are not lifted, such entities are not eligible to participate in the NEPHELE 1st open call. In case of consortium, co-applicants will be invited to remove or replace that entity. Tasks and budget may be redistributed accordingly.
5 AC as of (27.10.2023): Albania, Armenia, Bosnia and Herzegovina, Faroe Islands, Georgia, Iceland, Israel, Kosovo, Moldova, Montenegro, North Macedonia, Norway, Serbia, Turkey, Tunisia, Ukraine, for the most up-to-date list please see first part of this document. For the avoidance of doubt, New Zealand is not eligible in this open call.
6 Please note that the EU Official Journal contains the official list and, in case of conflict, its content prevails over that of the EU Sanctions Map.
To be eligible for funding, the provided solutions and software development have to be aligned with the specifications provided in Annex 1. Furthermore, proposals should start at TRL 4, and the proposed solutions should reach TRL 5 by the end of the support programme.
Applicants are also required to demonstrate expertise and experience in at least one of the following fields:
- IoT technologies: communication protocols (e.g., HTTP, CoAP, MQTT, Web of Things), semantic interoperability, security mechanisms, digital twin development.
- Data analytics: Machine Learning (ML), TinyML, stream processing etc.
- Cloud and edge computing technologies: orchestration mechanisms, microservices-based application development.
- Network management technologies: 5G and beyond, NBIoT, ad-hoc networking, time sensitive networking.
#### 3.3 How to apply?
When applying to NEPHELE’s 1st open call, please also note the following:
- **Be on time and use our system:** Make sure you submit your proposal through the [online form](#) before the deadline (admissibility criterion). If you submit the form correctly, the system will send you a confirmation of your submission. Get in touch with us if that is not the case. It is important for you to know that we will not evaluate any proposal submitted after the deadline or outside the dedicated form.
- **English Language:** your proposal must be written in English in all mandatory parts in order to be eligible. Only parts written in English will be evaluated. If the mandatory parts of the proposal are in any other language, the entire proposal will be rejected (admissibility criterion).
- **Every question deserves your attention:** all mandatory sections - generally marked with an asterisk - of your proposal must be completed (admissibility criterion). The data provided should be actual, true, and complete and should allow assessment of the proposal. Additional material, not specifically requested in the online application form, will not be considered for the evaluation.
- **European Dimension:** Your project should have a clear European Dimension, meaning: 1) contribute towards the next generation of higher-level (meta) operating systems for the smart Internet of Things with strong computing capacity at the smart device, system and edge-level, embedded in a compute continuum from IoT-to-edge-to-cloud; 2) increase European autonomy in data processing required to support future hyper-distributed applications.
- **Be exhaustive and precise:** Verify the completeness of the form: you can still modify a submitted form until the deadline, but it won't be possible to add any further information afterwards.
- You can submit only **one proposal** in this call. If more than one proposal is identified, only the most recently submitted one will be evaluated.
- **Conflict of interest:** We will take into consideration the existence of potential conflict of interest among you and one or more Consortium partners. Consortium partners, their affiliated entities, employees and permanent collaborators cannot take part in the NEPHELE support programme. All cases of potential conflict of interest will be assessed case by case.
- **Healthy finances and a clean sheet are a must:** We don't accept entities that are under liquidation or are an undertaking in difficulty according to Commission Regulation No 651/2014, art. 2.18, or that are excluded from the possibility of obtaining EU funding under the provisions of national or EU law, or by a decision of a national or EU authority. We also don't accept entities that are subject to bankruptcy proceedings under national regulations.
- **It is your proposal:** Your project should be based on your original work or, if it is not, your right to use the IPR must be clearly defined (you must have a licence agreement, or the IPR must have been transferred to you by whoever created the work). In particular, any work related to the implementation of the project described in the application may not violate the IPR of third parties, and the IPR to the application project may not be the subject of a dispute or proceedings for infringement of third-party IPR.
- **Acceptance of the open call rules:** to apply for this open call you have to accept its rules and regulations detailed in this Guide for Applicants.
### 4. How will we evaluate your proposal?
Our evaluation process is transparent, fair and equal to all our participants. We will evaluate your project in four phases as shown below.

Figure 1 Evaluation process
#### 4.1 First Admissibility and Eligibility Check
After the closure of the open call, we will review the proposal to ensure it meets the admissibility and eligibility conditions outlined in Section 3. This assessment will be based on the statements provided in your proposal.
During this initial stage, the eligibility criteria will be cross-referenced against the declaration of honour and self-declarations included in the application form. Subsequently, throughout the evaluation process (including the final formal check), the above criteria will be thoroughly verified.
The proposals that do not comply with the above-mentioned criteria will be excluded.
#### 4.2 In/Out of scope screening
To maximise the impact within the framework of NEPHELE, it is crucial that all proposals align with the European Dimension and the scope of the activities. Additionally, applicants must have the necessary expertise and experience in specific technical fields related to NEPHELE. For this reason, the Selection Committee will review and assess the following aspects of your proposal:
- The scope - the objectives of the proposal must fit within the scope of the project, as described in this guide in Section 3.2.
- The European Dimension - the project should have a European dimension, as described in this guide in Section 3.3.
- Expertise and experience in the defined technical fields, as per Section 3.2.
Proposals that do not comply with any of the aspects described above will be rejected. Proposals that fully comply with all the specified aspects will proceed to the experts’ evaluation phase.
We will inform you about the results of the in/out scope screening.
#### 4.3 External Evaluation
In this phase, each project will be evaluated by two external and independent evaluators from the IoT, edge/cloud computing, Artificial Intelligence (AI) and network management fields. Your project will be evaluated within the following awarding criteria:
**EXCELLENCE** will evaluate:
- **Ambition:** The applicants have to demonstrate to what extent the proposed Third Party Project contributes to the project scope, has a European dimension and goes beyond the state of the art. The Third Party Project has to describe the innovative approach behind it (e.g. ground-breaking objectives, novel concepts and approaches, new products, services or business and organisational models). Development Projects have to demonstrate the innovative approach behind the proposed VOs, cVOs, DTs & IoT enablers (functions) and how they will contribute to bringing the NEPHELE Platform beyond the state of the art.
- **Innovation:** applicants should provide information about the level of innovation within their market and about the degree of differentiation that this project will bring.
- **Soundness of the approach** and credibility of the proposed methodology.
**IMPACT** will analyse:
- **Market opportunity:** The applicants have to demonstrate a clear idea of what they want to do and whether the new/improved product has market potential, e.g. because it solves a problem for a specific target customer. Development Projects have to demonstrate how the proposed developments can contribute to the validation and extension of VOStack.
- **Commercial Strategy and Scalability:** The applicants have to demonstrate the scalability of the new/improved product, meaning that it does not merely address one specific problem but can be commercialised to solve a structural problem in a given sector/process/etc. Development Projects have to demonstrate the willingness to sign a commercial agreement with the NEPHELE Platform to contribute to the extension of VOStack once integrated.
- **Environmental and social impact:** The applicants have to demonstrate the project contribution towards environmental, social and economic impacts to contribute to sustainable development, Green Deal and other European policies.
**IMPLEMENTATION** will consider:
- **Team:** The applicants have to demonstrate their management and leadership qualities, their ability to take a concept from idea to market, their capacity to carry through their ideas and understand the dynamics of the market they are trying to tap into. The team should be a cross-functional team, with a strong background and skills base, taking into account its gender balance. The technical capabilities of the applicants for the developments to be done have to be clearly explained.
- **Resources:** Demonstrate the quality and effectiveness of the resources assigned to achieve the proposed objectives/deliverables.
The evaluators will score each award criterion on a scale from 0 to 5:
- 0 = Proposal fails to address the criterion or cannot be assessed due to missing or incomplete information
- 1 = Poor – criterion is inadequately addressed or there are serious inherent weaknesses
- 2 = Fair – proposal broadly addresses the criterion, but there are significant weaknesses
- 3 = Good – proposal addresses the criterion well, but a number of shortcomings are present
- 4 = Very good – proposal addresses the criterion very well, but a small number of shortcomings are present
- 5 = Excellent – proposal successfully addresses all relevant aspects of the criterion. Any shortcomings are minor.
Each evaluator will produce an Individual Evaluation Report. The threshold for individual criteria will be 3. The overall threshold, applying to the sum of the three individual scores, will be 10. The final score will be calculated as an average of the individual assessments provided by the Evaluators.
Ties will be broken using the following criteria, in order of priority:
1. The highest score in the Excellence section.
2. Gender balance among the personnel responsible for carrying out the activities.
3. The highest score in the Implementation section.
All proposals obtaining a score above the threshold will pass to the next phase.
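For illustration only, the threshold rules above can be sketched as follows (the helper function and the two evaluations are hypothetical and are not part of the call's tooling):

```python
def passes_thresholds(evaluations):
    """Average per-criterion scores across evaluators and apply the
    thresholds described in Section 4.3: at least 3 per criterion,
    at least 10 for the sum of the three averaged scores."""
    criteria = ("excellence", "impact", "implementation")
    avg = {c: sum(e[c] for e in evaluations) / len(evaluations) for c in criteria}
    total = sum(avg.values())
    return all(avg[c] >= 3 for c in criteria) and total >= 10, avg, total

# Two hypothetical Individual Evaluation Reports for one proposal:
ok, avg, total = passes_thresholds([
    {"excellence": 4, "impact": 3, "implementation": 4},
    {"excellence": 5, "impact": 4, "implementation": 3},
])
# averages: excellence 4.5, impact 3.5, implementation 3.5 -> total 11.5
```

Here every averaged criterion is at least 3 and the total of 11.5 exceeds 10, so the proposal passes both thresholds.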
In cases where there is significant divergence between the evaluators' scoring, experts will convene to establish a unified position on the evaluated proposals. If no consensus is reached, a third evaluator will be included to provide an extra evaluation.
Please note that we need time to process all the proposals in this phase, so you probably won't hear back for a while.
#### 4.4 Consensus Meeting
The Selection Committee formed by National Technical University of Athens, Siemens, ATOS, Institut National de Recherche en Informatique et Automatique, GEIE ERCIM and Fundingbox Accelerator Sp Zoo will decide by consensus (minimum ⅔ of the votes) the 'List of Finalists' and the 'Reserve List'. Two external evaluators will be invited to participate in the Consensus Meeting in an advisory capacity.
The discussion will be based on the ranking obtained as a result of the External Evaluation. Whilst normally the highest-ranked proposals will be selected for funding, the Selection Committee might have fair reasons for objecting to a specific third party, such as insufficient alignment with NEPHELE's goals and scope, limited ability to achieve the highest impact possible, commercial competition, or the existence of significant ethical concerns or a potential conflict of interest. In this case, the choice may pass to the next-ranked proposal.⁷
The exact number of proposals approved will be decided based on the overall quality of the proposals.
#### 4.5 What's next? Subgrant Agreement Preparation and Signature
Before you get started with the NEPHELE support programme, you need to sign the Subgrant Agreement with the NEPHELE Consortium.
Prior to signing the Agreement, you should provide documents regarding your formal status. The NEPHELE Consortium will verify them to prove your eligibility (for the details please check our Frequently Asked Questions document available on the open call website). Please do so within the deadlines that will be communicated to you. If you fail to deliver the requested documents on time, without clear and reasonable justification, we will exclude you from the further formal assessment and you will be replaced with a company from the Reserve List.
### 5. Our Support Programme and payment arrangements
Once your eligibility has been confirmed following the formal status check and the Subgrant Agreement signed, you will become an official beneficiary of the NEPHELE support programme, which will last up to 6 months. The programme consists of three stages, see Table 1 below.
⁷ Please note that this is not a closed list of reasons.
For the sake of simplicity and transparency, the Financial Support will be paid against specific deliverables / upon achievement of certain milestones or KPIs (which will be included in the 'Individual Mentoring Plan' annexed to the SGA), and based on the results of the Milestone Review.
As a beneficiary, you will receive funding as follows:
<table>
<thead>
<tr>
<th>Stage No and Name</th>
<th>Stage duration</th>
<th>Deliverable</th>
<th>Delivery month</th>
<th>Lump sum</th>
</tr>
</thead>
<tbody>
<tr>
<td>Stage 1: Individual Mentoring Plan (IMP) and Inception</td>
<td>1 month</td>
<td>IMP and proof of concept (implementation plan)</td>
<td>End of Stage 1</td>
<td>Up to €9 000</td>
</tr>
<tr>
<td>Stage 2: Development</td>
<td>3 months</td>
<td>Prototype release</td>
<td>End of Stage 2</td>
<td>Up to €44 000</td>
</tr>
<tr>
<td>Stage 3: Release</td>
<td>2 months</td>
<td>MVP release</td>
<td>End of Stage 3</td>
<td>Up to €23 000</td>
</tr>
<tr>
<td>Total:</td>
<td>6 months</td>
<td>-</td>
<td>-</td>
<td>Up to €76 000</td>
</tr>
</tbody>
</table>
Table 1: Stages and payment schedule
**Milestone review process**
At the end of each stage, a thorough review will be carried out based on the following evaluation criteria:
- Deliverables’ quality;
- Technical performance indicators;
- Deadline Compliance.
Each criterion will be scored by the Technical Mentors from 0 to 10, and the weights of these criteria in the final score will be as follows:
- Deliverable quality (30%).
- Technical performance indicators (60%).
- Deadline Compliance (10%).
The threshold to receive payments and to pass on to the next stage is **7 points**:
- **Beneficiaries over threshold** will successfully receive the next payment and continue in the programme.
- **Beneficiaries under threshold** will be invited to leave the programme by their Technical Mentor. If this decision is ratified by the Selection Committee, then they will have to leave the programme and will not receive the payment.
The function of the Selection Committee is to review and validate each Technical Mentor's evaluation proposal, with a special focus on cases where beneficiaries have not met the threshold. The Committee will take into account all possible objective reasons for underperformance, such as external factors that may have influenced the beneficiaries' performance. The Selection Committee will then make the final decision and approve the payments.
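For illustration only, the weighted milestone score described above can be sketched as follows (the example scores are hypothetical):

```python
# Criterion weights as per the milestone review process:
# deliverable quality 30%, technical KPIs 60%, deadline compliance 10%.
WEIGHTS = {"deliverable_quality": 0.30, "technical_kpis": 0.60, "deadline_compliance": 0.10}

def milestone_score(scores):
    """Weighted milestone score: each criterion is rated 0-10 by the
    Technical Mentors; 7 points are needed to pass to the next stage."""
    return sum(scores[c] * w for c, w in WEIGHTS.items())

score = milestone_score({
    "deliverable_quality": 8,
    "technical_kpis": 7,
    "deadline_compliance": 10,
})
# 8*0.3 + 7*0.6 + 10*0.1 = 7.6 -> above the 7-point threshold
```

A beneficiary with these hypothetical scores would pass the stage; a technical-KPI score of 5 with the same other scores would yield 6.4 and fall below the threshold.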
**Payment schedule**
As a selected grantee, you will receive a fixed lump sum of up to €76 000. The lump sum funding is a simplified method of settling expenses in projects financed by Horizon Europe funds. As a grantee, you will not be required to present strictly defined accounting documents (e.g. invoices) to prove the costs incurred. However, you will need to demonstrate that the project implementation is in line with the set milestones (i.e. KPIs/Deliverables, see [FAQ, Section 4](#)), which will be defined in the Individual Mentoring Plan at the beginning of the programme. We will carefully assess your progress and the quality of your work during the review process, not your accounting. The lump sum method does not exempt you from collecting documentation to confirm the costs under fiscal regulations.
For a more detailed payment schedule please check the [FAQ](#).
### 6. Contact us
**How can we help you?**
If you have any questions regarding the application process, please feel free to email us at nephelehelpdesk@fundingbox.com, or post your questions on the [Helpdesk Space](#). In case of any technical issues or problems, please include the following information in your message:
- Your username, telephone number and your email address;
- Details of the specific problem (error messages you encountered, bug descriptions, e.g. a dropdown list that isn't working, etc.).
**Complaints**
If you believe that a mistake has been made after receiving the results of one of the evaluation phases (when foreseen), you may submit a complaint. To do so please email us your complaint in English at nephelehelpdesk@fundingbox.com and include the following information:
- Your contact details, including email address,
- The subject of the complaint,
- Information and evidence regarding the alleged breach.
You have three calendar days to submit your complaint, starting from the day after the communication was sent. We will review your complaint within seven calendar days of its receipt. If we need more time to assess your complaint, we will notify you by email of the extension. Please note that we will not review anonymous complaints or complaints with incomplete information.
Please be aware that the evaluation is conducted by experts in the relevant field, and we do not interfere with their assessment. Therefore, we will not evaluate complaints related to the results of the evaluation other than those related to the mistakes in the evaluation of the first eligibility criteria.
### 7. Last but not least - final provisions
Any matters not covered by this guide will be governed by Polish law and the rules related to the Horizon Europe Programme and EU grants. Please take into account that we make every effort to keep all provided data confidential. However, for the avoidance of doubt, you are solely responsible for indicating your confidential information as such.
Your IPR will remain your property.
For the selected grantees, the Subgrant agreement will include the set of obligations towards the European Commission (for example: promoting the project and giving visibility to the EU funding, maintaining confidentiality, understanding potential controls by the EC/ECA, EPPO and OLAF).
The NEPHELE Consortium might cancel the call at any time, change its provisions or extend it. In such a case we will inform all applicants about such change. Signature of the Subgrant agreement is an initial condition to establish any obligations among applicants and any Consortium partners (with respect to the obligation of confidentiality of the application).
Did not find what you were looking for? You may want to check our Frequently Asked Questions (available on the open call website).
### 8. Extra hints before you submit your proposal
A proposal takes time and effort, and we know it. Here are a few crucial points you should read before submitting your proposal.
- Is your project in line with what NEPHELE is looking for? You are not sure? You can consult this section and this one.
- Did you present your project in a way that will convince evaluators? Not sure if you did? Go back to this section.
- Is your project fulfilling all eligibility requirements described in the guide? Check again this section.
- Are you sure you are able to cope with our process of the Subgrant agreement signature and payment arrangements for selected proposals? You may want to go over this section.
- Do you need extra help? Contact us
And as a bonus: You can read our R.E.C.I.P.E. for an outstanding European Funding Opportunity application for additional advice. Good luck!
### Annex 1: Software repository and documentation
The software release of VOStack is made available at the URL: https://gitlab.eclipse.org/eclipse-research-labs/nephele-project. Details regarding the basic definitions of the VOs and cVOs and the architectural approach followed in VOStack are available at the URL: https://netmode.gitlab.io/vo-wot/index.html. It should be noted that, in accordance with the VOStack specifications, we have made available two software implementations. Both implementations adhere to the same principles and core Application Programming Interfaces (APIs), but each is aligned with specifications from a different standardisation group: the W3C Web of Things (WoT) (https://www.w3.org/WoT/) and the Open Mobile Alliance (OMA) Lightweight M2M (LWM2M) (https://omaspecworks.org/what-is-oma-specworks/iot/lightweight-m2m-lwm2m/).
Specifically, for the implementation that is aligned with the W3C WoT:
- The software repository is available at the URL: https://gitlab.eclipse.org/eclipse-research-labs/nephele-project/vo-wot
- The documentation is available at the URL: https://netmode.gitlab.io/vo-wot/index.html#
For the implementation that is aligned with the OMA LWM2M:
- The software repository and the documentation are available at the URL: https://gitlab.eclipse.org/eclipse-research-labs/nephele-project/vo-lwm2m
In your application, you may choose to work with either of the two provided implementations.
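Purely as orientation for the WoT-aligned implementation, the sketch below shows a minimal W3C WoT Thing Description (TD) for a hypothetical temperature-sensor VO. The VO name and endpoint URL are illustrative assumptions of ours, not part of the call documentation; consult the vo-wot documentation linked above for the actual interfaces.

```python
import json

# Minimal, illustrative W3C WoT Thing Description for a hypothetical
# temperature-sensor VO; field names follow the W3C WoT TD 1.0 vocabulary.
thing_description = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "temperature-sensor-vo",  # hypothetical VO name
    "securityDefinitions": {"nosec_sc": {"scheme": "nosec"}},
    "security": ["nosec_sc"],
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "degreeCelsius",
            "readOnly": True,
            # hypothetical endpoint exposed by the VO
            "forms": [{"href": "http://example.org/vo/properties/temperature"}],
        }
    },
}

td_json = json.dumps(thing_description, indent=2)
```

A TD like this is what consumers of a VO would retrieve to discover its properties, actions and events in an interoperable way.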
### Annex 2: Focus areas
**Development of VOs and optionally cVOs along with a set of generic or device-specific functions based on VOStack.**
**Development of Digital Twins based on the cVO concept.**
In this focus area, we consider applications where a DT is developed to support analysis/simulation/emulation processes over the data collected by the considered IoT devices. The DT has to be developed based on the cVO concept that interacts with one or more VOs and provides advanced data management functionalities, considering context representation and context awareness aspects.
**Development of VO network and/or security management mechanisms for the VOStack.**
In this focus area, we consider applications where specific mechanisms are going to be developed for enhancing the security and/or network management mechanisms supported by VOStack. For the networking part, among others, mechanisms can be examined for dynamic routing in ad-hoc networks, time-sensitive networking (TSN) functionalities, as well as integration of VOStack with Software-Defined Networking (SDN) mechanisms. For the security part, we consider standardised mechanisms (e.g., EDHOC, ACE-OAuth framework) for the authentication, authorisation and protected communication among the VOs/cVOs and the IoT devices, and the secure exchange of data among the VOs based on the data made available in their data store. The development of security mechanisms should be in the Python programming language.
Development of semantic interoperability mechanisms (e.g., from W3C WoT to NGSI-LD, from W3C WoT to OMA LwM2M and vice-versa) for the VOStack.
In this focus area, we consider the development of mechanisms that can support semantic translation of IoT data represented based on different semantic models. Such models include - among others - the W3C WoT specifications, the NGSI-LD specifications, the OMA LWM2M specifications, and the oneM2M specifications. Furthermore, in this focus area, mechanisms that examine the interplay between VOs/cVOs and the IoT devices in terms of execution of Machine Learning (ML) processes and/or semantic data management are considered.
Annex 3: Project’s use cases
NEPHELE will launch four internal use cases in various vertical industries (logistics, smart buildings, emergency, healthcare) to validate, evaluate and demonstrate the innovative characteristics of the proposed architectural approach.
More info on these use cases is available on the project’s main website: https://nephele-project.eu/use-cases.
CS 33
Virtual Memory
The concept of the address space is fundamental in most of today’s operating systems. Threads of control executing in different address spaces are protected from one another, since none of them can reference the memory of any of the others. In most systems (such as Unix), the operating system resides in an address space that is shared with all processes, but protection is employed so that user threads cannot access the operating system. What is crucial in the implementation of the address-space concept is the efficient management of the underlying primary and secondary storage.
Early approaches to managing the address space were concerned primarily with protecting the operating system from the user. One technique was the hardware-supported concept of the *memory fence*: an address was established below which no user mode access was allowed. The operating system was placed below this point in memory and was thus protected from the user.
The memory-fence approach protected the operating system, but did not protect user processes from one another. (This wasn't an issue for many systems; there was only one user process at a time.) Another technique, still employed in some of today's systems, is the use of base and bounds registers to restrict a process's memory references to a certain range. Each address generated by a user process was first compared with the value in the bounds register to make certain that it did not reference a location beyond the process's range of memory, and then the value in the base register was added to it, ensuring that it did not reference a location before the process's range of memory.
A further advantage of this technique was to ensure that a process would be loaded into what appeared to be location 0—thus no relocation was required at load time.
Swapping is a technique, still in use today, in which the images of entire processes are transferred back and forth between primary and secondary storage. An early use of it was for (slow) time-sharing systems: when a user paused to think, his or her process was swapped out and that of another user was swapped in. This allowed multiple users to share a system that employed only the memory fence for protection.
Base and bounds registers made it feasible to have a number of processes in primary memory at once. However, if one of these processes was inactive, swapping allowed the system to swap this process out and swap another process in. Note that the use of the base register is very important here: without base registers, after a process is swapped out, it would have to be swapped into the same location in which it resided previously.
The concept of overlays is similar to the concept of swapping, except that it applies to pieces of images rather than whole images and the user is in charge. Say we have 100 kilobytes of available memory and a 200-kilobyte program. Clearly, not all the program can be in memory at once. The user might decide that one portion of the program should always be resident, while other portions of the program need be resident only for brief periods. The program might start with routines A and B loaded into memory. A calls B; B returns. Now A wants to call C, so it first reads C into the memory previously occupied by B (it overlays B), and then calls C. C might then want to call D and E, though there is only room for one at a time. So C first calls D, D returns, then C overlays D with E and then calls E.
The advantage of this technique is that the programmer has complete control of the use of memory and can make the necessary optimization decisions. The disadvantage is that the programmer must make the necessary decisions to make full use of memory (the operating system doesn’t help out). Few programmers can make such decisions wisely, and fewer still want to try.
One way to look at virtual memory is as an automatic overlay technique: processes “see” an address space that is larger than the amount of real memory available to them; the operating system is responsible for the overlaying.
Put more abstractly (and accurately), virtual memory is the support of an address space that is independent of the size of primary storage. Some sort of mapping technique must be employed to map virtual addresses to primary and secondary stores. In the typical scenario, the computer hardware maps some virtual addresses to primary storage. If a reference is made to an unmapped address, then a fault occurs (a page fault) and the operating system is called upon to deal with it. The operating system might then find the desired virtual locations on secondary storage (such as a disk) and transfer them to primary storage. Or the operating system might decide that the reference is illegal and deliver an addressing exception to the process.
As with base and bounds registers, the virtual memory concept allows us to handle multiple processes simultaneously, with the processes protected from one another.
Virtual memory (what the program sees) is divided into fixed-size pages (on the x86 these are usually 4 kilobytes in size). Real memory (DRAM) is also divided into fixed-size pieces, called page frames (though they’re often referred to simply as pages). A memory map, implemented in hardware and often called a page table, translates references to virtual-memory pages into references to real-memory page frames. In general, virtual memory is larger than real memory, thus not all pages can be mapped to page frames. Those that are not are said to have invalid translations.
A page table is an array of page table entries. Suppose we have, as is the usual case for the x86, a 32-bit virtual address and a page size of 4096 bytes. The 32-bit address might be split into two parts: a 20-bit page number and a 12-bit offset within the page. When a thread generates an address, the hardware uses the page-number portion as an index into the page-table array to select a page-table entry, as shown in the picture. If the page is in primary storage (i.e. the translation is valid), then the validity bit in the page-table entry is set, and the page-frame-number portion of the page-table entry is the high-order bits of the location in primary memory where the page resides. (Primary memory is thought of as being subdivided into pieces called page frames, each exactly big enough to hold a page; the address of each of these page frames is at a “page boundary,” so that its low-order bits are zeros.) The hardware then appends the offset from the original virtual address to the page-frame number to form the final, real address.
If the validity bit of the selected page-table entry is zero, then a page fault occurs and the operating system takes over. Other bits in a typical page-table entry include a reference bit, which is set by the hardware whenever the page is referenced, and a modified bit, which is set whenever the page is modified. We will see how these bits are used later in this lecture. The page-protection bits indicate who is allowed access to the page and what sort of access is allowed. For example, the page can be restricted for use only by the operating system, or a page containing executable code can be write-protected, meaning that read accesses are allowed but not write accesses.
Quiz 1
How many $2^{12}$-byte pages fit in a 32-bit address space?
a) a bit over a thousand
b) a bit over a million
c) a bit over a billion
d) none of the above
VM is Your Friend ...
- Not everything has to be in memory at once
- pages brought in (and pushed out) when needed
- unallocated parts of the address space consume no memory
» e.g., hole between stack and dynamic areas
- What’s mine is not yours (and vice versa)
- address spaces are disjoint
- Sharing is ok though ...
- address spaces don’t have to be disjoint
» a single page frame may be mapped into multiple processes
- I don’t trust you (or me)
- access to individual pages can be restricted
» read, write, execute, or any combination
In the not-all-that-distant past, 4 megabytes of memory would have cost many tens of thousands of dollars.
Page-Table Size
- Consider a full $2^{32}$-byte address space
- assume 4096-byte ($2^{12}$-byte) pages
- 4 bytes per page-table entry
- the page table would consist of $2^{32}/2^{12} = 2^{20}$ entries
- its size would be $2^{22}$ bytes (or 4 megabytes)
» at $100/gigabyte
• around 40 cents
- For a $2^{64}$-byte address space
- assume 4096-byte ($2^{12}$-byte) pages
- 8 bytes per page-table entry
- the page table would consist of $2^{64}/2^{12} = 2^{52}$ entries
- its size would be $2^{55}$ bytes (or 32 petabytes)
» at $1/gigabyte
• over $33 million
The IA32 architecture employs a two-level page table providing a means for reducing the memory requirements of the address map. The high-order 10 bits of the 32-bit virtual address are an index into what’s called the page directory table. Each of its entries refer to a page table, whose entries are indexed by the next 10 bits of the virtual address. Its entries refer to individual pages; the offset within the page is indexed by the low-order 12 bits of the virtual address. The current page directory is pointed to by a special register known as CR3 (control register 3), whose contents may be modified only in privileged mode. The page directory must reside in real memory when the address space is in use, but it is relatively small (1024 4-byte entries: it’s exactly one page in length). Though there are potentially a large number of page tables, only those needed to satisfy current references must be in memory at once.
Quiz 2
Can a page start at a virtual address that's not divisible by the page size?
a) yes
b) no
For Linux on the IA32, the OS kernel occupies the top quarter of the address space and is mapped into every process (though it may not be accessed in user mode). Each user process is mapped into the bottom three quarters of the address space; only one is mapped at a time on each processor.
Each process has its own page-directory table describing its address space. The top quarter (256 entries) is the same for all processes and describes the OS kernel’s mappings. The bottom three quarters (768 entries) are, in general, private to the process and describe its mappings.
For the x86-64, four levels of translation are done (the high-order 16 bits of the address are not currently used: the hardware requires that these 16 bits all be equal to bit 47), thus it really supports “only” a 48-bit address space. Note that only the “page map table” must reside in real memory at all times. The other tables need be resident only when necessary.
Alternatively, there may be only three levels of page tables, ending with the page-directory table and 2MB pages. Both 2MB and 4KB pages may coexist in the same address space; which is being used is indicated in the associated page-directory-table entry.
The hardware also supports 1 GB pages by eliminating the page-directory table. Not many operating systems (if any) yet take advantage of this.
Why Multiple Page Sizes?
- **Internal fragmentation**
- for a region composed of 4KB pages, average internal fragmentation is 2KB
- for a region composed of 1GB pages, average internal fragmentation is 512MB
- **Page-table overhead**
- larger page sizes have fewer page tables
» less overhead in representing mappings
Recall that, in current implementations of the x86-64 architecture, only 48 bits of virtual address are used. Furthermore, the high-order 16 bits must be equal to bit 47. Thus the legal addresses are those at the top and at the bottom of the address space. The top addresses are used for the OS kernel, and thus mapped into all processes. The bottom addresses are used for each user process. The addresses in the middle (most of the address space; the slide is not drawn to scale!) are illegal and generate faults if used.
The reason for doing things this way (i.e., for the restrictions on the high-order bits) is to force the kernel to be at the top of the address space, allowing growth of the user portion as more virtual-address bits are supported.
Performance
- Page table resides in real memory (DRAM)
- A 32-bit virtual-to-real translation requires two accesses to page tables, plus the access to the ultimate real address
- three real accesses for each virtual access
- 3X slowdown!
- A 64-bit virtual-to-real translation requires four accesses to page tables, plus the access to the ultimate real address
- 5X slowdown!
To speed up virtual-to-real translation, a special cache of recent translations is maintained: the translation lookaside buffer (TLB). It resides on the chip, one per core and hyperthread. The TLB shown in the slide is a two-way set-associative cache, as discussed in lecture 17. This one assumes a 32-bit virtual address with a 4K page. Things are more complicated when multiple page sizes are supported. For example, is there just one entry for a large page that covers its entire range of addresses, or is a large page dealt with by putting multiple entries into the cache, each covering a small-page-sized piece of the large page? Both approaches are not only possible, but done.
Quiz 3
Recall that there is a 5x slowdown on memory references via virtual memory on the x86-64. If all references are translated via the TLB, the slowdown will be
a) 1x
b) 2x
c) 3x
d) 4x
OS Role in Virtual Memory
- Memory is like a cache
- quick access if what’s wanted is mapped via page table
- slow if not — OS assistance required
- OS
- make sure what’s needed is mapped in
- make sure what’s no longer needed is not mapped in
Mechanism
- Program references memory
- if reference is mapped, access is quick
» even quicker if translation in TLB and referent in on-chip cache
- if not, page-translation fault occurs and OS is invoked
» determines desired page
» maps it in, if legal reference
Three issues concerning the mechanism for caching are the following: the *fetch policy*, which governs when items are fetched to go into the cache, the *placement policy*, which governs where the fetched items are placed in the cache, and the *replacement policy*, which governs when and which items are removed from the cache (and perhaps written back to their source).
Hardware Caches
- **Fetch policy**
- when are items put in the cache?
» when they’re referenced
» prefetch might be possible (e.g., for sequential access)
- **Placement policy**
- where do they go in the cache?
» usually determined by cache architecture
» if there’s a choice, it’s typically a random choice
- **Replacement policy**
- what’s removed to make room?
» usually determined by cache architecture
» if there’s a choice, it’s typically a random choice
Software Caches
- **Fetch policy**
- when are items put in the cache?
» when they’re referenced
» prefetch might be easier than for hardware caches
- **Placement policy**
- where do they go in the cache?
» usually doesn’t matter (no memory is more equal than others)
- **Replacement policy**
- what’s removed to make room?
» would like to remove that whose next use is farthest in the future
» instead, remove that whose last reference was farthest in the past
The (kernel) thread that maintains the free page-frame list is typically called the pageout daemon. Its job is to make certain that the free page-frame list has enough page frames on it. If the size of the list drops below some threshold, then the pageout daemon examines those page frames that are being used and selects a number of them to be freed. Before freeing a page, it must make certain that a copy of the current contents of the page exists on secondary storage. So, if the page has been modified since it was brought into primary storage (easily determined if there is a hardware-supported modified bit), it must first be written out to secondary storage. In many systems, the pageout daemon groups such pageouts into batches, so that a number of pages can be written out in a single operation, thus saving disk time. Unmodified, selected pages are transferred directly to the free page-frame list, modified pages are put there after they have been written out.
In most systems, pages in the free list get a “second chance” — if a thread in a process references such a page, there is a page fault (the page frame has been freed and could be used to hold another page), but the page-fault handler checks to see if the desired page is still in primary storage, but in the free list. If it is in the free list, it is removed and given back to the faulting process. We still suffer the overhead of a trap, but there is no wait for I/O.
The OS can keep track of the history of a page frame by use of two bits in each page-table entry: the *modified* bit, which is set by hardware whenever the associated page frame is modified, and the *referenced* bit, which is set by hardware whenever the associated page is accessed (via either a load or a store).
A common approach for determining which page frames are not in use is known as the clock algorithm. All active page frames are conceptually arranged in a circularly linked list. The page-out thread slowly traverses the list. In the “one-handed” version of the clock algorithm, each time the thread encounters a page it checks the reference bit in the corresponding translation entry: if the bit is set, it clears it; if the bit is clear, it adds the page to the free list (writing it back to secondary storage first, if necessary).
A problem with the one-handed version is that, in systems with large amounts of primary storage, it might take too long for the page-out thread to work its way all around the list of page frames before it can recognize that a page has not been recently referenced. In the two-handed version of the clock algorithm, the page-out thread implements a second hand some distance behind the first. The front hand simply clears reference bits. The second (back) hand removes those pages whose reference bits have not been set to one by the time the hand reaches the page frame.
Why is virtual memory used?
More VM than RM
File I/O in Unix, and in most operating systems, is not done directly to the disk drive, but through intermediary buffers, known as the buffer cache, in the operating system's address space. This cache has two primary functions. The first, and most important, is to make possible concurrent I/O and computation within a Unix process. The second is to insulate the user from physical disk-block boundaries.
From a user process’s point of view, I/O is *synchronous*. By this we mean that when the I/O system call returns, the system no longer needs the user-supplied buffer. For example, after a write system call, the data in the user buffer has either been transmitted to the device or copied to a kernel buffer — the user can now scribble over the buffer without affecting the data transfer. Because of this synchronization, from a user process’s point of view, no more than one I/O operation can be in progress at a time.
The buffer cache provides a kernel implementation of multibuffered I/O, and thus concurrent I/O and computation are made possible.
The use of *read-aheads* and *write-behinds* makes concurrent I/O and computation possible: if the block currently being fetched is block $i$ and the previous block fetched was block $i-1$, then block $i+1$ is also fetched. Modified blocks are normally not written out synchronously; instead they are written back asynchronously, sometime after they were modified.
Traditional I/O
[Figure: two user processes issue interleaved read calls on files f1, f2, and f3; each call is satisfied from per-file pages held in the buffer cache in kernel memory.]
Mapped File I/O
[Figure: pages 0-7 of File 1 on disk appear, via page frames in real memory, directly in Process 1's virtual memory; the kernel's virtual-memory system performs the I/O.]
Multi-Process Mapped File I/O
[Figure: Process 2 maps the same File 1; pages 0-7 in both processes' virtual memories refer to the same page frames in real memory, backed by the file on disk.]
Traditional I/O involves explicit calls to read and write, which in turn means that data is accessed via a buffer; in fact, two buffers are usually employed: data is transferred between a user buffer and a kernel buffer, and between the kernel buffer and the I/O device.
An alternative approach is to map a file into a process’s address space: the file provides the data for a portion of the address space and the kernel’s virtual-memory system is responsible for the I/O. A major benefit of this approach is that data is transferred directly from the device to where the user needs it; there is no need for an extra system buffer.
Mmap System Call
```c
void *mmap(
    void  *addr,   /* where to map file (0 if don’t care)    */
    size_t len,    /* how much to map                        */
    int    prot,   /* memory protection (read, write, exec.) */
    int    flags,  /* shared vs. private, plus more          */
    int    fd,     /* which file                             */
    off_t  off     /* starting from where                    */
);
```
Mmap maps the file given by `fd`, starting at position `off`, for `len` bytes, into the caller’s address space starting at location `addr`
- `len` is rounded up to a multiple of the page size
- `off` must be page-aligned
- if `addr` is zero, the kernel assigns an address
- if `addr` is positive, it is a suggestion to the kernel as to where the mapped file should be located (it usually will be aligned to a page). However, if `flags` includes MAP_FIXED, then `addr` is not modified by the kernel (and if its value is not reasonable, the call fails)
- the call returns the address of the beginning of the mapped file
The flags argument must include either MAP_SHARED or MAP_PRIVATE (but not both). If it’s MAP_SHARED, then the mapped portion of the caller’s address space contains the current contents of the file; when the mapped portion of the address space is modified by the process, the corresponding portion of the file is modified.
However, if `flags` includes MAP_PRIVATE, then the idea is that the mapped portion of the address space is initialized with the contents of the file, but that changes made to the mapped portion of the address space by the process are private and not written back to the file. The details are a bit complicated: as long as the mapping process does not modify any of the mapped portion of the address space, the pages contained in it contain the current contents of the corresponding pages of the file. However, if the process modifies a page, then that particular page no longer contains the current contents of the corresponding file page, but contains whatever modifications are made to it by the process. These changes are not written back to the file and not shared with any other process that has mapped the file. It’s unspecified what the situation is for other pages in the mapped region after one of them is modified. Depending on the implementation, they might continue to contain the current contents of the corresponding pages of the file until they, themselves, are modified. Or they might also be treated as if they’d just been written to and thus no longer be shared with others.
The `mmap` system call maps a file into a process’s address space. All processes mapping the same file can share the pages of the file.
There are a couple options for how modifications to mmapped files are dealt with. The most straightforward is the *share* option in which changes to mmapped file pages modify the file and hence the changes are seen by the other processes who have share-mapped the file.
The other option is to *private*-map the file: changes made to mmapped file pages do not modify the file. Instead, when a page of a file is first modified via a private mapping, a copy of just that page is made for the modifying process, but this copy is not seen by other processes, nor does it appear in the file.
In the slide, the process on the left has private-mapped the file. Thus its changes to the mapped portion of the address space are made to a copy of the page being modified.
Here we map the contents of a file containing a dataObject_t into the caller’s address space, allowing it both read and write access. Note that mapping the file into memory does not cause any immediate I/O to take place. The operating system will perform the I/O when necessary, according to its own rules.
When a process calls fork and creates a child, the child’s address space is normally a copy of the parent’s. Thus changes made by the child to its address space will not be seen in the parent’s address space (as shown in the left-hand column). However, if there is a region in the parent’s address space that has been mmaped using the MAP_SHARED flag, and subsequently the parent calls fork and creates a child, the mmaped region is not copied but is shared by parent and child. Thus changes to the region made by the child will be seen by the parent (and vice versa).
CPS104
Computer Organization and Programming
Lecture 18: Cache Memory
Nov. 1, 1999
Dietolf (Dee) Ramm
http://www.cs.duke.edu/~dr/cps104.html
Outline of Today’s Lecture
° Direct Mapped Cache (review).
° Two-Way Set Associative Cache
° Fully Associative cache
° Replacement Policies
° Write Strategies
Direct Mapped Cache
For a Cache of \( 2^M \) bytes with block size of \( 2^L \) bytes
- There are \( 2^{M-L} \) cache blocks,
- Lowest \( L \) bits of the address are Block-Offset bits
- Next \( (M - L) \) bits are the Cache-Index.
- The last \( (32 - M) \) bits are the Tag bits.
<table>
<thead>
<tr>
<th>32-M bits Tag</th>
<th>M-L bits Cache Index</th>
<th>L bits block offset</th>
</tr>
</thead>
</table>
Data Address
\[
\text{Cache-Index} = (\text{<Address>} \mod (\text{Cache\_Size}))/ \text{Block\_Size} \\
\text{Block-Offset} = \text{<Address>} \mod (\text{Block\_Size}) \\
\text{Tag} = \text{<Address>} / (\text{Cache\_Size})
\]
Example: 1-KB Cache with 32B blocks:
Cache Index = (Address Mod (1024))/ 32
Block-Offset = Address Mod (32)
Tag = Address / (1024)
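The formulas above translate directly into C. A minimal sketch for the 1-KB, 32-byte-block cache of this example; the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

enum { CACHE_SIZE = 1024, BLOCK_SIZE = 32 };

/* Field extraction for a direct mapped cache, matching the formulas above. */
static uint32_t cache_tag(uint32_t addr)    { return addr / CACHE_SIZE; }
static uint32_t cache_index(uint32_t addr)  { return (addr % CACHE_SIZE) / BLOCK_SIZE; }
static uint32_t block_offset(uint32_t addr) { return addr % BLOCK_SIZE; }
```

Since the sizes are powers of two, the divisions and modulos reduce to shifts and masks in hardware.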
<table>
<thead>
<tr>
<th>22 bits Tag</th>
<th>5 bits Cache Index</th>
<th>5 bits block offset</th>
</tr>
</thead>
</table>
Address
[Slide diagram: the direct mapped cache array. Each entry holds a 22-bit Cache Tag and a 32-byte data block (Byte 31 ... Byte 0); 1K = 2^{10} = 1024 bytes, 2^{5} = 32 bytes per block.]
Example: 1KB Direct Mapped Cache with 32B Blocks
- For a $2^{10}$ byte cache with 32-byte blocks:
- The uppermost $22 = (32 - 10)$ address bits are the Cache Tag
- The lowest 5 address bits are the Byte Select (Block Size = $2^5$)
- The next 5 address bits (bit5 - bit9) are the Cache Index
Example: 1K Direct Mapped Cache
[Slide diagram: an access with Cache Tag 0x0002fe, Cache Index 0x00, Byte Select 0x00; the indexed entry has Valid Bit 0, so the tags cannot match: Cache Miss.]
©GK&DR Fall 1999
Example: 1K Direct Mapped Cache
[Slide diagram: the miss is serviced; a new block of data is brought in, the Valid Bit is set, and the entry's tag becomes 0x0002fe.]
Example: 1K Direct Mapped Cache
[Slide diagram: an access with Cache Tag 0x000050, Cache Index 0x01, Byte Select 0x08; the indexed entry has Valid Bit 1 and stored tag 0x000050, so the tags match: Cache Hit.]
Example: 1K Direct Mapped Cache
[Slide diagram: an access with Cache Tag 0x002450, Cache Index 0x02, Byte Select 0x04; the indexed entry is valid but holds a different tag: Cache Miss.]
Example: 1K Direct Mapped Cache
[Slide diagram: the miss is serviced; a new block of data is brought in and the entry's tag is replaced with 0x002450.]
Block Size Tradeoff
° In general, larger block sizes take advantage of spatial locality **BUT**:
• Larger block size means larger miss penalty:
- Takes longer time to fill up the block
• If block size is too big relative to cache size, miss rate will go up
- Too few cache blocks
° In general, Average Access Time:
• Hit Time $\times (1 - \text{Miss Rate}) + \text{Miss Penalty} \times \text{Miss Rate}$
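The average access time formula is easy to check numerically. A trivial sketch (the cycle counts are illustrative):

```c
#include <assert.h>

/* Average access time = Hit_Time x (1 - Miss_Rate) + Miss_Penalty x Miss_Rate */
static double avg_access_time(double hit_time, double miss_penalty, double miss_rate)
{
    return hit_time * (1.0 - miss_rate) + miss_penalty * miss_rate;
}
```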
A N-way Set Associative Cache
- **N-way set associative**: N entries for each Cache Index
- N direct mapped caches operating in parallel
- **Example**: Two-way set associative cache
- Cache Index selects a “set” from the cache
- The two tags in the set are compared in parallel
- Data is selected based on the tag result
Advantages of Set associative cache
- Higher **Hit rate** for the same cache size.
- Fewer **Conflict Misses**.
- Can have a larger cache but keep the index smaller *(same size as virtual page index)*
Disadvantage of Set Associative Cache
° N-way Set Associative Cache versus Direct Mapped Cache:
• N comparators vs. 1
• Extra MUX delay for the data
• Data comes AFTER Hit/Miss decision and set selection
° In a direct mapped cache, Cache Block is available BEFORE Hit/Miss:
• Possible to assume a hit and continue. Recover later if miss.
And yet Another Extreme Example: Fully Associative cache
° Fully Associative Cache -- push the set associative idea to its limit!
• Forget about the Cache Index
• Compare the Cache Tags of all cache entries in parallel
• Example: with a Block Size of 32B, we need N 27-bit comparators
° By definition: Conflict Miss = 0 for a fully associative cache
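A software model of a fully associative lookup makes the idea concrete: with no Cache Index, every valid tag is a candidate. This sketch compares tags sequentially, whereas real hardware uses N comparators in parallel; names and sizes are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { NUM_ENTRIES = 32 };

struct fa_entry { bool valid; uint32_t tag; };

/* Returns the matching entry's index, or -1 on a miss. */
static int fa_lookup(const struct fa_entry cache[], uint32_t addr)
{
    uint32_t tag = addr >> 5;          /* drop the 5 Byte Select bits */
    for (int i = 0; i < NUM_ENTRIES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    return -1;
}
```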
[Slide diagram: fully associative cache. The address splits into a 27-bit Cache Tag (bits 31..5) and a 5-bit Byte Select (bits 4..0); the address tag is compared against every entry's tag in parallel, and each entry holds a 32-byte data block (Byte 31 ... Byte 0).]
Sources of Cache Misses
° **Compulsory** (cold start or process migration, first reference): first access to a block
• “Cold” fact of life: not a whole lot you can do about it
° **Conflict** (collision):
• Multiple memory locations mapped to the same cache location
• Solution 1: increase cache size
• Solution 2: increase Associativity
° **Capacity**:
• Cache cannot contain all blocks accessed by the program
• Solution: increase cache size
° **Invalidation**: other process (e.g., I/O) updates memory
## Sources of Cache Misses
<table>
<thead>
<tr>
<th></th>
<th>Direct Mapped</th>
<th>N-way Set Associative</th>
<th>Fully Associative</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Cache Size</strong></td>
<td>Big</td>
<td>Medium</td>
<td>Small</td>
</tr>
<tr>
<td><strong>Compulsory Miss</strong></td>
<td>Same</td>
<td>Same</td>
<td>Same</td>
</tr>
<tr>
<td><strong>Conflict Miss</strong></td>
<td>High</td>
<td>Medium</td>
<td>Zero</td>
</tr>
<tr>
<td><strong>Capacity Miss</strong></td>
<td>Low(er)</td>
<td>Medium</td>
<td>High</td>
</tr>
<tr>
<td><strong>Invalidation Miss</strong></td>
<td>Same</td>
<td>Same</td>
<td>Same</td>
</tr>
</tbody>
</table>
**Note:**
If you are going to run “billions” of instructions, Compulsory Misses are insignificant.
The Need to Make a Decision!
° **Direct Mapped Cache:**
- Each memory location can only be mapped to 1 cache location
- No need to make any decision :-)
- Current item replaces the previous item in that cache location
° **N-way Set Associative Cache:**
- Each memory location has a *choice of N* cache locations
° **Fully Associative Cache:**
- Each memory location can be placed in *ANY* cache location
° **Cache miss in a N-way Set Associative or Fully Associative Cache:**
- Bring in new block from memory
- Throw out a cache block to make room for the new block
- We need to make a decision on *which block to throw out!*
Cache Block Replacement Policy
- **Random Replacement:**
- Hardware randomly selects a cache block out of the set and replaces it.
- **Least Recently Used:**
- Hardware keeps track of the access history
- Replace the entry that has not been used for the longest time.
- For a two-way set associative cache, one bit per set suffices for LRU replacement.
- Example of a Simple "Pseudo" Least Recently Used Implementation:
- Assume 64 Fully Associative Entries
- Hardware replacement pointer points to one cache entry
- Whenever an access is made to the entry the pointer points to:
- Move the pointer to the next entry
- Otherwise: do not move the pointer
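The pointer scheme above can be sketched in a few lines of C. This is a software model of the hardware mechanism; the names are illustrative:

```c
#include <assert.h>

enum { ENTRIES = 64 };

static int victim = 0;   /* the hardware replacement pointer */

/* On an access to the entry the pointer points to, advance the pointer;
   otherwise leave it alone. The entry to replace is always 'victim'. */
static void plru_access(int entry)
{
    if (entry == victim)
        victim = (victim + 1) % ENTRIES;
}

static int plru_victim(void) { return victim; }
```

The pointer thus always designates an entry that was not the most recently accessed one, approximating LRU with one pointer instead of a full access history.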
Cache Write Policy: Write Through versus Write Back
° Cache read is much easier to handle than cache write:
• Instruction cache is much easier to design than data cache
° Cache write:
• How do we keep data in the cache and memory consistent?
° Two options (decision time again :-)
• **Write Back**: write to cache only. Write the cache block to memory when that cache block is being replaced on a cache miss.
- Need a “dirty bit” for each cache block
- Greatly reduce the memory bandwidth requirement
- Control can be complex
• **Write Through**: write to cache and memory at the same time.
- What!!! How can this be? Isn’t memory too slow for this?
Write Buffer for Write Through
- A Write Buffer is needed between the Cache and Memory
- Processor: writes data into the cache and the write buffer
- Memory controller: write contents of the buffer to memory
- Write buffer is just a FIFO:
- Typical number of entries: 4
- Works fine if: Store frequency (w.r.t. time) \(<\) 1 / DRAM write cycle
- Memory system designer’s nightmare:
- Store frequency (w.r.t. time) \(>\) 1 / DRAM write cycle
- Write buffer saturation
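A write buffer is a plain FIFO. The sketch below models a 4-entry buffer in which a full buffer forces the processor to stall (here: the enqueue fails); all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { WB_ENTRIES = 4 };

struct write_buf {
    uint32_t addr[WB_ENTRIES], data[WB_ENTRIES];
    int head, tail, count;
};

/* Processor side: enqueue a store; fails (stall) when saturated. */
static bool wb_enqueue(struct write_buf *wb, uint32_t addr, uint32_t data)
{
    if (wb->count == WB_ENTRIES) return false;
    wb->addr[wb->tail] = addr;
    wb->data[wb->tail] = data;
    wb->tail = (wb->tail + 1) % WB_ENTRIES;
    wb->count++;
    return true;
}

/* Memory controller side: write the oldest entry to memory. */
static bool wb_drain_one(struct write_buf *wb)
{
    if (wb->count == 0) return false;
    /* ... perform the DRAM write of addr[head]/data[head] here ... */
    wb->head = (wb->head + 1) % WB_ENTRIES;
    wb->count--;
    return true;
}
```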
Write Buffer Saturation
° Store frequency (w.r.t. time) > 1 / DRAM write cycle
• If this condition exists for a long period of time (CPU cycle time too quick and/or too many store instructions in a row):
- Store buffer will overflow no matter how big you make it
- The CPU Cycle Time << DRAM Write Cycle Time
° Solution for write buffer saturation:
• Use a write back cache
• Install a second level (L2) cache:
• store compression
Write Allocate versus Not Allocate
- Assume: a 16-bit write to memory location 0x0 causes a miss
- Do we read in the block?
- Yes: Write Allocate
- No: Write Not Allocate
Four Questions for Memory Hierarchy Designers
° **Q1:** Where can a block be placed in the upper level? *(Block placement)*
° **Q2:** How is a block found if it is in the upper level? *(Block identification)*
° **Q3:** Which block should be replaced on a miss? *(Block replacement)*
° **Q4:** What happens on a write? *(Write strategy)*
What is a Sub-block?
° Sub-block:
• A unit within a block that has its own valid bit
• All sub-blocks in a block share one cache tag
• Example: 1 KB Direct Mapped Cache, 32-B Block, 8-B Sub-block
- Each cache entry will have: 32/8 = 4 valid bits
° Write miss: only the bytes in that sub-block are brought in.
• reduces cache fill bandwidth (penalty).
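The 4-valid-bits-per-entry bookkeeping can be modeled with one byte per entry; a small illustrative sketch:

```c
#include <assert.h>
#include <stdint.h>

/* 32-B block, 8-B sub-blocks: 4 valid bits per entry (bits 0..3 = SB0..SB3). */
struct subblocked_entry {
    uint32_t tag;
    uint8_t  valid;
};

static int subblock_of(uint32_t addr) { return (addr % 32) / 8; }

static int subblock_valid(const struct subblocked_entry *e, uint32_t addr)
{
    return (e->valid >> subblock_of(addr)) & 1;
}
```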
[Slide diagram: a sub-blocked cache entry. One Cache Tag is shared by four sub-blocks (SB3 ... SB0), each with its own valid bit; the entry's 32-byte data block is split into four 8-byte sub-blocks.]
Separate Instruction and Data Caches
- Separate Inst & Data Caches
- Harvard Architecture
- Can access both at same time
- Combined L2
- L2 >> L1
Cache Performance
\[ \text{CPU time} = (\text{CPU\_execution\_clock\_cycles} + \text{Memory\_stall\_clock\_cycles}) \times \text{clock\_cycle\_time} \]
\[ \text{Memory\_stall\_clock\_cycles} = \text{Memory\_accesses} \times \text{Miss\_rate} \times \text{Miss\_penalty} \]
**Example**
° Assume every instruction takes 1 cycle
° Miss penalty = 20 cycles
° Miss rate = 10%
° 1000 total instructions, 300 memory accesses
° Memory stall cycles? CPU clocks?
Cache Performance
° Memory Stall cycles = 300 * 0.10 * 20 = 600
° CPUclocks = 1000 + 600 = 1600
° 60% slower because of cache misses!
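The example's arithmetic, spelled out as a trivial sketch of the two formulas above (units are clock cycles):

```c
#include <assert.h>

static long memory_stall_cycles(long accesses, double miss_rate, long miss_penalty)
{
    return (long)(accesses * miss_rate * miss_penalty);
}

static long cpu_clocks(long exec_cycles, long stall_cycles)
{
    return exec_cycles + stall_cycles;
}
```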
Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Reducing Misses
Classifying Misses: 3 Cs
- **Compulsory**—The first access to a block is not in the cache, so the block must be brought into the cache. These are also called *cold start misses* or *first reference misses.* *(Misses in Infinite Cache)*
- **Capacity**—If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. *(Misses in Size X Cache)*
- **Conflict**—If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. These are also called *collision misses* or *interference misses.* *(Misses in N-way Associative, Size X Cache)*
Cache Performance
° Your program and caches
° Can you affect performance?
° Think about 3Cs
Reducing Misses by Compiler Optimizations
° Instructions
• Reorder procedures in memory so as to reduce misses
• Profiling to look at conflicts
• McFarling [1989] reduced cache misses by 75% on an 8KB direct mapped cache with 4 byte blocks
° Data
• Merging Arrays: improve spatial locality by single array of compound elements vs. 2 arrays
• Loop Interchange: change nesting of loops to access data in order stored in memory
• Loop Fusion: Combine 2 independent loops that have same looping and some variables overlap
• Blocking: Improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows
Merging Arrays Example
```
/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];
```
Reducing conflicts between val & key
Loop Interchange Example
```
/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];
```
Sequential accesses instead of striding through memory every 100 words
Loop Fusion Example
```
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }
```
2 misses per access to a & c vs. one miss per access
Blocking Example
```
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }
```
° Two Inner Loops:
• Read all NxN elements of z[ ]
• Read N elements of 1 row of y[ ] repeatedly
• Write N elements of 1 row of x[ ]
° Capacity Misses a function of N & Cache Size:
• If the cache can hold all 3 NxN matrices => no capacity misses; otherwise ...
° Idea: compute on BxB submatrix that fits
Blocking Example
```
/* After (B = blocking factor; x[][] assumed zero-initialized;
   min(a,b) yields the smaller of a and b) */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }
```
° Capacity Misses from $2N^3 + N^2$ to $2N^3/B + N^2$
° B called \textit{Blocking Factor}
° Conflict Misses Too?
Conflict misses in caches that are not fully associative vs. blocking size
- Lam et al. [1991] found that a blocking factor of 24 had one fifth the misses of a factor of 48, despite both fitting in the cache
Summary of Compiler Optimizations to Reduce Cache Misses
[Slide chart: performance improvement from merged arrays, loop interchange, loop fusion, and blocking, measured on vpenta (nasa7), gmty (nasa7), tomcatv, btrix (nasa7), mxm (nasa7), spice, cholesky (nasa7), and compress.]
Summary
° Cost Effective Memory Hierarchy
° Split Instruction and Data Cache
° 4 Questions
° CPU cycles/time, Memory Stall Cycles
° Your programs and cache performance
Next Time
° Virtual Memory
Proceedings of the
Fourth International Workshop on
Graph-Based Tools
(GraBaTs 2010)
Sketch-based Diagram Editors with User Assistance based on Graph Transformation and Graph Drawing Techniques
Steffen Mazanek, Christian Rutetzki, and Mark Minas
14 pages
Sketch-based Diagram Editors with User Assistance based on Graph Transformation and Graph Drawing Techniques
Steffen Mazanek, Christian Rutetzki, and Mark Minas
(Steffen.Mazanek, Christian.Rutetzki, Mark.Minas)@unibw.de
Universität der Bundeswehr München, Germany
Abstract: In recent years, tools have emerged that recognize sketched diagrams of a particular visual language. That way, the user can draw diagrams with a pen in a natural way and still has most processing capabilities available. But also in the domain of conventional diagram editors, considerable improvements have been achieved. Among other features, powerful user assistance like auto-completion has been developed, which guides the user in the construction of correct diagrams. The combination of these two developments, sketching and guidance, is the main contribution of this paper. It not only shows the feasibility and usefulness of integrating user assistance into sketching editors, but also that novel user strategies for identifying and dealing with recognition errors are made possible that way. The proposed approach heavily exploits graph transformation and drawing techniques. It was integrated into a meta-tool, which has been used to generate an editor for business process models that comprises the features described in this paper.
Keywords: sketching, meta-tools, user assistance, graph transformation, graph drawing, process models
1 Introduction
An important benefit of sketch-based diagram editors is that diagrams can be drawn with maximal freedom in a very natural way. With the appearance of powerful and permissive approaches to their subsequent recognition – among others [HD05, CMP05, CDR05] – many advantages of traditional WIMP interfaces (Window, Icon, Menu, Pointer) can be carried over. Most importantly, diagrams, once recognized, can be further processed. However, one feature of state-of-the-art conventional diagram editors, namely user assistance, has not yet been integrated into sketch tools. The user assistance we aim at guides the user in the construction of correct diagrams. Indeed, the only existing attempt in this direction we are aware of is the work on symbol completion by Costagliola et al. [CDR07]. This approach helps the user in completing individual symbols (lexical level), but the overall diagram structure (syntactical level) is not at all considered – not even to mention language semantics or pragmatics. Moreover, this approach has not been integrated into a visual environment yet. In this paper we fill this gap by integrating a user assistance component into a sketching meta-tool, i.e., a framework for generating sketch editors from a language specification. We report on the challenges that had to be addressed and how graph transformation and graph drawing techniques have been used for solving them.
Fig. 1 shows a bird’s eye view of the proposed approach, i.e., the overall architecture of sketching editors with assistance. The user, who is represented by the stickman in the middle, draws strokes, which are the basic input of most sketch recognition tools. The recognizer transforms these strokes either on-line or on the user’s request into a set of diagram components. Next, a language-specific analysis of the diagram is performed, e.g., a syntax check. For the diagram given in Fig. 1, it might be checked that there are no arrows without proper source and target components or that processing components (rectangles) are connected to data structures (ellipses) only. The result of this analysis step is passed back to the user as visual feedback.
The novel aspects of this work are surrounded by a dashed line. For the proposed approach, the analysis additionally has to return a set of suggestions for the user, e.g., how the diagram can be completed. The user can choose among these suggestions, e.g., by using a preview of the corresponding diagram changes. The selected suggestion is then integrated into the sketch. To this end, a translator component generates the set of corresponding strokes and adds them to the user’s sketch. That way, the next analysis cycle will directly consider the applied suggestion.
This paper covers the following assistance features, which are all based on syntax:
- **auto-completion**: the computation of missing diagram components that transform the incomplete diagram into a proper member of the underlying visual language,
- **auto-link**: the derivation of missing edges in graph-like languages according to node arrangement and other kinds of editing accelerators,
- **example generation**: the generation of correct example diagrams that can be explored by the user for the sake of language learning.
Suggestions that remove parts of a sketch are not considered.
Sketch tools with powerful recognizers [HD05, CMP05, CDR05] as well as tools for the computation or specification of suggestions [AHHG09, MMM08b, SBV08] already exist. Therefore, this paper focuses on the following three aspects:
- **User interaction**: how can the user invoke and control assistance,
- **Stroke generation**: how and where should the translator generate strokes from the suggestion (this actually is a graph drawing problem),
- **Dealing with recognition errors**: indeed, syntactical assistance not only provides clues for syntactical problems, but also simplifies the identification of recognition errors.
This paper is structured as follows: Sect. 2 introduces the running example language, namely business process models (BPMs). Our implementation relies on existing frameworks for sketch recognition and user assistance; Sect. 3 recapitulates their concepts. How these two approaches actually have been combined and integrated is described in Sect. 4. A discussion is provided in Sect. 5. Finally, related work is reviewed and the paper is concluded (Sect. 6 and 7).
2 Business Process Models
BPMs are used to represent the workflows within an enterprise and, thus, are a highly relevant diagrammatic language today. In recent years a standardized visual notation, the Business Process Modeling Notation BPMN [Obj09], has been developed. Since BPMs are frequently developed in creative team meetings, this language ideally should be supported by sketch editors. Fig. 2 shows a small sales process, which has been drawn and recognized by the sketching editor described in this paper. The magnified (assistance) toolbar will be described later.
BPMs basically are graphs, where the connecting arrows represent sequence flow. The example process starts with the receipt of an order, which is expressed by a start event (circle). Thereafter, the sequence flow is split by an exclusive gateway (diamond shape). If the ordered product is available, it is prepared and shipped, which is expressed by activities (rectangles). Otherwise, a notification is sent to the customer. Thereafter, the sequence flow is joined again by another gateway, and the process terminates as indicated by the end event (circle).
In the following, only well-structured BPMs are treated, i.e., we require splits and joins to be properly nested. This restriction improves the understandability of process models in the same way as structured programming improves the understandability of program code [MRA09]. For well-structured BPMs, moreover, powerful syntactical user assistance is available [MM09].
3 The Frameworks PerSUADE and DSketch
The general approach proposed in this paper (Fig. 1) is generic as it is not restricted to a particular visual language. Hence, an implementation requires frameworks for sketch recognition as well as syntax analysis and user assistance that are generic as well, i.e., they must be adaptable to different visual languages. Concretely, we have chosen the DiaGen approach [Min02] as a base, where hypergraphs are used as a model for diagrams and hypergraph grammars as a means for syntax definition. Accordingly, this formalism is introduced at first. Thereafter, the PerSUADE approach, an extension of DiaGen by syntax-based user assistance, is introduced. Finally, the sketching approach DSketch, which is also based on DiaGen, is recapitulated.
3.1 Hypergraphs and Hypergraph Grammars
Hypergraphs are generalized graphs whose edges can connect an arbitrary number of nodes. This notion of graphs allows a uniform representation of all kinds of diagrams. The key idea is that diagram components are represented by hyperedges and their attachment areas by nodes. Fig. 2 also shows the chosen hypergraph representation of the example BPM. The hyperedges are drawn as rectangular boxes and the nodes as black dots. If a hyperedge and a node are incident, they are connected by a line called tentacle. Activities and events have two attachment areas, i.e., incident nodes: one for incoming and one for outgoing sequence flow. Gateways have four attachment areas (namely their corners). Note that sequence arrows do not explicitly occur in this hypergraph representation. They are rather represented implicitly by the fact that connected components visit the same node: the source component via its outgoing tentacle, the target component via its incoming tentacle, respectively. The hypergraph shown in Fig. 2 actually is the result of a lexical analysis step, which performs such simplifications.
In *DiaGen*, hypergraph grammars are used for language definition. For this paper, only context-free ones are considered [DHK97]. Such hypergraph grammars consist of two finite sets of terminal and nonterminal hyperedge labels and a starting hypergraph that contains only a single nonterminal hyperedge. Syntax is described by a set of productions. The hypergraph language generated by a grammar is defined by the set of terminally labeled hypergraphs that can be derived from the starting hypergraph.
Fig. 3 shows the productions of a hypergraph grammar $G_{BPM}$ for very simple process models. A more comprehensive version that includes pools (process containers), different kinds of intermediate events, and embedded messages has been shown in [MM09]. The types *event*, *activity*, and *gateway* are the terminal hyperedge labels. The set of nonterminal labels consists of *Process*, *Flow*, and *FlElem*. The starting hypergraph consists of just a single *Process* edge. The application of a context-free production removes an occurrence $e$ of the hyperedge on the left-hand side of the production from the host graph and replaces it by the hypergraph $H_r$ on the right-hand side. Matching node labels of both sides of a production determine how $H_r$ has to fit in after removing $e$. Fig. 4 shows an example derivation.
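To make the replacement mechanics concrete, the following sketch models hyperedges as (label, attached-node tuple) pairs and applies one context-free production. The concrete productions of $G_{BPM}$ from Fig. 3 are not reproduced here; the sequence-style production Flow → FlElem Flow used below, and the whole API, are our own illustration (only the labels *Flow* and *FlElem* are taken from the grammar).

```python
# Illustrative sketch of applying a context-free hypergraph production:
# the nonterminal hyperedge `edge` is removed and the right-hand-side
# hypergraph is glued in via matching nodes. Not the DiaGen API.
import itertools

fresh = itertools.count(100)  # supply of ids for new internal nodes

def apply_production(hypergraph, edge, rhs):
    """Replace `edge` by the right-hand side `rhs`.

    `hypergraph` is a list of (label, nodes) pairs. In `rhs`, an int
    placeholder indexes into the nodes of `edge` (the "matching node
    labels" of the production); a string denotes a new internal node.
    """
    mapping = {}
    result = [e for e in hypergraph if e is not edge]
    _, attached = edge
    for label, placeholders in rhs:
        nodes = tuple(
            attached[p] if isinstance(p, int)
            else mapping.setdefault(p, next(fresh))
            for p in placeholders
        )
        result.append((label, nodes))
    return result

# Assumed production Flow ::= FlElem Flow, applied to a Flow hyperedge
# between nodes 0 and 1; a fresh node is created between the two parts.
flow = ("Flow", (0, 1))
g2 = apply_production([flow], flow,
                      [("FlElem", (0, "n")), ("Flow", ("n", 1))])
```

The derivation in Fig. 4 can be replayed by repeating such replacement steps until only terminally labeled hyperedges remain.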
### 3.2 User Assistance with *PerSUADE*
For visual languages defined by hypergraph grammars, *hypergraph patches* have been proposed as a means for the realization of Syntax-based User Assistance in Diagram Editors (*PerSUADE*) [MMM08b]. A patch basically describes a modification of a given hypergraph $H$ that transforms
$H$ into a valid member of the language defined by a given grammar $G$. Two different kinds of atomic modifications are considered: merging nodes and adding edges. The application of a patch for a hypergraph $H$ then corresponds to the construction of a so-called quotient hypergraph $H/\sim$ whose nodes are equivalence classes of the original nodes of $H$. Correcting patches indeed can be computed while parsing hypergraphs [MMM08a]. Consider the hypergraph $H$ given in Fig. 5 as an example. Hypergraph $H$ does not belong to the language of $G_{BPM}$, but it can be corrected by merging the nodes $n_3$ and $n_4$. It can also be corrected by inserting an activity hyperedge at the proper position. Note that there usually is an infinite number of correcting patches. Actually, according to $G_{BPM}$, an arbitrary number of activities could be inserted between the activity and the event hyperedge at the right. So, the size of desired patches, i.e., the number of additional hyperedges, must be restricted by the user. A special case of patches is the empty input hypergraph. Its patches can be used for exhaustive example generation.
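A node-merging patch can be illustrated as building the quotient hypergraph $H/\sim$ with a small union-find: every node is relabeled by a canonical representative of its equivalence class. The sketch below mirrors the example of merging $n_3$ and $n_4$; the API and the schematic hypergraph are our own illustration, not PerSUADE's.

```python
# Apply a node-merging patch by constructing the quotient hypergraph:
# nodes listed in `merges` are identified, and every hyperedge is
# rewritten over class representatives.
def merge_nodes(hypergraph, merges):
    """`hypergraph` is a list of (label, nodes) pairs; `merges` is a
    list of node pairs to identify (union-find with path halving)."""
    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for a, b in merges:
        parent[find(a)] = find(b)

    return [(label, tuple(find(n) for n in nodes))
            for label, nodes in hypergraph]

# Schematic version of the faulty hypergraph: the activity's outgoing
# node n3 and the right event's incoming node n4 are distinct...
h = [("event", ("n1", "n2")), ("activity", ("n2", "n3")),
     ("event", ("n4", "n5"))]
fixed = merge_nodes(h, [("n3", "n4")])
# ...after the patch both hyperedges visit the same node.
```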
Assistance based on hypergraph patches has been integrated into DiaGen editors as follows: The editor automatically maintains the hypergraph representation of the diagram. On the user’s request, the patch-computing parser [MMM08a] is applied to this hypergraph representation with the desired size of patches as a parameter. It computes all possible correcting hypergraph patches of this size. From those, the user has to choose via a preview functionality. The selected patch is translated to diagram modifications by a language-specific update translator. Finally, the diagram is beautified by a layout component. That way, powerful syntax-based user assistance for BPMs has been realized already [MM09] — however, only in the context of a conventional WIMP editor. A screencast is available at www.unibw.de/inf2/DiaGen/assistance/bpm.
For the computation of patches, PerSUADE can only consider the context-free part of a hypergraph [MM09]. This limitation naturally also applies to the sketch editors with guidance to be discussed in Section 4.
Sketch-based Diagram Editors with User Assistance
3.3 Diagram Recognition à la DSketch
DSketch is an extension of DiaGen that complements the conventional WIMP-based GUI of diagram editors by a drawing canvas, which readily accepts all kinds of user strokes freely entered with a stylus. The integrated recognizer allows diagrams to be analyzed and further processed [BM08c]. The main characteristics of this approach are: (i) Few restrictions on drawing components, e.g., a rectangle can be drawn clockwise, counterclockwise, or even interleaved with other components. (ii) Syntactic and semantic information is used to resolve ambiguities that occur in the recognition process. For instance, if a sloppily drawn BPM component could be either an activity or an event, the actual decision is postponed to the analysis stage, where the interpretation of the respective strokes might become clear from the context. And finally (iii), the approach is generic, i.e., editors for a wide range of languages can be specified.
Fig. 6 shows the overall architecture of this sketching approach. The first processing step is the recognizer, which analyzes the sketch’s strokes and creates a corresponding set of diagram components. Actually, several primitive recognizers (called transformers in [BM08b]) for lines, arcs, circles, etc. search for corresponding primitives in the sketch. The main recognizer queries these primitive recognizers and – directed by the language specification – assembles the diagram components from those primitives. Generally, the recognition is very tolerant in order to avoid false negatives. The inevitably resulting false positives are not resolved until parsing. The actual analysis of the diagram now works in several steps similar to the analysis in conventional DiaGen editors: First, a hypergraph model is created from all components. Then the reducer is applied (lexical analysis) and yields the reduced hypergraph model as shown in Fig. 2. The parser syntactically analyzes this hypergraph and builds up a derivation structure that is similar to a derivation tree, but that also reflects non-context-free aspects of the diagram. The parser ensures that no two possible interpretations of the same stroke are integrated into the same derivation structure. That way, ambiguities are effectively resolved. Each derivation structure then represents a correct diagram and is rated according to its quality. Finally, a semantic representation of the best-rated
derivation is computed via attribute evaluation. If this is not possible, the next best derivation is tried and so on. Details about this process can be found in [BM08a].
The DSketch approach is efficient and fully functional, but it can neither recognize dashed lines nor distinguish different line widths. BPM messages, which are usually drawn as dashed lines, and BPM end events, which are drawn as bold circles, must therefore be represented with another notation. Moreover, text recognition is not integrated into DSketch. Textual labels, hence, must be entered via keyboard or an extra text recognizer.
4 Integration of User Assistance into DSketch
In this section we describe how the assistance provided by PerSUADE has been integrated into DSketch. The overall architecture of the editors generated by the realized system is shown in Fig. 7, which basically refines Fig. 1. The right-hand side of Fig. 7 comprises the analysis steps of DSketch. The analysis performed by the PerSUADE parser belongs to this side, too. The left-hand side comprises the novel part of the system where the results of PerSUADE are further processed for the sake of assistance.
The processing steps up to the parser remain almost unchanged. Just the recognizer needed to be slightly adapted. Recall that in DSketch the recognizer is very error-tolerant. So, often the same stroke is accepted by several different primitive recognizers. This results in double findings that are resolved in DSketch during syntax analysis. The PerSUADE framework cannot deal with such ambiguities yet. Therefore, we force the recognizer to make a decision if one of the assistance functions is invoked. Basically, the recognizer now selects the interpretation with the highest rating from the double findings. This rating depends on how precisely a primitive is drawn, how close the connections at the junctions are, and how well the corresponding constraints are met.
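The forced disambiguation step described above can be sketched as a simple per-stroke-set maximum over candidate ratings. The data layout and the numeric ratings below are illustrative assumptions, not DSketch internals:

```python
# A minimal sketch of forced disambiguation: among double findings for
# the same set of strokes, keep only the highest-rated interpretation.
def resolve_double_findings(findings):
    """`findings` maps a frozenset of stroke ids to a list of
    (component, rating) candidates; the best-rated candidate wins."""
    return {strokes: max(candidates, key=lambda c: c[1])[0]
            for strokes, candidates in findings.items()}

# Strokes s1+s2 were accepted both as an activity and as an event;
# the activity interpretation is rated higher and is kept.
findings = {
    frozenset({"s1", "s2"}): [("activity", 0.81), ("event", 0.64)],
    frozenset({"s3"}): [("arrow", 0.92)],
}
chosen = resolve_double_findings(findings)
```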
The next adapted component is the parser, i.e., the process of syntactically analyzing the diagram. Actually, the DSketch parser has remained unchanged, but an additional parser component from the PerSUADE framework now complements it. All kinds of assistance are supported by this parser instead of the normal DSketch parser. On user’s request, this parser computes hypergraph patches for (the recognized parts of) the diagram’s reduced hypergraph model. The user can explore these patches and choose one of them using a preview functionality.
Let us assume that the user has selected one of the patches. Consider the example traced in Fig. 8. There, the smallest existing patch just merges the outgoing node of the activity and the incoming node of the right-most event. The update translator translates this patch into changes of the hypergraph model. For our example, an arrow needs to be introduced that is attached to the activity and the event (indicated by the spatial relationship edges “at”). Thereafter, it is up to the layouter to find an appropriate position for the newly introduced components. For the example of Fig. 8, this is an easy task because the source and target components of the sequence arrow already exist. The more complex completion examples given in Fig. 9 and the generated example diagrams given in Fig. 10, however, show that this step is not always that simple. The used layout approach and the actual user interface are discussed in the following subsections. The last processing step, i.e., the stroke generator, is rather simple. It just draws perfect components with the optimal sample rate, thus maximizing the recognition rate.
4.1 Placement of New Components by the Use of Graph Drawing Techniques
A basic assumption of our implementation is that user strokes remain unchanged (in contrast to conventional PerSUADE editors, where the existing components can be adapted during assistance). That way, surprises are prevented and the special flavor of sketching is preserved. So, we need a flexible layout engine for graph-like languages (other languages would require some adaptations) that only integrates the new components and leaves the remaining diagram unchanged. These requirements can be satisfied by layout algorithms based on physical analogies [Bra01]. Concretely, we have adapted a spring embedder, which interprets edges as springs with their particular attraction forces. Furthermore, special repulsive forces take effect between the node components. During layout, the node components move in increments according to the respective sum of forces until an equilibrium state has been reached. However, in our context not all nodes can be moved around freely, but only the new ones introduced by the update translator. The existing nodes, in contrast, are locked into their positions. An important benefit of spring embedders besides their simplicity is that they can also be used for static layout in a straightforward way. Static layout is required here, e.g., for example generation (cf. Fig. 10).
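The constrained spring embedder can be sketched as follows: standard force-directed attraction along edges and repulsion between nodes, but only nodes in the `movable` set are updated, so existing (locked) components stay put. Constants, step size, and the fixed iteration count are illustrative assumptions, not taken from the actual implementation:

```python
# Spring embedder with locked nodes: springs pull connected nodes
# together, all node pairs repel, but only `movable` nodes are moved.
import math

def spring_layout(pos, edges, movable, steps=200, k=1.0, dt=0.05):
    pos = dict(pos)
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in movable}
        # repulsive forces between node components
        for n in movable:
            for m in pos:
                if m == n:
                    continue
                dx, dy = pos[n][0] - pos[m][0], pos[n][1] - pos[m][1]
                d = math.hypot(dx, dy) or 1e-9
                rep = k * k / d
                force[n][0] += rep * dx / d
                force[n][1] += rep * dy / d
        # spring attraction along edges
        for a, b in edges:
            for n, m in ((a, b), (b, a)):
                if n in force:
                    dx, dy = pos[m][0] - pos[n][0], pos[m][1] - pos[n][1]
                    d = math.hypot(dx, dy) or 1e-9
                    att = d * d / k
                    force[n][0] += att * dx / d
                    force[n][1] += att * dy / d
        # incremental movement; locked nodes are never touched
        for n in movable:
            pos[n] = (pos[n][0] + dt * force[n][0],
                      pos[n][1] + dt * force[n][1])
    return pos

# Two locked components and one new node wired between them: the new
# node settles near the midpoint while the locked nodes stay fixed.
locked = {"a": (0.0, 0.0), "b": (4.0, 0.0)}
result = spring_layout({**locked, "c": (0.5, 1.0)},
                       [("a", "c"), ("c", "b")], {"c"})
```

A production layouter would add a convergence test and the initial-coordinate guessing step described below instead of a fixed iteration count.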
There are two problems with spring embedders in our context: The top diagram of Fig. 9 would look much better if the new activity were positioned further to the top. However, spring forces pull the new activity to the presented position, i.e., springs prevent bent arcs that way. This behavior can only be avoided by introducing invisible components in a context-sensitive way. Another problem is that new components, if positioned randomly at the beginning, cannot “pass” existing components due to the repulsive forces. This may result in strange layouts. We have prevented this problem by introducing an additional processing step that guesses more appropriate initial coordinates for new node components to be refined afterwards.
Other layout algorithms might yield more common looking process models; actually, most professional business modeling tools apply Sugiyama-style layout algorithms. However, such
algorithms are less suited in the context of this paper where the (usually user-chosen) position of existing nodes must be preserved when new nodes or edges have been introduced.
4.2 User Interface
The actual user interface is quite simple and easy to use — the complete editor window is shown in Fig. 2. There is a button for starting the computation of patches (see the magnified part of Fig. 2). After pressing this button, the first solution is shown immediately. Arrow buttons can be used for browsing through the other solutions. In particular, the generation of examples usually results in a large number of solutions. A check button has to be pressed in order to accept the currently previewed solution. Strokes are then generated from the previewed diagram components. The resulting diagram looks like the preview, but the new components are no longer highlighted; they are drawn as normal, although perfect, strokes. Note that the user does not need to accept the previewed solution, but can use it as a kind of template for drawing the suggested components with his own strokes. That way, he will get a diagram that looks more homogeneous than the one with the generated perfect strokes. The preview can also be canceled, of course. Finally, the patch size can be set via the plus and minus buttons. This parameter basically indicates how many new diagram components (or, more precisely, terminal hyperedges) are to be introduced. In the figure this parameter is set to its default value 1.
5 Discussion
Although an elaborate user study still remains to be done, the results so far are promising. As before, the user can freely sketch diagrams. He is not restricted in any way in the creative process of sketching. This actually is the reason why we have not realized a more pervasive assistance, e.g., on-line after every single stroke. In many conventional sketch editors, the user is in trouble if his sketch cannot be recognized. With the developed editor, he can ask for syntactical guidance instead. But this is not the only help he can get as we describe next.
5.1 Location of Recognition Errors
An important benefit of our approach is that it helps identify recognition errors (besides syntax errors). Consider Fig. 11 as an example. A human can easily see that the sketched diagram is a structured business process. Still, the diagram is not correctly recognized by DSketch. Normally, the user would have no clue what is wrong here. Has the start event mistakenly been recognized as an activity? Or has the end event been drawn too sloppily? Invoking assistance yields the answer. The red arrow between the activity and the end event clearly points out the problem: either the existing arrow has not been correctly recognized, or the gap between its head and the end event is too large. In either case, the user can now correct this problem without the need to redraw the whole diagram. It would even be possible to automatically mask or remove those strokes that do not contribute to the solution.
This problem, however, mainly arises for quite restricted visual languages where either the whole diagram is correct or nothing. Indeed, a single misrecognized arrow affects the correctness
of the whole BPM. It simply is not well-structured anymore, as required by the grammar. With a more relaxed syntax definition, at least sub-diagrams would be recognized correctly, so that the visual feedback given by the DiaGen parser might indicate what is wrong. Actually, languages where either the whole diagram or nothing is recognized as correct have been very critical for sketching systems so far, because the recognition rate drops exponentially with the size of the diagram. This problem is solved with our approach (although it would be even better to feed the analysis result back into the recognizer, so that it can try harder at the weak points).
5.2 Stroke Interference
A problem of integrating PerSUADE into a sketching system is that it may happen that a sketch is not recognized as correct after a suggested patch has been accepted by the user. When using PerSUADE in conventional WIMP editors, this cannot happen: diagrams resulting from the application of assistance are always correct. In the context of sketching, newly generated strokes may interfere with existing strokes that, e.g., had been ignored by stroke recognition before.
6 Related Work
Of course, there are also other sketch editors for BPMs such as [MS09]. Moreover, due to the practical relevance of this language, various kinds of guidance have been developed for conventional WIMP-based BPM environments (an example is [BBM+09]). However, to the best of our knowledge such guidance has not been integrated into sketch editors yet.
As already noted in the introduction, the most closely related work is [CDR07] by Costagliola et al. Here, an LR parser as known from textual languages is used for syntax analysis with respect to a so-called sketch grammar. Thereby, syntactic information is exploited to resolve ambiguities similar to the DSketch approach [CDR08]. The symbol table of the parser then can be exploited to realize symbol completion in sketch editors (and so-called symbol prompting in conventional diagram editors). The strong points of this approach are that it is generic, that direct feedback is provided (the approach is actually incremental), that the user’s own drawing style is used for completion (a stroke repository is filled by the different symbol recognizers to this end), and that the recognition of complex symbols can generally be improved that way. But, as with our approach, explicit user interaction is still required. In contrast to [CDR07], our approach does not stop at the lexical level, but also considers the overall diagram structure. Even other kinds of assistance not necessarily based on syntax could be integrated.
Another meta-tool where it should be possible to combine assistance with sketching is the Marama toolkit. For Marama, both a critic authoring tool [AHHG09] for the specification of user feedback and a sketching framework [GH07] are available. Here, however, critics would have to be specified manually whereas we gain the feedback automatically from the parser. The strong points of [GH07] are that only very little extra specification effort is needed for complementing a normal diagram editor with a sketching editor and that the user can easily overrule the recognizer when it makes a mistake.
7 Conclusion
In this paper we have shown that user assistance functionality can be provided by sketch editors and that this actually is useful. The presented approach makes it possible to generate sketching editors with user assistance from a language specification, based on the existing sketching editor generator DSketch and the user assistance library PerSUADE. As a representative example, we have created a sketch editor for business process models with assistance features such as auto-completion or example generation.
But we have noticed yet another benefit of this approach besides helping the user with the language. The very same assistance features can actually be put to good use in locating recognition errors. Those often directly result in syntax errors, whose potential corrections then point the user precisely to the recognition error. If a new component is suggested as a correction where a component already exists, the user can conclude that the existing component had been drawn too sloppily and needs to be redrawn.
The developed sketch editor for business process models is demonstrated in several screencasts and can be downloaded from www.unibw.de/inf2/DiaGen/assistance/sketching.
Future Work
In the future we want to experiment with relaxations of the assumption that the existing user strokes must not be changed. It is certainly imaginable that sketched components are moved around or even resized similar to the assistance in conventional DiaGen editors [MM09]. In this context it should also be possible to integrate existing component fragments into the newly introduced components in order to reuse as many strokes of the user as possible.
It would also be important to integrate the suggestions into the diagram closely following the user’s drawing style. Perfect components mixed with sloppily drawn components make the diagram look inhomogeneous. Costagliola et al. have proposed a stroke repository to this end, which is already used for their symbol completion [CDR07]. Alternatively, the user strokes could be beautified to close this gap.
Finally, DSketch and PerSUADE need to be more deeply intertwined. While DSketch originally postpones the final stroke-recognition decision until syntax analysis in order to improve the recognition rate and to make ambiguity resolution possible, we had to enforce early recognition decisions in order to integrate PerSUADE into DSketch.
Bibliography
Introducing New Methodologies for Identifying Design Patterns for Internationalization and Localization
Nicole Schadewitz and Timothy Jachna
School of Design, The Hong Kong Polytechnic University,
Hung Hom, Kowloon, Hong Kong, China
{sd.nic@polyu.edu.hk, sdtim@polyu.edu.hk}
Abstract. This paper describes a new methodology for deriving interaction design patterns from an analysis of ethnographic data. It suggests using inductive and deductive analysis processes to identify and articulate patterns that address the needs of culturally diverse users of interactive, collaborative systems. This might inform the internationalization and localization process of computer supported collaboration systems.
Introduction
A growing number of design and usability researchers and practitioners are beginning to take an interest in design patterns as a method to capture and communicate effective design solutions. The practical format of patterns enables designers to reuse and share design knowledge among various stakeholders in the design process. The concept of design patterns has received less attention in the field of internationalization and localization of products and systems than in other related areas of research. This might be due to the uncertainty as to whether or not patterns are an appropriate method to capture and communicate appropriate design solutions to support the use of interactive systems by culturally diverse users. Generally, the process of enabling localization of systems for different cultures starts with the development of a core interactive system, which is considered international. The core system is designed to receive local specific data, when it is localized. This paper introduces a methodology for studying and analyzing designs in culturally varying contexts in order to represent the findings in the format of design patterns that support the localization of collaborative systems.
Background
Since Alexander (1979) introduced the design patterns format into the field of architecture and Gamma et al. (1995) developed patterns to communicate reusable parts of computer engineering knowledge, a discussion among interaction design researchers has evolved regarding which role design patterns could take in the
usability and interaction design process (Borchers 2001), (McInerney 2002), (Tidwell 2005), (Erickson 2000). Two teams of researchers are currently investigating the possibility of pattern-supported cross-cultural usability in the field of internationalization and localization (Alostath and Wright 2004), (Mahemoff and Johnston 2001). Mahemoff and Johnston (2001) suggest design patterns that are closely related to usability standards for internationalization and localization. These researchers’ patterns offer support for the design process of internationalization of computer systems but do not give consistent advice as to which cultural differences and models need to be considered and communicated in different design and development contexts. Some patterns reference cultural dimensions (e.g. Hofstede 1997) to describe culturally varying forces that determine the usability problem. However, this information is not provided consistently throughout all patterns.
Researchers have discussed the importance of context descriptions and appropriate naming of design patterns (Alexander 1979), (Borchers 2001), (Hall 2003), (Tidwell 2005). Based on their research in the area of internationalization and localization of products, Hall et al. (2003) suggest that the same problem can have a different solution depending on the culturally varying context of use. A design pattern is a description of a solution to a problem in a certain context (Alexander 1979). Depicting the conditions that lead to a successful design solution in this context is very important in order to communicate the purpose and scope of an interaction design pattern. Suitable choices of terminology, writing style and graphical representations of design patterns contribute to the understanding and correct use of a pattern. In this view, the author believes that the way patterns are articulated is greatly influenced by the methodology by which they are identified. In the past, pattern researchers accepted that pattern identification rests on the long-term work and research experience of the composer. Although the literature explains how to construct a pattern language (Meszaros and Doble 1999) and how to improve patterns in a shepherding process, researchers criticize that there are few concrete descriptions of methodologies for identifying design patterns (Baggetun et al. 2007). Therefore, this paper introduces a qualitative methodology for identifying design patterns in ethnographic data of cross-cultural computer-supported collaborative interactions.
Methods
In the context of a long-term ethnographic study over a period of three years, inductive and deductive qualitative analysis methods were explored and developed for supporting the identification and articulation of interaction design patterns. In the inductive approach to data analysis all findings are grounded in the data. The deductive approach to ethnographic data analysis uses scientific theories to structure, code and report the data to test or extend an existing theory or hypothesis (Tesch 1990).
Fig. 1. Development Cycle of Interaction Design Patterns: from Deductive Mapping to Prototyping and Testing of the Prototype.
From September 2003 to December 2005, I observed an undergraduate university design studio subject entitled "Only Connect - international collaboration project". This was a 6-7-week course organized by the School of Design at the Hong Kong Polytechnic University and taught in collaboration with partner universities and design schools in Korea, Austria and Taiwan. Each year, teams of 2-4 second year Hong Kong students from product, visual communication and environmental design were paired up with partner teams of 1-3 students from a similar design discipline and from another country. Each time, there were approximately 110 Hong Kong participants and 50 international partners. Each discipline had 2-3 tutors from Hong Kong and from the respective partner university. Though distributed geographically,
students collaborated using various communication technologies. Teams utilized synchronous communication tools like MSN or ICQ chat systems or Video-chat. In addition teams used asynchronous communication media like email, shared documents and different community and group websites like weblogs or Yahoo! Groups. Data about the collaborative interactions between distributed international design teams were collected using naturalistic observation, in-depth and informal interviews, as well as online conversation protocols.
The research project consisted of three phases. In the first year data were gathered to discover similarities in the teams’ interactions and communications in order to identify reoccurring issues in intercultural computer-supported collaboration. Those identified issues were used as guidelines to carry out observations and conduct interviews with the participants during the second year.
The data from the second year of the observations were analyzed in cycles of inductive coding and deductive mind mapping. The emerging design patterns were mapped into a hierarchical graph to discover possible connections among individual design patterns. While patterns in the upper hierarchy informed about concepts of cross-cultural differences in interaction design, patterns lower in the hierarchy related possible design solutions to those concepts. A few emerging solutions were tested in design scenarios and paper prototypes. A cycle of the process is displayed in Figure 1. These activities produced 14 design patterns, which were evaluated in design pattern workshops with novice and expert designers. After this evaluation, patterns were further developed using a deductive analysis of the interactions between Hong Kong and remote (in this case Korean) participants.
For this development, a coding scheme informed by theories from intercultural and cross-cultural communication, and collaborative learning and design was utilized in
the third stage of the study (Hofstede 1997), (Gunawardena et al. 1997), (Preece 2002). The computer-assisted analysis software packages TAMSAnalyzer™ and GraphViz were used to view, sort, code and analyze the data. In this deductive analysis, code frequencies and co-coding frequencies were used to compare the data, find patterns and explore relations among the patterns. The deductive analysis process was more rigorous than the previously used inductive analysis process. Due to the differences in the values of coding frequencies, dominant patterns in the data could be captured without difficulty. Moreover, patterns of stronger and weaker relations were acquired by comparing the co-coding frequencies with other codes; these could be mapped and explored visually. The exploration of relations was accomplished mainly through mind mapping activities as shown in Figure 2. In comparison to the previous analysis technique, the emerging patterns were not structured in a predetermined hierarchical map; instead a network map evolved through inherent relations in the data. As a result, 18 design patterns for cross-cultural computer-supported collaborative design learning were written.
Having compared the methodologies for writing design patterns, I will now turn my attention to presenting findings from each stage of the study to demonstrate the evolution of the process of identification and articulation of patterns.
Findings
In this long-term ethnographic study, knowledge about intercultural collaborative activities and how they could possibly be supported by the development of interactive systems in the internationalization and localization process evolved gradually. The results of the first stage of the research project produced guidelines for the further study. Measures such as the coordinated use of synchronous and asynchronous communication tools, collocated intensive workshops, online tutorials, and indicators supporting the presence and background information of other participants emerged among others as central topics in intercultural collaborative design learning. Building on these findings, the second stage of this research generated 14 fully articulated patterns and an abundance of pattern beginnings. It is beyond the scope of this paper to present all fully articulated patterns. Nevertheless some design pattern thumbnails are detailed below:
- Blended Collaboration: This pattern suggests blending local and remote teamwork activities seamlessly into one collaboration process.
- Community Workshop: This pattern builds on the previous pattern, recommending running a collocated community workshop to start the project and establish trust through a mix of social and task-related communication.
- Community Portal: The design solution in this pattern advises setting up a virtual community portal to strengthen the relation of the members in the newly established virtual team and the entire learning community.
- Personal Profile: This pattern details the use of a personal page to represent information on each member in the team and community.
- Awareness Indicator: This pattern suggests conveying information about past activities, present states and possible future events of the artifacts used and members represented in the project.
In 2005, the author conducted an evaluative pattern workshop to investigate how the identified interaction design patterns were perceived and used in the design process. A set of interaction design patterns from the first analysis was handed to groups of experienced and novice designers for comment. Some workshop discussion results suggested that while design patterns were instructional, the writing style was slightly too prescriptive. For more experienced practitioners the patterns’ contents were perceived as being “too close to the data” and not descriptive, inspiring or revealing enough. Also, the relation of cross-cultural concepts and internationalization and localization of systems was judged to be too weak. To address this problem, the first group of workshop participants suggested grouping patterns into domain-specific clusters such as cross-cultural dimensions or technological concepts, whereas the second group proposed less generic pattern names to increase curiosity while browsing the patterns map. After the first workshop I concluded that giving a usable problem-solution description was difficult without explaining concepts of cross-cultural and intercultural communication as contextual information. Hence, for the further development of the patterns I decided to use intercultural and cross-cultural communication concepts throughout my patterns to achieve a less prescriptive, more descriptive and informative writing style.
**Fig. 3.** Hierarchical Design Pattern Map of the Inductive Analysis
Some patterns were followed-up in the second deductive analysis. While a few patterns proved not to be as good as initially thought and were hence disregarded after the second analysis, other findings seemed to propose entirely new patterns that were
not identified in the first, inductive analysis. Patterns were grouped around concepts of “Instructional Force”, “Community Coordination”, “Collective Awareness”, “Contextual Communication”, “Specified Context” and “Implementation” as presented in Figure 3.
The formerly identified pattern “Personal Profile” could not be confirmed as a successful solution to represent an individual in collaborating collective cultures and was not further developed. Patterns that evolved from previous ideas were “Grand Opening”, which takes up the idea of the previously identified solution of “Community Workshop” and contextualizes the solution within the need for a public display of a strong community in collective cultures. A further development of the “Awareness Indicator” pattern produced the design solution “Co-presence”, which builds on the idea of a local team sharing one online chat account, even though the actual individuals who are chatting might change. This addresses strong collective community orientation and suggests the development of co-presence indicators of locally nearby collaborators for one chat account. The following is an example of an entirely new pattern that derived from the deductive analysis:
<table>
<thead>
<tr>
<th>PATTERN: “GLOBAL RESOLUTION” ***</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Thumbnail:</strong> Visually and textually supported synchronous tutorials help in gaining common ground among culturally diverse distributed learning teams and their tutors.</td>
</tr>
<tr>
<td><strong>Breakdown:</strong> After a local, face-to-face tutorial, students remotely discuss the options for the development of the designs but cannot gain common ground about the direction they want to follow. This might be due to differences in the local and remote tutors’ advice or because some students do not want to adopt the tutor’s opinion without discussing the feasibility. However, since local teams are assessed locally, the local tutors’ opinion has a high significance for students. A breakdown between the local and remote teams occurs because the possibly contradictory instructions of the tutors cannot be resolved in the global team. Hence, instructing distributed design-learning teams exclusively in local face-to-face tutorials causes breakdowns in the coordination and decision-making process of the global virtual team.</td>
</tr>
</tbody>
</table>
**Forces:** Collective and hierarchically oriented cultures have a tendency to follow the tutor’s advice without question. Maintaining harmony in the global virtual team is also important. However, if tutors give different advice in separate local tutorials, the harmony in the global virtual team is disturbed and can only be restored if both tutors’ views are balanced and a decision is either imposed from above or negotiated between all participants.

**Solution:** Establish several synchronous communication sessions over the project period involving local and remote instructors and students of the global team in computer-supported peer tutorials. Scheduling at least three sessions (one at the beginning of the project, one interim, and a final presentation) is a minimum requirement for establishing common ground within the design learning community. Synchronous peer tutorials involving local and remote tutors and teams are major project milestones and offer full awareness of the team’s progress and of the opinions and suggestions of both tutors. Design decisions can be made instantly. Possible conflicts in advice can be discussed on the spot. In textual synchronous communication tutors can refer to representations of the designs stored in the DESIGN GALLERY as references for the discussion. However, video-supported presentations of the teams’ designs are especially successful. In the discussion and comparison of local implementations, students and tutors can communicate in reference to shared physical design artifacts. Since sound quality is sometimes a problem, textual synchronous communication can be used to support the visual demonstration of the design artifacts.
**Why:** In visually and textually supported synchronous discussion and tutorial sessions, students explain their design implementations in detail by means of sketches and prototypes. The discussants can immediately check and clarify misunderstandings in the design process. Both teams share and explain the design process from their local point of view, which fosters equality among the teams. All attending students and
tutors gain awareness and common ground through high-contextual communication. Having gained an understanding of the entire picture of the global team’s process and progress, tutors can discuss the proposal among themselves and communicate their decisions and instructional advice immediately and in unity to the global team. High-contextual and multimodal information about the designs enables tutors to give low-contextual, clear and direct advice. Due to the strong hierarchical orientation of the students, the advice is taken without objection. This resolves possible uncertainties among the students and restores harmony in collectively oriented cultures. While the conclusion of the discussion satisfies achievement-oriented cultures, the involvement of the entire community reassures ascription-oriented cultures that they are aligned with the project’s values and goals, too. Therefore, global resolutions given by the tutors may address universal goals and directions, but tutors should refrain from giving concrete tasks for the teams. Even if the teams’ abilities and skills do not match the assignment or task, students from collective-oriented cultures would not object to the tutors’ instruction, not wanting the tutor to lose face. Afterwards, while discussing the new design direction, teams can clarify new tasks and roles that emerged from the tutorial.
The identification of this and similar patterns was strongly influenced by the deductive coding scheme used in the second analysis. Since it was informed by ideas from collaborative learning and design, collaboration support and intercultural and cross-cultural communication theories, the terminology used to describe the phenomena and scope of the patterns was more consistent. The format of the developed patterns is descriptive and informative rather than prescriptive. The descriptions of breakdowns that cause a design problem and the forces that can resolve this problem suggest efficient design solutions that are embedded in the context of cross-cultural computer-supported design learning. Designers, educators and system developers who use such patterns are not only informed about possible solutions but also about the socio-cultural principles that underlie the solutions.
Conclusions
This paper has presented a qualitative methodology for identifying and articulating cross-cultural computer-supported design-learning patterns in ethnographic data. It suggests a combination of inductive and deductive data analysis to articulate design patterns that take a cross-cultural context into consideration and aim at informing designers and developers in the process of internationalization and localization of products and systems. In addition to several pattern thumbnails, one fully articulated pattern called “Global Resolution” was presented in this paper. Based on these examples, I suggest that an inductive analysis of ethnographic data to establish pattern categories, followed by a deductive analysis of this data to compose patterns with a consistent style, is a suitable process for identifying and articulating interaction design patterns for cross-cultural computer-supported collaborative design learning.
References
Alostath, J. M. and Wright, P.: Pattern Languages towards a tool for cross-cultural user
interface design development. In Proc. of 7th International Conference on Work With
Baggetun, R., Rusman, E., Poggi, C.: Design Patterns For Collaborative Learning: From
Borchers, J., Buschmann, F.: A Pattern Approach to Interaction Design, John Wiley & Sons,
Erickson, T.: Lingua Francas for Design: Sacred Places and Pattern Languages. In Proc of
Gamma, E., Helm, R., Johnson, R. and Vlissides J.: Design Patterns - Elements of Reusable
Object-Oriented Software. Reading, Mass.: Addison-Wesley (1995)
Gunawardena, C. N., Lowe, C. A., & Anderson, T.: Analysis of a global online debate and the
development of an interaction analysis model for examining social construction of
knowledge in computer conferencing. Journal of Educational Computing Research 17(4).
(1997)
Mahemoff, M. J. and Johnston, L. J.: The Planet Pattern Language for Software
Internationalisation. In the Proc. of Pattern Languages of Program Design (PLOP) (1999)
Mahemoff, M. J. and Johnston, L. J.: In Proc. of Human-Computer Interaction: Interact '01 (2001)
Meszaros, G. and Doble, J.: A Pattern Language for Pattern Writing.
2007)
Wiley and Sons New York (2002)
Tesch, R.: Qualitative research: analysis types and software tools. Falmer Press, New York
(1990)
NP and Nondeterminism
More traditional way of viewing NP:
- Imagine a nondeterministic algorithm, where next step is not determined.
- E.g., nondeterministically choose a number \( n \) and set \( x = n \)
- \( L \) is in NP if there is a nondeterministic algorithm \( A \) that runs in polynomial time such that
- if \( x \in L \), some computation accepts (returns 1)
- if \( x \not\in L \), no computation accepts
- "runs in polynomial time" means exists \( c \) such that all computations on input \( x \) run in time \( O(|x|^c) \).
- Because of the non-determinism, different computations on input \( x \) may have different running times.
Connection to previous definition:
- if there's a verification algorithm, can convert it to a nondeterministic polynomial algorithm:
- nondeterministically try all possible verification strings \( y \) such that \( |y| = O(|x|^c) \)
- Can do this in PTIME with branching
- Conversely, if there's a nondeterministic algorithm, can convert it to a verification algorithm:
- \( y \) describes the choices made along a given branch
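As a concrete illustration of a verification algorithm, consider Hamiltonian paths: the certificate \( y \) is an ordering of the vertices, and checking it takes time polynomial in the input size. A minimal sketch (Python; the function name and graph encoding are our own, not from the notes):

```python
# A polynomial-time verifier: given a graph on n vertices and a candidate
# certificate (an ordering of the vertices), check whether it is a
# Hamiltonian path.
def verify_hamiltonian_path(n, edges, path):
    """n vertices 0..n-1; edges a set of frozensets; path a vertex list."""
    if sorted(path) != list(range(n)):          # visits every vertex exactly once
        return False
    return all(frozenset((u, v)) in edges       # consecutive vertices adjacent
               for u, v in zip(path, path[1:]))

edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
print(verify_hamiltonian_path(4, edges, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_path(4, edges, [0, 2, 1, 3]))  # False
```

The nondeterministic algorithm corresponds to guessing `path` and running this check; conversely, the accepted guess is exactly the certificate.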
NP, Co-NP, and PTIME
\( L \) is in co-NP if \( \overline{L} \) is in NP.
Examples:
- \( L \) is the set of encodings of graphs that do not have Hamiltonian paths.
Major questions of complexity theory:
- Does \( P = NP \)?
- Probably not, but no proof yet
- If \( P = NP \), then there are PTIME algorithms for lots of problems that we don’t know how to do efficiently yet
- E.g., factoring, scheduling, bin-packing, ...
- Does \( P = \text{co-NP} \)?
- Since \( P \) is closed under complementation, this is true iff \( P = NP \) (see homework)
- Does \( NP = \text{co-NP} \)?
- Does \( P = \text{NP} \cap \text{co-NP} \)?
- We can’t answer any of these questions (yet)
- Solving them gets you a Turing award ...
The little we know:
- \( P \subseteq \text{NP} \subseteq \text{PSPACE} \subseteq \text{EXPTIME} \), and likewise \( P \subseteq \text{co-NP} \subseteq \text{PSPACE} \)
- \( P \neq \text{EXPTIME} \) (by the time hierarchy theorem); whether \( P = \text{PSPACE} \) is open
Reducibility
Key idea in complexity theory: reducibility
- Making precise the well-known mathematical idea of reducing one problem to another
- Idea: If you can reduce $L_1$ to $L_2$, then if you have an efficient algorithm to decide $L_2$, then you get an efficient algorithm to decide $L_1$
Formal definition:
$L_1 \subseteq \Sigma^*$ is polynomial-time reducible to $L_2 \subseteq (\Sigma')^*$ (written $L_1 \leq_P L_2$) if there is a polynomial-time computable function $f : \Sigma^* \rightarrow (\Sigma')^*$ such that $x \in L_1$ iff $f(x) \in L_2$.
Lemma 1: If $L_2 \in P$ and $L_1 \leq_P L_2$, then $L_1 \in P$.
Proof: Suppose $A_2$ is a PTIME algorithm that decides $L_2$, and $f$ reduces $L_1$ to $L_2$
- $x \in L_1$ iff $f(x) \in L_2$
Let $A_1(x) = A_2(f(x))$.
- $A_1$ is PTIME, since $A_2$ and $f$ are.
- $x \in L_1$ iff $f(x) \in L_2$ iff $A_1(x) = A_2(f(x)) = 1$.
NP-Completeness
A language $L$ is NP-complete if
1. $L$ is in NP and
2. $L$ is NP-hard, i.e., $L$ is a “hardest” NP problem:
- every language $L'$ in NP can be reduced to $L$
- If $L' \in NP$, then $L' \leq_P L$
Theorem: If any NP-complete language is in P, then every language in NP is in P.
Proof: Suppose that $L$ is NP-complete, and $L$ is in P. If $L' \in NP$, then $L' \leq_P L$. Therefore $L'$ is in P.
There are thousands of known NP-complete languages.
- See Garey and Johnson (1979) for the classic compendium
We haven’t found any PTIME algorithm for any of them yet.
Lemma 2: Reduction is transitive: If $L_1 \leq_P L_2$ and $L_2 \leq_P L_3$, then $L_1 \leq_P L_3$.
Proof: Suppose $f$ reduces $L_1$ to $L_2$, $g$ reduces $L_2$ to $L_3$:
- $x \in L_1$ iff $f(x) \in L_2$
- $x \in L_2$ iff $g(x) \in L_3$.
Then $x \in L_1$ iff $g(f(x)) \in L_3$.
$g \circ f$ is PTIME computable.
Therefore $L_1 \leq_P L_3$ (using $g \circ f$)
Proving a Language is NP-complete
General strategy for proving language $L$ is NP-complete:
- Show $L$ is in NP (usually easy)
- Reduce a known NP-complete problem $L'$ to $L$
- That is, show that $L' \leq_P L$
- This means $L$ is NP-hard
- This is because $\leq_P$ is transitive
- If $L''$ is in NP, $L'' \leq_P L'$
- Since $L' \leq_P L$, it follows that $L'' \leq_P L$.
Thus, it helps to have a core set of NP-complete problems to start with.
Getting off the ground is hard:
- How do you prove that every language in NP can be reduced to a particular language $L$?
For this we need a model of computation.
Turing Machines
A Turing machine (TM) can be thought of as an infinite tape, where a head can write 0s and 1s, together with some instructions for what to write.
- Initially the tape has the input written on it.
**Key question:**
- How are instructions described?
- i.e., what is the programming language?
- Idea: there is a finite set of states
- In a given state, the head can
- Read the symbol on the tape cell under it,
- Write a symbol (0/1) on the tape cell under it,
- Move one step left or one step right,
- Then the TM can change to a new state.
- The new state depends on the old state and the symbol read.
- There may be more than one possible next state (nondeterminism).
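The bullet points above can be turned into a few lines of code. The sketch below is our own minimal *deterministic* simulator (the notes also allow nondeterminism, which this omits); the example machine and tape encoding are assumptions for illustration:

```python
# A minimal deterministic Turing machine simulator. The transition function
# delta maps (state, symbol) -> (symbol to write, move L/R, new state),
# matching the bullet points: read, write, move, change state.
def run_tm(delta, tape, state="start", blank="b", max_steps=10_000):
    tape = dict(enumerate(tape))      # sparse tape; head starts at cell 0
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        write, move, state = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: flip every bit of the input, then halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "b"): ("b", "L", "halt"),
}
print(run_tm(flip, "0110"))  # 1001
```

A nondeterministic machine would instead map each `(state, symbol)` to a *set* of possible transitions, and acceptance would mean that some choice sequence accepts.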
This may not look like a very powerful model of computation, but ...
- Every program in a standard programming language (Java, C) corresponds to some TM
To show that a language \( L \) is NP-hard, we have to show that for every language \( L' \) in NP, there is a function \( f_{L'} \) such that \( x \in L' \) iff \( f_{L'}(x) \in L \).
- Idea: since \( L' \in \text{NP} \), there is a TM \( M_{L'} \) that outputs 1 on input \( x \) iff \( x \in L' \)
- \( f_{L'}(x) \) simulates the computation of \( M_{L'} \) on \( x \)
**Satisfiability: the canonical NP-complete problem**
Propositional logic:
- Start with a set of primitive propositions \( \{p_1, \ldots, p_n\} \).
- Form more complicated formulas by closing off under conjunction (\( \land \)) and negation (\( \neg \))
Typical formula: \( \neg(p_1 \land \neg p_2) \land (p_3 \land \neg p_1) \).
Standard abbreviation: \( p \lor q \) is an abbreviation for \( \neg(\neg p \land \neg q) \).
Given a formula, we want to decide if it is true or false.
- The truth or falsity of a formula depends on the truth or falsity of the primitive propositions that appear in it. We use truth tables to describe how the basic connectives (\( \neg, \land \)) work.
**Truth Tables**
For \( \neg \):
<table>
<thead>
<tr>
<th>( p )</th>
<th>( \neg p )</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>F</td>
</tr>
<tr>
<td>F</td>
<td>T</td>
</tr>
</tbody>
</table>
For \( \land \):
<table>
<thead>
<tr>
<th>( p )</th>
<th>( q )</th>
<th>( p \land q )</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>T</td>
<td>T</td>
</tr>
<tr>
<td>T</td>
<td>F</td>
<td>F</td>
</tr>
<tr>
<td>F</td>
<td>T</td>
<td>F</td>
</tr>
<tr>
<td>F</td>
<td>F</td>
<td>F</td>
</tr>
</tbody>
</table>
For \( \lor \):
<table>
<thead>
<tr>
<th>( p )</th>
<th>( q )</th>
<th>( \neg p )</th>
<th>( \neg q )</th>
<th>( \neg p \land \neg q )</th>
<th>( \neg(\neg p \land \neg q) )</th>
<th>( p \lor q )</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>T</td>
<td>F</td>
<td>F</td>
<td>F</td>
<td>T</td>
<td>T</td>
</tr>
<tr>
<td>T</td>
<td>F</td>
<td>F</td>
<td>T</td>
<td>F</td>
<td>T</td>
<td>T</td>
</tr>
<tr>
<td>F</td>
<td>T</td>
<td>T</td>
<td>F</td>
<td>F</td>
<td>T</td>
<td>T</td>
</tr>
<tr>
<td>F</td>
<td>F</td>
<td>T</td>
<td>T</td>
<td>T</td>
<td>F</td>
<td>F</td>
</tr>
</tbody>
</table>
This means that \( \lor \) is inclusive or, not exclusive or.
Equivalence
Two formulas are equivalent if the same truth assignments make them true.
Examples:
- Distribution Laws:
- $p \land (q_1 \lor q_2)$ is equivalent to $(p \land q_1) \lor (p \land q_2)$
- $p \lor (q_1 \land q_2)$ is equivalent to $(p \lor q_1) \land (p \lor q_2)$
- DeMorgan’s Laws
- $\neg (p \land q)$ is equivalent to $\neg p \lor \neg q$
- $\neg (p \lor q)$ is equivalent to $\neg p \land \neg q$
How do you check if two formulas are equivalent?
- Fill in the truth tables for both.
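The "fill in both truth tables" check can be sketched in Python (a small illustration, not part of the notes; the helper name `equivalent` is hypothetical). Each formula is represented as a function from a truth assignment to a truth value, and the two are compared on every row:

```python
from itertools import product

def equivalent(f, g, variables):
    """Fill in the truth tables for both formulas and compare every row."""
    assignments = (dict(zip(variables, values))
                   for values in product([True, False], repeat=len(variables)))
    return all(f(a) == g(a) for a in assignments)

# First Distribution Law: p /\ (q1 \/ q2)  ==  (p /\ q1) \/ (p /\ q2)
lhs = lambda v: v['p'] and (v['q1'] or v['q2'])
rhs = lambda v: (v['p'] and v['q1']) or (v['p'] and v['q2'])
print(equivalent(lhs, rhs, ['p', 'q1', 'q2']))  # True
```

The check is exponential in the number of primitive propositions, since a formula over \( n \) propositions has a truth table with \( 2^n \) rows.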
Satisfiability
Is $(p_1 \lor p_2) \land (\neg p_2 \lor p_3) \land (\neg p_3 \lor p_1)$ satisfiable?
- Is there a truth assignment to the primitive propositions that makes this formula true?
- Yes: $p_1 \leftarrow T, p_2 \leftarrow T, p_3 \leftarrow T$
- How about $(p_1 \lor p_2) \land (\neg p_2 \lor p_3) \land (\neg p_3 \lor \neg p_1)$?
- $p_1 \leftarrow T, p_2 \leftarrow T, p_3 \leftarrow T$ doesn’t work.
- $p_1 \leftarrow T, p_2 \leftarrow F, p_3 \leftarrow F$ does.
- How about $(p_1 \lor p_2) \land (\neg p_2 \lor p_3) \land (\neg p_3 \lor p_1) \land \neg p_1$?
- Nothing works ...
In general, you can tell if a formula is satisfiable by guessing a truth assignment, and verifying that it works.
- The truth assignment is a certificate ...
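The guess-and-verify procedure can be sketched directly (a brute-force illustration, not part of the notes; it tries every assignment, so it takes exponential time in the worst case):

```python
from itertools import product

def satisfiable(formula, variables):
    """Guess a truth assignment (the certificate) and verify it;
    return a satisfying assignment, or None if nothing works."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (p1 \/ p2) /\ (~p2 \/ p3) /\ (~p3 \/ p1)
f = lambda v: ((v['p1'] or v['p2']) and (not v['p2'] or v['p3'])
               and (not v['p3'] or v['p1']))
print(satisfiable(f, ['p1', 'p2', 'p3']))  # {'p1': True, 'p2': True, 'p3': True}
```

Conjoining $\neg p_1$ to this formula makes every assignment fail, and the function then returns `None`.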
Satisfiability is also NP-hard . . . .
Idea of proof:
- Start with a language $L'$ in NP and input $x$
- Since $L'$ is in NP, there exist constants $c, k$ and a (non-deterministic) TM $M_{L'}$ such that $M_{L'}$ accepts $L'$ using at most $c|x|^k$ steps on input $x$
- Construct formula $\varphi_{x,L'}$ that is satisfiable iff $x \in L'$
- Want $|\varphi_{x,L'}|$ to be $O(|x|^{2k})$
- Then $f_{L'}(x) = \varphi_{x,L'}$
Main ideas of construction
- $M_{L'}$ uses at most $c|x|^k$ cells on the tape
- Have propositions $p_{0,i,t}, p_{1,i,t}, p_{b,i,t}, i, t = 1, \ldots, c|x|^k$
- Cell $i$ holds a 0/1/b (b for blank) at step $t$
- Part of $\varphi_{x,L'}$ says that exactly one of $p_{0,i,t}, p_{1,i,t}, p_{b,i,t}$ holds at each time $t$
- $p_{0,i,t} \lor p_{1,i,t} \lor p_{b,i,t}$
- $\neg(p_{0,i,t} \land p_{1,i,t}) \land \neg(p_{0,i,t} \land p_{b,i,t}) \land \neg(p_{1,i,t} \land p_{b,i,t})$
- Have propositions $p_{h,i,t}, i, t = 1, \ldots, c|x|^k$
- The head is in position $i$ at time $t$
- Exactly one of $p_{h,1,t}, \ldots, p_{h,c|x|^k,t}$ holds (for each $t$)
- $p_{h,1,1}$ holds
- The head is initially at the far left
- If $x = x_1 \ldots x_n$, then $p_{x_1,1,1} \land p_{x_2,2,1} \land \ldots \land p_{x_n,n,1} \land p_{b,n+1,1} \land \ldots \land p_{b,c|x|^k,1}$ holds
- $x$ is written out initially at the far left of the tape, followed by blanks.
- Similarly, can say that at time $c|x|^k$, there is a 1 at the far left, followed by blanks
- $M_{L'}$ accepts $x$
- The hard part is to write the part of the formula that captures the step-by-step operation of $M_{L'}$
- Need propositions that talk about the current state of $M_{L'}$ and how it changes
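The "exactly one holds" constraints above can be generated mechanically. The sketch below (a hypothetical encoding, not from the notes: literals are strings and '~' marks negation) emits one "at least one" clause plus one pairwise "no two together" clause per pair, exactly as the formula requires:

```python
from itertools import combinations

def exactly_one(variables):
    """CNF clauses (lists of literals, '~' marks negation) saying that
    exactly one of the given propositional variables holds."""
    clauses = [list(variables)]               # at least one holds
    for a, b in combinations(variables, 2):   # no two hold together
        clauses.append(['~' + a, '~' + b])
    return clauses

# For one tape cell i at one time step t:
print(exactly_one(['p0', 'p1', 'pb']))
# [['p0', 'p1', 'pb'], ['~p0', '~p1'], ['~p0', '~pb'], ['~p1', '~pb']]
```

For $m$ variables this produces $1 + \binom{m}{2}$ clauses, which stays polynomial in the size of the tape encoding.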
Bottom line: We can simulate TMs that run in non-deterministic polynomial time using propositional logic.
- Satisfiability is NP-complete!
- This was the first problem proved NP-complete (by Steve Cook)
- Validity is co-NP-complete
3-CNF Satisfiability
A literal is a primitive proposition or its negation:
• $p$ or $\neg p$
A clause is a disjunction of distinct literals:
• $p_1 \lor p_2 \lor \neg p_7 \lor \neg p_8$
A formula is in CNF (conjunctive normal form) if it is a conjunction of clauses
$$(p_1 \lor \neg p_3) \land (p_1 \lor p_5 \lor \neg p_2 \lor p_7) \land (p_8 \lor \neg p_9)$$
A formula is in $k$-CNF if each clause has exactly $k$ literals.
**Theorem:** The satisfiability problem for 2-CNF formulas is in P.
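The notes do not give the polynomial algorithm, but a standard witness for the 2-CNF theorem is the implication graph: each clause $a \lor b$ yields implications $\neg a \Rightarrow b$ and $\neg b \Rightarrow a$, and the formula is satisfiable iff no variable lies in the same strongly connected component as its negation. A sketch (assumed encoding: literals are signed integers, so `-v` is $\neg p_v$):

```python
from collections import defaultdict

def two_sat(num_vars, clauses):
    """Decide 2-CNF satisfiability via the implication graph.
    Variables are 1..num_vars; each clause is a pair (a, b) meaning
    a \/ b.  Satisfiable iff no variable shares a strongly connected
    component with its negation (Kosaraju's two-pass algorithm)."""
    graph, rev, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):   # a \/ b  ==  ~a => b,  ~b => a
            graph[u].append(v)
            rev[v].append(u)
        nodes.update((a, -a, b, -b))

    # Pass 1: record nodes in order of DFS finish time.
    visited, order = set(), []
    def dfs(start):
        stack = [(start, iter(graph[start]))]
        visited.add(start)
        while stack:
            node, it = stack[-1]
            child = next((v for v in it if v not in visited), None)
            if child is None:
                order.append(node)
                stack.pop()
            else:
                visited.add(child)
                stack.append((child, iter(graph[child])))
    for u in nodes:
        if u not in visited:
            dfs(u)

    # Pass 2: label SCCs on the reversed graph, in reverse finish order.
    comp = {}
    for label, u in enumerate(reversed(order)):
        if u in comp:
            continue
        stack, comp[u] = [u], label
        while stack:
            x = stack.pop()
            for v in rev[x]:
                if v not in comp:
                    comp[v] = label
                    stack.append(v)

    return all(comp[v] != comp[-v] for v in nodes if v > 0)

# Example from the notes: (p1 \/ p2) /\ (~p2 \/ p3) /\ (~p3 \/ p1)
print(two_sat(3, [(1, 2), (-2, 3), (-3, 1)]))            # True
print(two_sat(3, [(1, 2), (-2, 3), (-3, 1), (-1, -1)]))  # False
```

Both passes visit every edge once, so the whole decision runs in time linear in the size of the formula, which is what puts 2-CNF satisfiability in P.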
**Theorem:** The satisfiability problem for 3-CNF formulas is NP-complete.
**Proof:** It’s clearly in NP. To show that it’s NP-hard, it suffices to show that the satisfiability of an arbitrary formula $\varphi$ can be reduced in polynomial time to the satisfiability of a 3-CNF formula $\varphi'$.
Three steps:
**Step 1:**
• Write a binary parse tree for $\varphi$,
• internal nodes are labeled with $\neg$, $\land$, and $\lor$
• leaves are labeled with literals
• An internal node represents a subformula of $\varphi$
• Introduce a new primitive proposition $q$ for each internal node
• Write a formula that says that $q$ characterizes the formula at that node.
• If internal node is $\neg$ and successor is labeled by $q'$,
$$(q \land \neg q') \lor (\neg q \land q')$$
• If internal node is $\land$ and successors are $q_1$ and $q_2$:
$$(q \land q_1 \land q_2) \lor (\neg q \land \neg(q_1 \land q_2))$$
• Let $\varphi'$ be the conjunction of these formulas, together with the proposition $q$ introduced for the root of the parse tree.
• Not hard to show that $\varphi'$ is satisfiable iff $\varphi$ is satisfiable
Step 2: Convert $\varphi'$ to an equivalent CNF formula, using various equivalences, where each clause has at most 3 literals:
• Using Distribution Laws, $(q \land \neg q') \lor (\neg q \land q')$ is equivalent to
$$(q \lor \neg q) \land (q \lor q') \land (\neg q' \lor \neg q) \land (\neg q' \lor q')$$
• Using Distribution Laws and DeMorgan’s Laws, can do the same for other clauses.
• (Actually, every formula is equivalent to a CNF formula)
Step 3: Get an equi-satisfiable 3-CNF formula
• Replace a clause with only two literals, $p_1 \lor p_2$, by
$$(p_1 \lor p_2 \lor q) \land (p_1 \lor p_2 \lor \neg q)$$
where $q$ is a fresh primitive proposition (clauses with one literal are handled similarly).
• The new formula is satisfiable iff the original was.
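Step 3 can be written out as a small recursive function (a sketch, not from the notes; it assumes, as Step 2 guarantees, that clauses already have at most 3 literals, uses '~' to mark negation, and takes a generator `fresh` of unused variable names):

```python
from itertools import count

def pad_to_three(clause, fresh):
    """Step 3 as code: pad a clause with fewer than 3 literals into
    equi-satisfiable 3-literal clauses by adding a fresh variable q
    in both polarities."""
    if len(clause) == 3:
        return [clause]
    q = next(fresh)
    return [padded
            for half in (clause + [q], clause + ['~' + q])
            for padded in pad_to_three(half, fresh)]

fresh = ('q%d' % i for i in count(1))
print(pad_to_three(['p1', 'p2'], fresh))
# [['p1', 'p2', 'q1'], ['p1', 'p2', '~q1']]
```

Since each fresh variable appears in both polarities, any assignment to the fresh variables reduces the new clauses back to the original one, so satisfiability is preserved in both directions.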
Some issues for the Modelling of Interactive E-Services from the Customer Multi-Channel Interaction Perspectives
Vincent Chevrin
Laboratoire Trigone
Université de Lille 1
V.chevrin@ed.univ-lille1.fr
59655 Villeneuve d'Ascq cedex
France
Alain Derycke
Laboratoire Trigone
Université de Lille 1
alain.derycke@univ-lille1.fr
59655 Villeneuve d'Ascq cedex
France
José Rouillard
Laboratoire Trigone
Université de Lille 1
jose.rouillard@univ-lille1.fr
59655 Villeneuve d'Ascq cedex
France
Abstract
We are involved in research on new interactive systems for e-Commerce, especially in the B-to-C perspective. This work lies at the crossroads of HCI and the e-Services area. We have therefore studied existing research on web services, which appears insufficient for our needs. First, we explain the complexity of interactive systems for direct marketing. Then we discuss the design of a generic architecture for rich interactions in B-to-C and give a definition of our interpretation of e-Service. Finally, we propose an analysis of the models and protocols for e-Services.
1. Introduction
We are involved in research on new interactive systems for e-Commerce, especially in the B-to-C perspective, but we are also interested in fields of application such as e-Learning and e-Government. Our main experience is in the design of interactive services that benefit from new channels for interaction, owing to the rapid spread of wireless phone and data devices, and to the development of intelligent communication networks and middleware enabling the emergence of ubiquitous computing.
In co-operation with an important Direct Marketing company, we are working in two directions for the design of future multi-channel flexible interactive services:
- A direction centred on the complex interactions allowed by the right combination of channels and/or interaction modalities (i.e. language vs. direct manipulation). The coupling of these elements is done dynamically, depending on the user contexts and marketing rules [5].
- A direction centred on the global architecture of the information and communication systems. In practice, most of the customer channels (for example Web or Audiotel) have been designed case by case and relatively independently. They converged only through their connections to a common back-office system. The need for re-factoring has been accelerated by several factors: the convergence of legacy systems, either through the use of ERP or through the deployment, in the organisation or across the supply chain, of EAI solutions; and the evolution of business rules, with requirements for flexibility and for more dynamic relationships (or affiliation) with third parties, for cross-selling for example. But convergence at the customer interaction level is still lagging, in spite of the use of Customer Relationship Management systems (e-CRM). It seems to us that an approach in the spirit of Service Oriented Architecture (SOA) or Computing (SOC) could be an elegant way to solve this problem [20] [13].
This contribution is focused on the interaction issues that our two research directions have revealed through our first investigations and developments. The design of an experimental technological platform for multi-channel and multi-modal intelligent interaction has led us to identify some issues in the modelling and design of e-Services for which current proposals, such as those of the Web services suite [21], are not satisfactory for our purposes. Our analysis is situated at the crossroads of three disciplines: Human-Computer Interaction (HCI), marketing and distributed systems.
2. The complexity of interactive systems for Direct Marketing
2.1 About Direct Marketing and E-Commerce
In direct marketing focused on the individual customer, whatever her/his location, the development of e-Commerce does not mean the exclusive use of the Web channel. Indeed, there are already well-established relationships with the customer through a diversity of communication channels (or media) such as Call Centres for phone, Audiotel and videotext servers, Web, WAP, e-mail, SMS…
Nevertheless, Direct Marketing companies have also developed forms of personalised relations with their customers, and some are moving toward a One-to-One marketing strategy [14]. This has been amplified by the support of personalisation processes in e-Commerce solutions where, thanks to new technologies [6], it is now easier to apply this kind of personalisation, based not only on direct knowledge of the customers (through her/his past purchases or actions and from the direct filling of forms) but also on indirect knowledge, for example inferred from user interactions through the HCI. Our view of e-Commerce reflects the central place of knowledge about the customer, not only as a user of the interface, with his/her preferences and skills, but also as a long-term customer relationship, in order to increase or maintain her/his loyalty. We are close to the definition of e-Commerce given by [7]: “E-commerce is an approach to achieving business goals in which technology for information exchanges enables or facilitates execution of activities in and across value chain as well as supporting decision making that underlies those activities”. In putting emphasis on the decision process, informed by knowledge learned from past relations and interactions with the customer, those authors also show that this knowledge management process is central. The whole relationship can be seen as an “evolutionary learning relationship” in which both parties (seller and customer) co-operate, more or less, in order to maintain a mutual understanding of the transaction and to achieve their respective satisfaction.
2.2 Complexity of a multi-channel and multi-modal customer relationship approach
There are many possibilities in the combination of electronic interaction channels to support a transaction relationship between a customer/user and the different parts of the value chain. By channel we mean different media (text, image, and voice) supported by different technologies, which imply different network accesses and protocols. This will be illustrated later.
Of course not all combinations of these channels make sense, but more and more marketing scenarios appear that take advantage of innovative combinations (for example: voice via a call centre plus SMS, or phone interaction during web co-browsing). The degree of coupling of these channels, from the transaction viewpoint, can vary from loosely coupled (each channel is used in different episodes and for different user tasks) to highly coupled, where two channels, for example, are used in quasi real-time for the achievement of a particular user activity, in a synergetic mode. In this last case, in HCI, we speak of multi-modality [5] [11]. This trend is fostered by new proposals for Web standards issued by the W3C consortium, such as VoiceXML for voice interaction [23] or the multi-modal web [24].
Studies of isolated uses of channels, and of richer scenarios involving several channels for the same transaction, have shown that the temporal logic of these transactions, in terms of elementary task composition, differs depending on the channel, or combination of channels, used. For example, the identification of the customer must be done first, before all other actions, in the case of Audiotel, but it can be delayed to the payment phase in the case of a Web interaction. In integrating the different channel accesses and management systems into a “seamless” infrastructure, these differences in the automation of actions must be taken into account. This should be reflected in the dynamic composition of e-Services supporting groups of user activities or tasks.
3. The design of generic software architecture for rich interactions in B-to-C
3.1 An overview of a technological platform for experiments
From our previous experience of developing and experimenting with ad-hoc prototypes [5] using the potential of multi-channel and multi-modal interactions, we have started to design and implement a more generic and open architecture, in order to have more flexibility in the composition of innovative scenarios for interactive marketing, and in their experimentation and evaluation. Our aim is not to compete with the large solutions provided by major software companies in the domain of e-CRM or of e-Commerce technological platforms.
Our technological platform, under development, provides four facets corresponding to the four main interfaces with its environment, following two main axes.
Each facet is an API representing one of four knowledge domains; an additional API provides supervision and manages the four others.
- The first axis of Figure 1, horizontal, is the main one and corresponds to the information flow and control relative to the transaction and to the management of the “dialogue” with the customer/user. It goes from the e-Services, which provide an abstraction and a clear delimitation of all the functions handled by the value chain, to the user technological environments with their different interfaces to the telecom networks.
- The second axis, vertical, covers all the operations of transformation –filtering, enriching, and formatting– of the information flow, in order to provide personalisation, either in relation to the user context, technological environments and profiles, or in relation to the marketing rules managed by the e-CRM system.
Facet 1: contains the model of adaptation to the e-Services. In terms of HCI architecture it can be seen as the functional kernel, the Model in the well-known MVC pattern. It serves to interface the e-Services for interaction, with a common level of abstraction and its related language, and to assure the composition and intertwining of the e-Services in a way that depends on the nature, and the history, of the relation with the customer. We will elaborate more on this facet in the following.
Facet 2: provides all the adaptations and interfaces to the different kinds of channels. We have identified five main groups of channels, depending on their underlying technologies, network accesses and rules of use:
* The web group, which is mostly handled by a Web portal and some proxy mechanisms (for example for real-time co-browsing, or the adjunction of videoconferencing) and extended by functions or components such as community or forum management systems. In this case the client is a classical Web browser, with some adaptation of contents and layout for PDAs.
* The “voice” group, which handles all the functions relative to speech recognition and speech synthesis and to the phone protocols (supravocal keying, etc.), and which can be coupled [5] with the web group by virtue of the VoiceXML standard. In this case the client is a telephone set, either fixed or mobile.
* The wireless mobile phone group, which handles gateways for I-Mode, WAP, SMS… In this case the client is a mobile phone, with some extensions provided by the telecom operator, such as data links more powerful than GSM (i.e. GPRS, EDGE, UMTS...). Of course this group can share some information servers with the web group.
* The broadcast channel group: thanks to the digitalisation of most broadcast media (TV, FM radio, etc.), an opportunity appears to use them in e-Commerce systems. This will develop in the future with the potential of rich media delivery (SMIL and MPEG-4 standards), of streaming servers and narrowcasting over the Internet, and of provisions for controlling the quality of service over the networks.
* And finally the human channel group, which is the interface with a multimedia call centre: direct phone calls, e-mail reading and answering handled by human agents with the assistance of the e-CRM systems. It must be remembered that, depending on the intelligence of the solutions deployed, inward and outward SMS and e-mail messages can be processed either by human agents or by intelligent software agents.
Facet 3: is the support of the model of adaptation to the contexts of interaction. It is close to the approaches taken for the adaptive support of mobile or ubiquitous interactions and the handling of multiple client interaction platforms. For that purpose some abstract interaction description languages have been proposed, such as UIML or PlasticML [17]. This is not described in this paper. We only want to mention that, depending on the marketing rules, this facet can offer the user the possibility to personalise not only the interface she wants for interaction but also the composition of services she wants to use (self-service). Telecommunication providers already plan this for the future Intelligent Networks, where subscribers will be able to compose their own bouquet of services [15].
Facet 4: is the support of both the interface and the model of adaptation of the interaction to the Customer Relationship Management system. This will not be developed here, but it must be recalled that CRM systems are more and more open, not only in terms of technologies, with the benefit of Web Services adoption, but also in terms of modularity and semantic interoperability. We also know that there is no clear delimitation between some functions of our proposed experimental system and those provided by the e-CRM systems. We see the CRM systems already in operation in enterprises as legacy systems that must be integrated into our architecture.
3.2 Introducing an e-Services approach: our definition and argumentation
First of all, it appears that the term “e-Services”, or even “services”, is still confused and has different meanings for different communities. We agree with Baida et al. [1] that at least three perspectives on e-Services must be understood in order to provide a shared terminology: in their case, a business science one, a computer science one and an information science one. In our case, our aim is to introduce an HCI perspective which, like the information science one, can provide a bridge between business and computer sciences, because it takes seriously into account both the customer viewpoint (as an interface user, in a customer-oriented design approach where usability is one of the key factors for the success of the business) and the technology viewpoint. It must also be understood that we privilege the business and marketing perspective over the technology one.
In business science: an e-Service can be defined as “the provision of services over electronic networks” [18], where electronic networks are understood more broadly than the Internet, including wireless networks, kiosks, ATMs… This shift of paradigm for e-Commerce toward the service-focused paradigm is justified because it is a more customer-oriented approach, allowing more efficient and effective satisfaction of market needs in a more competitive way. For Rust and Kannan [18] this is achieved through “uses of two ways dialogues to build customized services offerings, counting on knowledge about the customer to build strong customer relationships”. This is the same strategy as the one developed in One-to-One marketing. We retain from this analysis that: 1) Services must be dynamically composed, either at the system's initiative, inferring from knowledge about the customer (a learning relationship [14]), or by the customer/user, who can by extension be seen as a co-producer of the service (self-services [26]); 2) The scope of an e-Services approach to e-Commerce is to extend the upstream and downstream channels of an organisation toward its customers, or users, in an information orientation (flows of information and controls), and channels are not only the Web or even the pure Internet.
In computer science: the definition of what an e-Service, or even a service, is has been strongly influenced by the movement around Web engineering and the development of many Web-services-related standards, and most of the time the term is used as a synonym for Web services. In [1] one of the definitions mentioned is “loosely coupled, reusable software components that encapsulate discrete functionality and are distributed and programmatically accessible over standards Internet protocols”. We can agree with this definition if the standards mentioned are enlarged to include, for example, those of wireless phones. We can observe that the support of the aforementioned upstream and downstream channels is sometimes categorised in computer science as an “information-providing service”.
In HCI science: the concept of e-Services is not yet very mature, and there are many definitions, oscillating between the business science and computer science viewpoints. With the Web-centric approaches of most user interfaces for e-Commerce, it is clear that the emphasis is on the Web services viewpoint, where the traditional separation of concerns, a good design pattern in HCI software development, is kept: separation of the functional kernel (the Model) from the presentation (the View) and from the dialog control (the Controller). Of course this question of the abstraction of Web services for user interaction will be more acute in the case of the connection of Web portals to Web services using standard protocols of the WSIA suite. In our case, our definition of e-Services from the viewpoint of interaction with the end-user (in the following, “interactive e-Services”) is a specialisation of the previous one: an interactive e-Service is viewed as a collection of similar user tasks which are grouped in a manner that makes sense for his/her activities, and which maintains coherence from the business and marketing rules and management viewpoint. This is close to the use of specific design patterns for the design of user interfaces in e-Commerce [23].



Our first investigations into the re-factoring of the value chain in e-Commerce for direct marketing have identified several examples of basic interactive e-Services:
**E-catalogue**, which is used for the search and selection of products to buy. It could be a basic online catalogue with simple search tools, for example by product categories, or a more intelligent one based on recommender systems and “tribes” behaviour;
**E-order** is the filling of “caddy” or order forms, and the payment tasks with the various possibilities and possible credits negotiation;
**E-tracking** is dedicated to the interaction with the various actors of the supply chain (in what stage is the delivery of my ordered products?);
And some other e-Services such as **e-registration**; **e-claims**, to handle all customer complaints; **e-communities**, to have contact with other customers and share experiences about the use of the products or services; and **e-profile**, to provide a way for the customer/user to customise his/her interface and to give information and preferences for the different networks and interaction devices used...
These interactive e-Services are only examples, and we could give others, for instance in the domain of e-banking.
### 4. Analysis of the models and protocols proposed for the e-Services
#### 4.1 The Web engineering approach: the Web services suite
The development of Web technologies has produced a rapid increase in new standards proposed by different consortiums such as W3C, OASIS, etc. [21] [12] Web services appear as a new paradigm for e-business. A Web service is a self-describing, self-contained, modular application over the Web. It is mainly viewed as a standard interface, and a life cycle, to favour communication between distributed applications.
Web services are composed, and this composition is connected to the final user, if required, through a specific application, itself a web service, dedicated to end-user interaction management. This means that the embedded web services do not need to understand and manage the dialogue with the end users. For us it seems that complex multi-channel interactions require something in between “interaction ignorant” web services and user-interface-informed web services. At this stage of our investigation we call them Interactive Web Services. As in the special case of Web portals, the proposal can be seen as a layer above the already recognised standards, especially the trio WSDL, SOAP and UDDI. Our interest also concerns the proposals relative to the composition, aggregation or orchestration of web services. Faced with all the composition language proposals, and with their overlaps, the situation seems confusing to us. However, service-oriented composition, such as that proposed in the Business Process Execution Language for Web Services (BPEL4WS), the Web Services Conversation Language (WSCL) or the Web Services Choreography Interface (WSCI), could be of interest to us. Nevertheless we observe:
* First, the realisation of the composition specified in one of the aforementioned languages is left to the developers. In most designs this is achieved through the use of a workflow system, which leads to the problem of matching between the patterns specified in these composition languages and those found in the workflow approach [22]; moreover the composition is mostly done at the design stage and cannot easily be changed at run-time, owing to the lack of flexibility of workflows. This has sometimes been overcome by the use of multi-agent technologies.
* Second, these composition languages allow the use of conversational compositional patterns, in conjunction with hierarchical ones, as shown for BPEL4WS by [9]. But this mix is not so easy at the implementation level, owing to the limits of workflow systems. Several works [2] [3] have proposed new approaches for the support of web services composition focused on conversation patterns and modelling, which give more flexibility and dynamicity in the composition at run-time. These works also point out the shortcomings of the composition languages from this viewpoint.
Our requirements for a description language for Interactive e-Services invocation and composition are slightly different from those of the Web services mainstream, because the need for dynamicity is different: in the case of the Web services standards (WSDL & UDDI) the aim is the possible discovery at run-time of an e-Service answering a particular need (renting a car, for example), with the possibility of brokering services. In our case, in a first stage, there is no real need for discovering e-Services on the fly, since even the external e-Services are well known at design time, owing to the affiliation rules of marketing and to the need to keep the end-user dialogue consistent; but fine-grain composition is required at run-time in an opportunistic way.
The requirement for a dynamic adaptation of the composition of Interactive e-Services, in relation to the channel compositions and interactions, leads us to consider the design of a complementary Interactive Web Services description language and a composition mechanism based on a conversational pattern focused on the dialogue with the user, through multi-channel interactions. This is investigated through the study of two approaches that aim to give an abstract level between the e-services and the interaction engine: the web portal one and the “Ubiquitous Interactor” one [10].
#### 4.2 Surface integration for the end-user interaction: the web portal approach
A portal mostly represents the aggregation either of content from diverse sources, both inside and outside (distant or not), or of functions, such as email, calendaring, etc. For that matter, presentation is an important issue for portals. A portal must provide both a single consistent interface across diverse content and functions and a common user interaction model and API on which new applications can build. Moreover, a portal must provide different levels of personalisation, for instance allowing different classes of users to have different levels of privileges. Nevertheless, the integration of content, applications or services into portals has been a task that requires significant custom programming effort.
In this area, "WSIA and WSRP are new web services standards that enable businesses to create user-facing, visual, and interactive web services that organizations can easily plug-and-play into their applications and portals" [16]. Web Services for Remote Portlets (WSRP) and Web Services for Interactive Applications (WSIA) are specifications provided by the OASIS consortium. WSRP focuses on improving the interoperability of presentation-oriented web services, while WSIA addresses the interoperability of interactive web services. A WSRP web service is plugged into a portal without programming and thus constitutes a standard remote portlet. This kind of solution could be adapted to our issues, but WSRP remains presentation oriented. We can differentiate two kinds of web services (Figure 5):
- The presentation-oriented web services. This solution allows suppliers to manage the presentation of their services.
- The data-oriented web services. In this case, we want to consume and process data from the portal (source IBM)
For example, an organisation may want to keep its own graphical chart and visual design. This approach enables plugging web services into portals without programming effort, via proxy portlets, defined as abstract portlets that can receive any presentation-oriented web service. Moreover, WSRP includes "markup fragmentation" mechanisms, which allow adapting data presentation to several markup languages, such as XHTML and HTML in the current version. In the future, standards like WML for WAP, VoiceXML, and cHTML for iMode are also considered. In spite of the interest of this solution for many portal applications, it is not adapted to our issues. Indeed, we want to maintain dialogue integrity and continuity over a long-duration interaction across various channels. Thus, presentation-oriented web services go too far for our needs.
- The data-oriented web services are closer to our concern. In this case, web services receive requests and return data objects encoded in XML documents in the response. This approach allows the portal operator to provide its own presentation of the web services. In our case, we want to consume and process data from services, so this is already a better solution. However, this approach is not sufficiently interaction oriented. Thus, WSRP allows only surface integration, where the aggregated e-services do not collaborate directly and are not aware of the "state of affairs" of the others. This is the reason why our investigation led us to take an interest in other works like [10], which is the object of the next section.
From the HCI viewpoint this means that the two approaches of WSRP do not clearly satisfy the separation of concerns between models, dialogue controllers and presentations, and that, in spite of the possibility of generating several interaction mark-up languages, WSRP remains too dependent on visual web interaction.
5. Some protocols for interaction services
As discussed in the previous section, our work is interaction oriented; thus, interactive e-services must be described in this way. We therefore need an interaction-oriented e-service modelling language to describe services at a high level of interaction. In this area, several studies have emerged in recent years. Here we discuss ISL (Interaction Specification Language) by [10] and WSIA.
5.1 “Ubiquitous interactors” with Interaction Specification Language (ISL)
In this way, [10] proposes ISL (Interaction Specification Language) in order to describe both services and service interactions. ISL differs from WSDL (Web Service Description Language), which is at a lower level and represents the activity in a static view that does not take into account the nature of the content exchanged in interaction with the end user. Thus, [10] proposed eight interaction acts ("abstract units of user-service interaction that contain no information about modality or presentation") to describe most possible interactions.
In the ubiquitous interactors approach, there is an interaction engine, which is device specific and service independent. This engine allows presenting device-specific information, as can be seen in Figure 6.
On the other hand, this work has no direct links with the Web services movement and no compatibility with the stacks of standards provided by the W3C or OASIS, for example. The services considered are mainly those related to document access.
5.2 Proposal: WSXL and WSUI
Interaction management behind user-oriented services and applications has become a current research subject. Several standards have emerged from diverse works. Two languages, WSXL (Web Services eXperience Language) [8] and WSUI (Web Services User Interface) from the WSIA technical committee, supported by OASIS, have appeared to supplement WSRP in the WSIA suite.

Interaction management behind user-oriented services and applications means, on the one hand, that the user can access the services directly through a simple browser, and on the other hand, that the management of associations and the assembling of services by different intermediaries is done before interaction with the end user.
The elaboration of this specification is based on two scenarios:
- The multi-channel access to the web services by users (PC, PDA, cellular phones, etc.).
- The web services aggregation in a single page by a distribution channel intermediary (such as portal).
The WSUI and WSXL specifications try to define an abstract XML representation of the presentation of web services and of user interaction with them. This simplifies the aggregation and association of web services, and also simplifies their adaptation to the access mode (which could even be done automatically).
We think that the flexibility of the interaction between services and user is not sufficient for our work. Indeed, contrary to ISL, WSXL is too channel dependent.

These approaches are not sufficient for our purposes. A few ideas about our work to overcome the shortcomings of these solutions can be given.
5.3 Our proposal for the specification of ESI
We need to define a solution that is channel independent and aware of both service and interaction for the e-Service. A grammar needs to be developed which describes services independently of content and device, and covers both service- and user-oriented interaction. This can be based on a taxonomy or an ontological model of multi-modality which describes the basic task model of each e-Service. We have started from the proposal of [25], described in UML. In this way, a combination of e-Services can be performed according to user profiles, the channels used, the state of the interaction and also CRM, which represents both direct-marketing domain constraints and knowledge.
Our challenge is to propose an XML-compliant ESI specification language, which is still under development. Several elements, such as the eight simple primitives seen in [10], will be implemented. Of course, we want to be interaction oriented, i.e., an end-user can interact with it. That is why input tags must have attributes, such as "type", that describe the type of media used during the interaction, like text, voice, etc. Moreover, the interaction flow must be managed between ESIs. Indeed, ESIs can exchange data and controls during the interaction, and difficulties appear in the composition of services. Figure 7 shows an ESI schematisation. Due to the dynamicity of ESI composition, input and output must be abstracted. The requirement is the result of the current interaction, and the effect must affect the future interaction. The presentation of this formalisation will be given in detail in the conference.
6. Conclusion
New emerging standards for Web services and new service-oriented design approaches are great opportunities for the re-engineering of current e-commerce solutions in order to provide better multi-channel support and a better customer relationship through more flexibility and possibilities for self-service. Multi-channel and multi-modal customer/user interactions are difficult challenges from the HCI design viewpoint, and lead to new constraints for the re-engineering of the value chain in a service-oriented approach. This requires that most of the e-Services constituting the assets of the (virtual) organisation be mediated by specific interactive e-Services, and that the composition be done dynamically and in an opportunistic way with regard to the marketing rules. This is reinforced by the use of best practices and design patterns already known in the field of HCI, such as the separation of concerns exemplified by the controller pattern and the MVC framework.
This more HCI-oriented view of what an interactive e-Service is, and how it can be interfaced, shows the limits of current proposals, even those which are already presentation-oriented, such as those related to Web portals. This work on description and interaction languages for interactive e-Services is in progress, starting from the criticism presented here, and we will provide more elaborate results for the workshop.
Our activity on the description and specification of interactive e-Services is part of the design of a generic software architecture for experimenting with rich multi-channel and multi-modal interactions in the framework of B-to-C e-Commerce, taking benefit of the potential of ubiquitous computing and mobile interaction devices. The need for high flexibility at different levels, from interactive e-Services composition to personalisation and dynamic channel choice and combination, has pushed us to choose a distributed multi-agent system (a MAS) for the technological platform, to realise the central part of the architecture presented in Figure 1. This choice of a MAS has already shown its potential both in the design of more flexible workflow systems running in a peer-to-peer mode and in the support of multi-modality in the HCI field. But the rationale for this choice and the selection of the technologies for the MAS are out of the scope of this paper.
7. Acknowledgments
The authors want to thank Yves Bayart, research and development manager of the 3 Suisses International group, the Cité Numérique as well as the Région Nord-Pas-de-Calais for their support of this research work.
8. Bibliography
Standard Interfaces for Open Source Infrastructure as a Service Platforms
Andrei IONESCU
Bucharest University of Economic Studies
andrei.ionescu@antiferno.ro
To reduce vendor lock-in and fragmentation and to evolve into a transparent platform, IaaS platforms must adhere to open standards. All the major open source IaaS platforms offer interfaces compatible with the de facto standards but mostly lack support for the de jure, open standardization efforts. Available implementations of open standards are not part of their main development efforts. More development resources, as well as consolidation of open standards, are needed to achieve increased portability and interoperability.
Keywords: Cloud Computing, RESTful API, interfaces, Open Cloud Computing Interface, Cloud Data Management Interface, Cloud Infrastructure Management Interface, Open Virtualization Format
1 Introduction
Cloud Computing is an established technology, having already lost its aura as one of the hottest and most rapidly developing topics in the industry. At the base of its services stack, the Infrastructure as a Service (IaaS) model utilizes well understood architectures, providing access, mostly, to the same types of computing and storage resources across all platform providers. New topics have entered the spotlight, such as integrated management of IaaS platforms, selection of an IaaS platform, vendor lock-in, interoperability and identity in the Cloud, and efforts were made to address them by defining and using standardized interfaces. Standard data models and technologies compatible with both the IaaS platforms and the existing Internet infrastructure had to be used. This led to the adoption of established standards and technologies such as XML web services or JSON over RESTful services.
2 Open Cloud Computing Interface
Open Cloud Computing Interface (OCCI) is a collection of community generated open specifications built through Open Grid Forum [1]. Intended to be an open and interoperable RESTful protocol and an API for all cloud-related management activities, it started with a focus on the Infrastructure-as-a-Service layer but later extended to include all the other layers in the Cloud stack.
The specifications are broken into several modules in order to achieve greater flexibility and extensibility. Separate modules describe:
- the core model, which defines an abstract representation of real-world resources intended to be manipulated through OCCI renderings [1].
- the rendering of the core model using HTTP/REST, which describes the interactions available for an OCCI implementation with the resources built using the core model [2].
- the extensions to the core model specific to the implementation of an Infrastructure as a Service API [3], defining its parameters for compute, storage and network.
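As a concrete illustration of the HTTP rendering, OCCI encodes the resource kind and its attributes in `Category` and `X-OCCI-Attribute` headers rather than in a request body. The Python sketch below only builds the headers for a hypothetical "create compute" request (the attribute values are made up and nothing is actually sent):

```python
def occi_create_compute_headers(cores, memory_gb, hostname):
    """Return headers for an OCCI HTTP-rendering 'create compute' POST."""
    return {
        # The Category header names the kind of resource being created.
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        # Resource attributes travel in X-OCCI-Attribute headers.
        "X-OCCI-Attribute": (
            f"occi.compute.cores={cores}, "
            f"occi.compute.memory={memory_gb}, "
            f'occi.compute.hostname="{hostname}"'),
        "Content-Type": "text/occi",
    }

headers = occi_create_compute_headers(2, 4.0, "vm1")
```

These headers would accompany a POST to the provider's compute collection endpoint.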
The main reasons behind the development of OCCI were identified in [4] and [5] as:
- Interoperability, demanding a standardized API and protocol.
- Integration, allowing different service providers to bring together and interconnect platforms based on different technologies.
- Portability, providing standardized data formats understood by different providers, allowing porting between them.
- Innovation, considering that established standards can be a driver for innovation.
- Reusability, Figure 1, working on two levels: first, allowing reuse of code through basic standardized APIs and, second, promoting reuse of standards in different technology fields.

DOI: 10.12948/issn14531305/19.4.2015.06
3 Cloud Data Management Interface
Cloud Data Management Interface (CDMI) describes a functional interface allowing applications to create, retrieve, update and delete data elements from the Cloud [6]. The standard is developed by SNIA, a global organization of storage solution providers. Using a CDMI compatible interface, cloud data consumers are able to discover the storage features offered by IaaS platforms. Along with data elements, the standard also allows the management of containers, accounts and retrieval of monitoring and billing information, Figure 2. CDMI is not designed to replace other object access protocols but to complement them. The standard uses RESTful protocol for building its interfaces, to keep it as simple as possible and to encourage its adoption. Adding discovery functions, it allows future extensions to the standard without breaking client compatibility.
CDMI serves as a storage protocol while, at the same time, offering a layer of standardized client-to-cloud management and cloud-to-cloud interactions. Clients can manage credentials for domains defined in the cloud, forming a hierarchical structure that has objects attached to it and building a path for accessing and controlling these objects. For cloud-to-cloud interaction it introduces globally unique identifiers linked to objects for the whole of their lifetime, to preserve their identity if they are moved or replicated between clouds. Serialization and deserialization into and from the JSON format can be used to transfer objects and their metadata. Primitives are defined which permit the clients to build transfer requests indicating the source and destination cloud for objects, along with the credentials required for accessing them.
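For illustration, a CDMI client creates a container with a PUT request carrying CDMI-specific media types and a JSON body. The sketch below assembles such a request as plain data (the path and metadata are invented for the example, and no request is actually sent):

```python
import json

def cdmi_create_container_request(name, metadata):
    """Build (but do not send) a CDMI container-creation request."""
    return {
        "method": "PUT",
        "path": f"/cdmi/{name}/",
        "headers": {
            # CDMI identifies its payloads via dedicated media types
            # and a specification-version header.
            "X-CDMI-Specification-Version": "1.1",
            "Content-Type": "application/cdmi-container",
            "Accept": "application/cdmi-container",
        },
        "body": json.dumps({"metadata": metadata}),
    }

req = cdmi_create_container_request("backups", {"project": "demo"})
```

The same media-type discipline applies to data objects (`application/cdmi-object`) and the other CDMI resource types.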
4 Cloud Infrastructure Management Interface
Cloud Infrastructure Management Interface (CIMI) is an open standard providing an API for administering Cloud Computing infrastructures. The standard is maintained by the Distributed Management Task Force (DMTF) Cloud Management Working Group, a non-profit organization of industry members. CIMI describes the model and the protocol used by Infrastructure as a Service consumers to interact with the cloud [7], addressing the runtime maintenance and provisioning of cloud services. It uses both JavaScript Object Notation (JSON) and eXtensible Markup Language (XML) to encode communication. The model described by the standard can be mapped to any existing cloud infrastructure, and it provides the means for clients to discover which features are provided by the cloud implementations. There are not many CIMI implementations, but Apache Deltacloud is one of them; since it has drivers for almost all the major Infrastructure as a Service platforms, any CIMI-compatible client is able to interact with most of the deployed clouds.
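To give an impression of the JSON encoding, a CIMI request body identifies its own type through a `resourceURI` field. The sketch below builds a hypothetical "create machine" body (the name and image URL are placeholders, not taken from any real deployment):

```python
import json

def cimi_machine_create_body(name, machine_image_url):
    """Build a CIMI MachineCreate request body in its JSON encoding."""
    return json.dumps({
        # resourceURI tells the server which CIMI type this document is.
        "resourceURI": "http://schemas.dmtf.org/cimi/2/MachineCreate",
        "name": name,
        "machineTemplate": {
            # The template references the image the machine boots from.
            "machineImage": {"href": machine_image_url},
        },
    })

body = cimi_machine_create_body("web01", "https://cloud.example.org/images/42")
```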
5 Open Virtualization Format
Another standard maintained by the DMTF, the Open Virtualization Format (OVF) describes the means to package and distribute software appliances to be run in virtual machines in a hypervisor-independent way. Packages distributed using OVF consist of one XML descriptor containing the meta-data which describes the appliance, along with its disk images, certificates and auxiliary files. In the lifecycle of a virtual appliance, Figure 3, OVF covers the packaging, distribution and deployment phases.
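The XML descriptor at the heart of an OVF package can be sketched in a few lines of Python. The following builds a deliberately minimal envelope, far from complete with respect to the full OVF schema, with one disk file reference and one virtual system:

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def minimal_ovf_descriptor(vm_name, disk_file):
    """Return a minimal OVF envelope as an XML string (illustrative only)."""
    ET.register_namespace("ovf", OVF_NS)
    env = ET.Element(f"{{{OVF_NS}}}Envelope")
    # References list the external files (disk images, etc.) in the package.
    refs = ET.SubElement(env, f"{{{OVF_NS}}}References")
    ET.SubElement(refs, f"{{{OVF_NS}}}File",
                  {f"{{{OVF_NS}}}id": "file1", f"{{{OVF_NS}}}href": disk_file})
    # A VirtualSystem describes one appliance to be deployed.
    vs = ET.SubElement(env, f"{{{OVF_NS}}}VirtualSystem",
                       {f"{{{OVF_NS}}}id": vm_name})
    ET.SubElement(vs, f"{{{OVF_NS}}}Info").text = "A virtual machine"
    return ET.tostring(env, encoding="unicode")

descriptor = minimal_ovf_descriptor("web01", "web01-disk1.vmdk")
```

A real descriptor additionally carries disk sections, hardware sections and, optionally, manifests and certificates for integrity checking.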
6 Apache Deltacloud
Started in 2010 by Red Hat as a solution for an increasingly heterogeneous cloud interface environment, the project was intended to be a vendor-neutral, openly developed API. To escape worries that it would only ever be a single-vendor effort, it was proposed as an Apache Software Foundation project and graduated to top-level status in 2012. Deltacloud provides a cloud abstraction as a standard REST API. It is not another library but a web service covering functions for the management of compute and storage resources. It exposes three different APIs, wrapping their functionalities for 15 Infrastructure as a Service platforms and 7 storage engines [9]:
• Deltacloud classic API.
• DMTF Cloud Infrastructure Management Interface (CIMI) API.
• Amazon Web Services Elastic Compute Cloud (EC2) compatible API.
7 OpenStack
OpenStack is an open source operating system for the cloud, a collection of projects used to set up and run compute and storage services. Initially developed by Rackspace Hosting and NASA, the platform is now maintained by the OpenStack Consortium, which has more than 150 members, including AT&T, Canonical, HP, IBM, Intel and Rackspace.
There are several service families under OpenStack, each with its own API for interfacing with the cloud clients and with the other services, Figure 4.
![Fig. 4. Basic OpenStack architecture, adapted after [10]](image)
Nova manages the complete lifecycle of virtual machine instances in an OpenStack deployment. It manages the compute and network resources along with the required authorizations for starting, scheduling and stopping virtual machines. It exposes all its functions through its own web services API as well as a layer compatible with Amazon Web Services EC2. Its API server is the only OpenStack component interacting directly with the outside world.
Glance is the service responsible for the storage and retrieval of machine disk images. It can use local file systems, OpenStack's own Object Store services or any storage exposing its functions through an AWS S3 compatible interface.

Swift provides object store services for OpenStack, storing and retrieving data using a RESTful API. It is one of the most mature modules, providing the base services on which Rackspace's Cloud Files service is built [11]. The Swift API is compatible with AWS S3.

Keystone provides authentication and authorization services for OpenStack, managing domains, users and roles. It is a crucial system used by all the other modules through its own REST API (Identity API).
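As an example of these service APIs, the Identity API v3 authenticates clients with a JSON document describing the credentials and the desired token scope. The sketch below builds a minimal password-authentication body for a POST to `/v3/auth/tokens` (the user, password and project names are placeholders):

```python
import json

def keystone_v3_auth_body(user, password, project, domain="Default"):
    """Build the JSON body for Keystone v3 password authentication."""
    return json.dumps({
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": user,
                    "domain": {"name": domain},
                    "password": password,
                }},
            },
            # Scoping the token to a project grants project-level roles.
            "scope": {"project": {
                "name": project,
                "domain": {"name": domain},
            }},
        }
    })

body = keystone_v3_auth_body("demo", "s3cret", "demo-project")
```

On success, Keystone returns a token (in the `X-Subject-Token` response header) that the other services accept for subsequent calls.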
Horizon provides a web portal for interacting and administering OpenStack. It uses APIs provided by the other services to build the cloud’s administrative interface.
Standards
The Amazon Web Services EC2 and S3 APIs are natively supported by the Nova and Swift modules. Third-party Python components provide support for OCCI and CDMI, while an Apache Deltacloud driver offers support for CIMI as well. There is no native support for OVF in OpenStack; in order to be executed, virtual machine image files must be manually extracted from OVF packages.
8 Apache CloudStack
Apache CloudStack is an open-source Infrastructure as a Service platform used to build public and private clouds. It is designed to allow the deployment and administration of big networks of virtual machines while providing high availability and scalability. One of its main advantages resides in the fact that it is hypervisor agnostic. It is compatible with bare metal, VMware, KVM, XenServer, Xen Cloud Platform (XCP), vSphere, LXC and Hyper-V [12].
There are multiple ways of administering an Apache CloudStack deployment: through a Web interface, using a full set of command-line utilities, or using a RESTful API. The platform also provides API implementations compatible with Amazon Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3), one of the main reasons behind its development. This allows easy porting of cloud applications to Apache CloudStack as well as hybrid and federated cloud deployments.
Architecture
Apache CloudStack features a hierarchical architecture, which allows centralized management of a big number of servers through a unique interface, Figure 5. Integration with public clouds implementing Amazon Web Services interfaces is also possible.
In its simplest form, the Apache CloudStack architecture consists of a single central server, optionally having a CloudDB instance running on it. The server's function is to manage the virtual machine instances by collaborating with the hypervisors installed on the Cloud's nodes. These nodes (physical machines) may or may not be located in the same data-center and, for ease of administration, are grouped on several levels: regions, zones, pods and clusters.

Regions are linked to the geographical distribution of the physical servers within the cloud. A region is the highest organizational unit available in an Apache CloudStack deployment. Its administration is performed by a server cluster physically located within the region. Virtual machines can be deployed by users in different zones, providing a certain level of geographical redundancy. Cloud reactivity is also increased by having the administration server physically close to the cluster stations, as well as by deploying virtual machine instances as close as possible to the final end users.
Zones usually correspond to data-centers, while it is still possible to define multiple zones within the same data-center, where they can have separate energy supplies and data lanes, providing a certain level of physical redundancy.
Zones are the highest organizational units available to the users. Starting a virtual machine instance involves selecting a target zone explicitly or having one selected by default. Images and configurations are not shared between different zones.
A zone has one or more pods, each of which consists of one or more server clusters and at least one storage server. Pods can be assimilated with racks within the data-center, and all their servers run in the same subnet. Pods are hidden from the end users, as the main reason behind grouping servers into pods is ease of administration.
Clusters are built from servers sharing the same hardware specifications and using the same hypervisor. They use the same subnet, as part of a pod, and use a shared storage space available at the pod level. Virtual machine instances can be migrated between servers within the same cluster at runtime.
Hosts are individual physical machines sharing their CPUs, memory, storage and network access for running virtual machine instances. Virtual machine hypervisors are installed on each host. Just like clusters and pods, hosts are hidden from the end user. There is no possibility to select a specific host on which a virtual machine instance is to be started.
Storage space
Storage space is segregated into primary and secondary storage. Primary storage is a critical resource, used for executing virtual machine instances along with the applications running on them, and its format depends on the hypervisor used. Both local storage (part of the VM) and external storage (seen as external volumes mounted by VMs) are part of the primary storage. Secondary storage is used for storing virtual machine images and snapshots, ISO images, etc.
Resource management
An Apache CloudStack deployment uses a management server to administer its resources. Its base functions, as described in [13], are:
- Web user interface available for both cloud administrators and end users.
- CloudStack APIs, both native and compatible with AWS EC2 and S3, using both JSON and XML for data transfer.
- Dispatch virtual machine instances to physical hosts.
- Manage public and private IP addresses.
- Manage storage space during virtual machines start-up procedure.
- Manage snapshots, disk and ISO images.
- Single point of access for cloud configuration.
The CloudStack APIs, Figure 6, use three roles [15]:
- Root admin: has access to all the cloud functions, including management of physical and virtual resources.
- Domain admin: has access to all the virtual resources available for the administered domain.
- User: access only to its own virtual machines, attached storage space and network configurations.
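When not using the Web interface, each native CloudStack API call is authenticated by signing the query string with the caller's secret key: the parameters are sorted, lowercased, hashed with HMAC-SHA1 and the digest is base64-encoded. The sketch below shows only the signing step, with made-up keys:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_cloudstack_request(params, secret_key):
    """Compute the 'signature' parameter for a CloudStack API request."""
    # Sort parameters by name and URL-encode the values.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items()))
    # HMAC-SHA1 over the lowercased query string, then base64-encode.
    digest = hmac.new(secret_key.encode("utf-8"),
                      query.lower().encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

signature = sign_cloudstack_request(
    {"command": "listVirtualMachines", "apikey": "EXAMPLE-API-KEY",
     "response": "json"},
    "EXAMPLE-SECRET-KEY")
```

The resulting signature is appended as a `signature` query parameter to the request URL.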
An optional component of the resource management stack is the usage server, providing aggregate billing data. It also allows the definition of limits and quotas at the domain level for the available number of virtual CPUs, RAM, and primary and secondary storage space.
**Standards**
Apache CloudStack implements an estimated 60% of the de facto Amazon EC2 API standard [16]. It has an OCCI implementation through the rOCCI project, but no new developments have been submitted since the end of 2013. There is no CIMI implementation or translation layer for CloudStack. The Open Virtualization Format is also not supported, as CloudStack uses native disk images. There is no Apache Deltacloud driver available for Apache CloudStack.
**10 Eucalyptus**
Eucalyptus is a Linux-based software architecture for implementing private and hybrid clouds using the enterprise architecture already in place [17]. Its name is an acronym for Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems. The main objectives behind its development were [18]:
- Designed from the start for a lean and non-intrusive install. It can coexist with other software applications on the same physical machine.
- A modular structure. Communication between its modules is language independent and based on open standards. It encourages the building of communities centered on the platform.
- Amazon Web Services EC2 and S3 API compatibility for external interfaces.
- Network virtualization, which allows isolation of user-generated traffic while also having multiple clusters on the same local network.
Architecture
Eucalyptus was designed to allow interactions using the same tools that are used for Amazon EC2. Each component is implemented as a stand-alone web service exposing its functionalities using WSDL. Each Eucalyptus installation deploys five major components (Fig. 7):
- Cloud controller, the administrator's interface with the cloud.
- Cluster controller, which schedules virtual machine deployment on the nodes and manages the virtual networks.
- Node controller, which manages the starting, querying and shutting down of virtual machines on the physical machines.
- Storage controller, which manages block storage, providing the same functions as Amazon Elastic Block Storage.
- Walrus, which manages persistent storage of data, organized in buckets and objects, compatible with Amazon S3.
The Cloud controller is a collection of web services exposing cloud administration functions. It is the entry point for the whole Eucalyptus system. It also implements a Web interface for the management of the cloud's services. These services are grouped into:
• Resource services, processing requests related to virtual machines, accepting or rejecting them based on the global system status. Service Level Agreements are monitored and enforced at this level.
• Data services, manage creation and retrieval of user and system generated data.
• Interface services, allowing user access to web interfaces and cloud services. They provide interfaces compatible with Amazon Web Services.
The Cluster controller manages the cloud nodes (physical machines). It schedules and dispatches virtual machine instance creation requests to the nodes and manages the virtual network overlay. Data about node status is gathered and aggregated at this level. By default, the cluster controller is executed on a different server than the nodes, and it also requires a direct connection with the cloud controller.
The node controller runs on each server designated to execute virtual machines. It gathers information about the hardware configuration of the machine on which it runs and reports it to the cluster controller. Node controllers start, query and stop virtual machines only on request from the cluster controller, which performs the scheduling.
Storage controllers manage network block devices. Exported volumes (elastic block storages) can be attached to virtual machines but cannot be shared between multiple instances. Data stored on these volumes is persistent after virtual machines are stopped. Snapshots of the volumes can be created and stored using Walrus.
Walrus is a put/get service for storing persistent objects. It is compatible with Amazon S3 and provides SOAP and REST interfaces to its functions. Eucalyptus uses Walrus for virtual machine image storing.
Standards
Eucalyptus was designed from the start to be Amazon Web Services compatible: it implements large parts (but not all) of the EC2 API, accessed through Euca2ools, a set of command-line tools, and the S3 API through the Walrus component. There is no OCCI API support, although an implementation was defined as a target of "Flexible Services for the Support of Research", a 2010 project, without a palpable outcome. There is no CDMI or OVF support or implementation for Eucalyptus either. While Eucalyptus provides no native support, Apache Deltacloud has a Eucalyptus driver, giving indirect access to its interfaces through the CIMI API.
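To illustrate what EC2 API compatibility entails at the wire level, the sketch below computes the Signature Version 2 request signature that EC2-style Query APIs (the interface Euca2ools target) expect. This is a minimal sketch: the endpoint, credentials and parameter values are all hypothetical, not taken from any real deployment.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_query(host, params, secret_key):
    """Compute an AWS Signature Version 2 for an EC2-style Query API call.

    The string to sign is the HTTP verb, host, path and the canonical
    (sorted, percent-encoded) query string, joined by newlines.
    """
    canonical = "&".join(
        f"{quote(k, safe='-_.~')}={quote(v, safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, "/", canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Hypothetical Eucalyptus front-end and credentials.
params = {
    "Action": "DescribeInstances",
    "Version": "2010-08-31",
    "AWSAccessKeyId": "AKIDEXAMPLE",
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
    "Timestamp": "2015-01-01T00:00:00Z",
}
signature = sign_ec2_query("cloud.example.org:8773", params, "not-a-real-key")
```

Because the signing scheme is deterministic, any client able to produce this signature can target an EC2-compatible endpoint, which is precisely what makes AWS API compatibility the portability baseline discussed above.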
11 OpenNebula
OpenNebula was designed to offer a simple and flexible solution with a complete set of functionalities for installing and managing enterprise clouds and virtualized data centers [19]. Started in 2006 as a research project, with its first version published in 2008, OpenNebula is now an open-source project. It offers four types of interfaces for interacting with and administering the cloud:
- For the consumers of cloud resources, there is an interactive web interface as well as APIs compatible with Amazon Web Services EC2, EBS and S3.
- Administrators have access to a full set of command line utilities as well as to a dedicated web interface.
- Service integrators can use low level APIs written in Ruby, Java and XMLRPC API.
- A catalogue of third party appliances, ready to be used in the OpenNebula environments.
OpenNebula has a modular architecture (Fig. 8), offering the possibility to use off-the-shelf as well as enterprise-grade hardware and software components for hypervisors, monitoring, storage and network communications. Any OpenNebula deployment will feature:
- A front-end executing the OpenNebula services.
- Hosts with hypervisors providing the resources for executing virtual machines.
- Data stores for the virtual machines images.
- Physical networks linking the cloud components.
The front-end is a server with OpenNebula installed, connected to all of the cloud's hosts. The management daemons, the scheduler and the administration web interface run on this machine.
Any deployment consists of at least one zone (Fig. 9): a group of interconnected physical hosts with hypervisors controlled by OpenNebula [20], typically no more than 500 per zone. Multiple zones can be managed using the OpenNebula oZones component and combined to form a Virtual Data Center (VDC). Using zones guarantees complete user and domain isolation, increased scalability, and centralized management [21].
Fig. 9. OpenNebula Zones Architecture
Hosts are physical machines on which virtual machines are executed. A supported hypervisor must be installed on them (Xen, KVM or VMware) under one of the operating systems certified for OpenNebula (RedHat Enterprise Linux, Ubuntu Server, SUSE Linux Enterprise, CentOS, openSUSE, Debian).
Virtual machine images are managed through datastores, usually a SAN/NAS, always available to the front-end server. System datastores are those used for running virtual machine images. Image datastores store disk images; these are copied/cloned from/to the system datastores when virtual machines are started/stopped or when snapshots are taken. An image datastore can be [22]:
- A filesystem, when the images are stored as files on volumes mounted from a NAS/SAN.
- VMFS, a specialized datastore using the VMFS format for use with VMware hypervisors. It is not UNIX-compatible and cannot be mounted on the front-end server.
- LVM, when LVM volumes are used instead of file systems.
- Ceph, specialized for use with Ceph block devices.
Interfaces
OpenNebula was designed as an expandable, modular system, allowing deployment of customized cloud computing architectures and easy interaction with data-center services. Its interfaces are subdivided into cloud interfaces, targeted at cloud resource consumers, and system interfaces (Fig. 10). Cloud interfaces provide an abstraction layer above OpenNebula's services, allowing software tools and components to be built for cloud interaction. System interfaces provide access to all the cloud's services and are used to adapt the cloud to the target infrastructure.
Cloud interfaces manage virtual machines, networks and images using standard APIs such as AWS EC2 and OGF OCCI. System interfaces use XML-RPC or OpenNebula’s own API, having bindings for Ruby and Java programming languages.
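As an illustration of the XML-RPC system interface, Python's standard `xmlrpc.client` can address an OpenNebula front-end directly. The method name `one.vm.info` and the `user:password` session string follow OpenNebula's XML-RPC conventions, while the host, port and credentials below are hypothetical; to keep the example runnable offline, we only marshal the request instead of sending it.

```python
import xmlrpc.client

# OpenNebula's XML-RPC endpoint; the host below is hypothetical.
ENDPOINT = "http://frontend.example.org:2633/RPC2"

# A client would be created like this (no network traffic happens yet):
server = xmlrpc.client.ServerProxy(ENDPOINT)

# OpenNebula calls take a session string ("user:password") as their
# first argument; one.vm.info(session, vm_id) returns the VM as XML.
# Here we only marshal the request body, so the example runs offline.
request_body = xmlrpc.client.dumps(
    ("oneadmin:opennebula", 42), methodname="one.vm.info"
)
```

A real call would simply be `server.one.vm.info("oneadmin:opennebula", 42)`; the Ruby and Java bindings mentioned above wrap exactly these XML-RPC methods.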
Standards
As with all the other open-source Infrastructure as a Service platforms, OpenNebula implements the de facto standard: the Amazon Web Services EC2 and S3 APIs. OCCI has a native implementation through a pluggable component and another one via the rOCCI project. OVF is supported through a separate Java component. No direct CIMI interface is available, but Apache Deltacloud provides a driver for OpenNebula using its OCCI server component.
12 Conclusion
As the Cloud Computing industry matures, the Infrastructure as a Service platforms tend to implement a homogenized set of functions. Choosing a platform may depend on its openness and adherence to standards, exposing the cloud or the appliance to the greatest number of clients. While implementing open standards might look like a sure way to achieve this goal, in practice even the major open-source platforms offer limited support in this area. Portability and interoperability can, for now, only be assured by using the de facto standard, the Amazon Web Services APIs. Putting Apache Deltacloud aside, there is no native or third-party support for the Cloud Infrastructure Management Interface on any of the studied IaaS platforms. At the same time, Apache Deltacloud adds its own API and cannot ignore AWS EC2/S3 compatibility.
Table 1. Main standards support in major open source IaaS platforms.
<table>
<thead>
<tr>
<th>Standard Platform</th>
<th>OCCI</th>
<th>CDMI</th>
<th>CIMI</th>
<th>OVF</th>
<th>AWS</th>
<th>Apache Deltacloud driver</th>
</tr>
</thead>
<tbody>
<tr>
<td>OpenStack</td>
<td>3rd party component</td>
<td>3rd party component for CDMI 1.0.2</td>
<td>Apache Deltacloud</td>
<td>-</td>
<td>EC2, S3</td>
<td>Yes</td>
</tr>
<tr>
<td>Apache CloudStack</td>
<td>rOCCI</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>EC2, S3, 60%</td>
<td>No</td>
</tr>
<tr>
<td>OpenNebula</td>
<td>rOCCI, native OCCI 1.1</td>
<td>-</td>
<td>Apache Deltacloud through OCCI</td>
<td>3rd party Java component</td>
<td>EC2, S3</td>
<td>Yes</td>
</tr>
<tr>
<td>Eucalyptus</td>
<td>-</td>
<td>-</td>
<td>Apache Deltacloud</td>
<td>-</td>
<td>EC2, S3</td>
<td>Yes</td>
</tr>
</tbody>
</table>
Cloud Standards Coordination, an initiative launched by the European Commission and the European Telecommunications Standards Institute, identified in an October 2015 report no fewer than 16 standardization organizations and 114 documents related to cloud standards (94 with the status "Published", 14 "Draft" and 6 "In Progress") [24]. None of these is likely to enjoy in the near future the massive adoption of the AWS APIs, but lessons from that success can be and are being learned and, as cloud-enabled applications become ubiquitous, we can only hope that an open, vendor-neutral cloud interfacing specification gains traction and becomes both a de facto and a de jure standard.
References
Andrei IONESCU obtained his Master of Science diploma in Economic Informatics in 2014. He has more than 10 years of experience as a software developer, currently working as an independent contractor. Since 2014 he is a PhD candidate at the Bucharest University of Economic Studies with a thesis in the field of Resource Management in Cloud Computing.
DOI: 10.12948/issn14531305/19.4.2015.06
Køpsala
Transition-Based Graph Parsing via Efficient Training and Effective Encoding
Hershcovich, Daniel; De Lhoneux, Miryam; Kulmizev, Artur; Pejhan, Elham; Nivre, Joakim
Published in:
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies
DOI:
10.18653/v1/2020.iwpt-1.25
Publication date:
2020
Document version
Publisher's PDF, also known as Version of record
Document license:
CC BY
Citation for published version (APA):
Køpsala: Transition-Based Graph Parsing via Efficient Training and Effective Encoding
Daniel Hershcovich∗♦ Miryam de Lhoneux∗♦ Artur Kulmizev♥
Elham Pejhan♥ Joakim Nivre♥
♦University of Copenhagen ♥Uppsala University
{dh, ml, ep}@di.ku.dk,
{artur.kulmizev, joakim.nivre}@lingfil.uu.se
Abstract
We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020. Our system is a pipeline consisting of off-the-shelf models for everything but enhanced graph parsing, and for the latter, a transition-based graph parser adapted from Che et al. (2019). We train a single enhanced parser model per language, using gold sentence splitting and tokenization for training, and rely only on tokenized surface forms and multilingual BERT for encoding. While a bug introduced just before submission resulted in a severe drop in precision, its post-submission fix would bring us to 4th place in the official ranking, according to average ELAS. Our parser demonstrates that a unified pipeline is effective for both Meaning Representation Parsing and Enhanced Universal Dependencies.
1 Introduction
The IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies (Bouma et al., 2020) involves sentence segmentation, tokenization, lemmatization, part-of-speech tagging, morphological analysis, basic dependency parsing, and finally (for the first time) enhanced dependency parsing. The enhancements encode case information, elided predicates, and shared arguments due to conjunction, control, raising and relative clauses (see Figures 1 and 2).
In Universal Dependencies v2 (UD; Nivre et al., 2020), enhanced dependencies (ED) form a separate dependency graph from the basic dependency tree (BD). However, ED is almost a superset of BD,1 and so most previous approaches (Schuster and Manning, 2016; Nivre et al., 2018) have attempted to recover ED from BD using language-specific rules. On the other hand, Hershcovich et al. (2018) experimented with TUPA, a transition-based directed acyclic graph (DAG) parser originally designed for parsing UCCA (Abend and Rappoport, 2013), for supervised ED parsing. They converted ED to UCCA-like graphs and did not use pre-trained contextualized embeddings, yielding sub-optimal results. Taking a similar approach, we adapt a transition-based graph parser (Che et al., 2019) designed for Meaning Representation Parsing (Oepen et al., 2019), but parse ED directly and use BERT embeddings (Devlin et al., 2019).
The main contribution of our work is a transition system supporting the graph structures exhibited by ED, including null nodes (meaning this is not a strictly bilexical formalism), cycles and crossing graphs (§3.1), as Figure 4 demonstrates for the sentence from Figure 2. We parse ED completely separately from BD, demonstrating the applicability of a full graph parser, starting from only segmented and tokenized text, to ED. Our code is available at https://github.com/coastalcph/koepsala-parser.
2 Preprocessing
As the focus of this shared task is ED parsing, we rely on existing systems for preprocessing. Here, we consider two off-the-shelf pipelines: STANZA
---
1Some BD arcs are deleted in ED, e.g., orphan arcs.
Deze is de modernste en grootste hal van België, en de enige die voldoet aan de Olympische normen. (Example sentence from Figure 2: 'This is the most modern and largest hall in Belgium, and the only one that meets the Olympic standards.')
(Qi et al., 2020)\textsuperscript{2} and UDPIPE 1.2 (Straka and Straková, 2017; Straka et al., 2016),\textsuperscript{3} both of which have models pre-trained on UD v2.5 treebanks. We experiment with either pipeline during prediction to process the raw text files for the dev and test sets, eventually selecting UDPIPE for our primary submission. This process entails sentence segmentation, tokenization, lemmatization, part-of-speech tagging, morphological feature tagging, and BD parsing.\textsuperscript{4} For training our ED parser (\S 3), however, we use gold inputs for simplicity. We use the conllu\textsuperscript{5} Python package to read CoNLL-U files.
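For intuition about the data the conllu package reads, a CoNLL-U token line consists of ten tab-separated columns, with the enhanced graph encoded in the DEPS column as head:relation pairs separated by `|`; null nodes receive decimal ids such as 8.1. The following is a minimal stdlib sketch of that format (the example line is constructed for illustration, not taken from a treebank):

```python
# The ten CoNLL-U columns, in order.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line):
    """Parse one CoNLL-U token line into a dict of its 10 columns,
    expanding the DEPS column (the enhanced graph) into
    (head, relation) pairs."""
    token = dict(zip(FIELDS, line.rstrip("\n").split("\t")))
    if token["deps"] != "_":
        # Heads stay strings: null nodes use decimal ids like "8.1".
        token["enhanced"] = [tuple(dep.split(":", 1))
                             for dep in token["deps"].split("|")]
    else:
        token["enhanced"] = []
    return token

# A constructed token with two enhanced heads (a shared argument):
line = "5\tvoldoet\tvoldoen\tVERB\t_\t_\t3\tacl:relcl\t3:acl:relcl|0:root\t_"
token = parse_token_line(line)
```

Note how the basic tree contributes exactly one head per token (the HEAD/DEPREL columns), while DEPS may carry several, which is what makes ED a graph rather than a tree.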
**Preprocessing model selection.** Since the dev and test data do not denote their source treebanks, we simply process the text using the pipeline model trained on the language’s largest treebank. To experiment with an alternative method, for languages with more than one treebank, we also train UDPIPE models on combined training treebanks. Table 1 shows the comparison of LAS on the combined dev set, for these models and for the models (pre-)trained on the language’s largest treebank. The results show that using the combined training sets does not lead to consistent improvements in terms of LAS, and so we continue using pre-trained treebank-specific preprocessing models henceforth.

**Table 1:** LAS on the combined dev set for UDPIPE models trained on the language’s combined training treebanks and the models trained on the language’s largest treebank. No consistent trend is observed.
<table>
<thead>
<tr>
<th>Language</th>
<th>Czech</th>
<th>Dutch</th>
<th>Estonian</th>
<th>Polish</th>
</tr>
</thead>
<tbody>
<tr>
<td>combined</td>
<td>78.88</td>
<td>76.50</td>
<td>77.01</td>
<td>82.96</td>
</tr>
<tr>
<td>largest</td>
<td>83.97</td>
<td>74.97</td>
<td>77.61</td>
<td>82.59</td>
</tr>
</tbody>
</table>
3 Transition-Based Enhanced Dependency Parser
Our system is a transition-based graph parser, based on the HIT-SCIR system (Che et al., 2019), which achieved the highest average score across frameworks (AMR, EDS, UCCA, DM and PSD) in the CoNLL 2019 shared task on Meaning Representation Parsing (MRP; Oepen et al., 2019). It is written in the AllenNLP (Gardner et al., 2018) framework. For training efficiently, it employs stack LSTMs (Dyer et al., 2015), batching operations across sentences. For better encoding, HIT-SCIR fine-tuned BERT (Devlin et al., 2019) while training the parser.
A transition-based parser operates by manipulating a buffer (initially containing the input words provided by the preprocessor, see §2) and a stack (initially containing only the root, i.e., the word at index 0), to incrementally create the output dependency graph. At each point in the parsing process, a transition is selected out of a pre-defined set of possible transitions. A classifier is trained to predict the best transition to apply at each step, by mimicking an oracle during training (see §3.1).
HIT-SCIR used a different transition system per framework (AMR, EDS, UCCA; and one system for DM and PSD), according to the graph properties of each and based on existing framework-specific parsers (Liu et al., 2018; Buys and Blunsom, 2017; Hershcovich et al., 2017; Wang et al., 2018). We construct a transition system for ED using subsets of transitions from two of the HIT-SCIR systems: their system for DM and PSD, as well as their system for UCCA, with some further adaptations specific to ED graphs.
3.1 Transition System
Our system contains the following transitions: \{SHIFT, LEFT-EDGE\_l, RIGHT-EDGE\_l, REDUCE-0, REDUCE-1, NODE, SWAP and FINISH\}. The SHIFT transition pops the first element of the buffer and pushes it onto the stack. The LEFT-EDGE\_l and RIGHT-EDGE\_l transitions add an arc\(^6\) between the two top items of the stack with label \(l\). We need two different REDUCE transitions, to pop the topmost and the second-topmost item of the stack, which we name REDUCE-0 and REDUCE-1 respectively. This makes it possible to construct length-2 cycles, which ED allows (and most MRP frameworks do not). The NODE transition inserts a null node as the first element of the buffer, which is needed to support null nodes. SWAP moves the second-top node of the stack to the buffer, thus swapping the order between the two top nodes of the stack. This is necessary for handling crossing graphs (analogous to non-projective trees). Finally, FINISH terminates the transition sequence. A formal definition of the transition set is shown in Figure 3.
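The transitions above can be illustrated as operations on a configuration of stack, buffer and arc set. This is a simplified sketch for intuition, not the actual Køpsala implementation (which drives these operations with a classifier over stack-LSTM features); constraint checking is omitted.

```python
class Config:
    """A parser configuration: stack, buffer, and arcs built so far.

    Node ids are word indices, the root is index 0, and null nodes
    get string ids. Simplified sketch of the transition system above.
    """

    def __init__(self, words):
        self.stack = [0]                        # root at index 0
        self.buffer = list(range(1, len(words) + 1))
        self.arcs = set()                       # (head, dependent, label)
        self.num_nulls = 0

    def shift(self):                            # SHIFT
        self.stack.append(self.buffer.pop(0))

    def left_edge(self, label):                 # LEFT-EDGE_l: s0 -> s1
        self.arcs.add((self.stack[-1], self.stack[-2], label))

    def right_edge(self, label):                # RIGHT-EDGE_l: s1 -> s0
        self.arcs.add((self.stack[-2], self.stack[-1], label))

    def reduce_0(self):                         # REDUCE-0: pop topmost
        self.stack.pop()

    def reduce_1(self):                         # REDUCE-1: pop second-topmost
        self.stack.pop(-2)

    def node(self):                             # NODE: insert a null node
        self.num_nulls += 1
        self.buffer.insert(0, f"null-{self.num_nulls}")

    def swap(self):                             # SWAP: s1 back to buffer
        self.buffer.insert(0, self.stack.pop(-2))

# A tiny run for a two-word sentence: attach the subject, then the root arc.
c = Config(["He", "ate"])
c.shift(); c.shift()            # stack: [0, 1, 2]
c.left_edge("nsubj")            # 2 -> 1
c.reduce_1()                    # pop 1, which now has its head
c.right_edge("root")            # 0 -> 2
```

Because EDGE transitions do not pop the dependent, the same pair can later receive a second arc in the opposite direction, which is how the length-2 cycles mentioned above become constructible.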
Separate EDGE transitions exist for each edge label. Labels containing coordination or case suffixes (such as nmod:van) are treated as any other label and are not split, resulting in a large number of transitions for some languages, shown in Table 2.
NODE transitions, on the other hand, do not select any label or features, since null nodes are only evaluated with respect to their incoming and outgoing edges. All other information is ignored, and thus not predicted by the parser: predicted null nodes are thus only placeholders.
**Constraints.** In addition to the modified transition set, we change the constraints for some transitions according to the required graph structure.
Since LEFT-EDGE\_l and RIGHT-EDGE\_l transitions do not reduce the dependent, we need to ensure that we do not draw the same arc twice. For this reason, these transitions are not allowed if there is already an arc with label \(l\) between the two nodes. We also disallow to add an arc with the root as dependent.
To ensure every node gets attached to at least one head, we disallow the REDUCE-0 and REDUCE-1 transitions for nodes that do not have a head yet. We also disallow reducing the root.
For the SWAP transition, we maintain the generated order of each node, assigned when the node is shifted (for words) or created (for null nodes). To prevent infinite loops during inference, we only allow swapping nodes whose order in the stack is the same as their generation order.
To limit repeated actions, we arbitrarily constrain NODE transitions such that there are no more null nodes than words (although a lower limit would suffice), and EDGE transitions to limit the number of heads per node to 7.\footnote{While the observed number of heads per node in the data goes up to 36, in the training data there is only a small minority of cases where a node has more than 7 heads.}
<table>
<thead>
<tr>
<th>Language</th>
<th>Total</th>
<th>EDGE</th>
<th>w/ suffix</th>
</tr>
</thead>
<tbody>
<tr>
<td>ARABIC</td>
<td>402</td>
<td>395</td>
<td>345</td>
</tr>
<tr>
<td>BULGARIAN</td>
<td>197</td>
<td>191</td>
<td>137</td>
</tr>
<tr>
<td>CZECH</td>
<td>768</td>
<td>761</td>
<td>702</td>
</tr>
<tr>
<td>DUTCH</td>
<td>393</td>
<td>386</td>
<td>336</td>
</tr>
<tr>
<td>ENGLISH</td>
<td>300</td>
<td>293</td>
<td>232</td>
</tr>
<tr>
<td>ESTONIAN</td>
<td>445</td>
<td>438</td>
<td>381</td>
</tr>
<tr>
<td>FINNISH</td>
<td>266</td>
<td>259</td>
<td>210</td>
</tr>
<tr>
<td>FRENCH</td>
<td>112</td>
<td>106</td>
<td>59</td>
</tr>
<tr>
<td>ITALIAN</td>
<td>281</td>
<td>274</td>
<td>216</td>
</tr>
<tr>
<td>LATVIAN</td>
<td>238</td>
<td>232</td>
<td>161</td>
</tr>
<tr>
<td>LITHUANIAN</td>
<td>323</td>
<td>317</td>
<td>263</td>
</tr>
<tr>
<td>POLISH</td>
<td>676</td>
<td>669</td>
<td>615</td>
</tr>
<tr>
<td>RUSSIAN</td>
<td>944</td>
<td>938</td>
<td>861</td>
</tr>
<tr>
<td>SLOVAK</td>
<td>266</td>
<td>259</td>
<td>204</td>
</tr>
<tr>
<td>SWEDISH</td>
<td>209</td>
<td>202</td>
<td>153</td>
</tr>
<tr>
<td>TAMIL</td>
<td>146</td>
<td>140</td>
<td>103</td>
</tr>
<tr>
<td>UKRAINIAN</td>
<td>290</td>
<td>283</td>
<td>225</td>
</tr>
</tbody>
</table>
Table 2: Number of transitions for each language.
---
\(^6\)For consistency, we keep the transition nomenclature using "EDGE", although they create directed dependency arcs. Note that in analogous transitions in some transition systems, such as ARC-EAGER (Nivre, 2003), the dependent of the transition is also popped out of the stack as part of either of these two transitions. Here, since dependents can have multiple heads and can have arcs with multiple labels, we stick to the EDGE action and use our two REDUCE transitions to pop elements of the stack when necessary.
FINISH is only allowed when the buffer is empty and the stack contains only the root. If no valid transition is available, the sequence is terminated prematurely by applying the FINISH transition, regardless of the FINISH constraints.
Oracle. We use a static oracle similar to HIT-SCIR (a single “gold” transition sequence is given during training, which the parser is forced to follow), but develop one for our transition system.
The oracle deterministically chooses the transition to take given the current configuration. Let $s_1$ and $s_0$ be the two top items of the stack and $b$ the first item of the buffer (if these are defined in the current configuration). If the buffer is empty and the stack only contains the root, take a FINISH transition. Otherwise, if there is an arc between $s_1$ and $s_0$ with label $l$ that has not yet been constructed, take the corresponding RIGHT-EDGE$_l$ or LEFT-EDGE$_l$ action. Otherwise, if $s_0$ has a null-node dependent, take a NODE transition. Otherwise, if $s_0$ has all its heads and dependents, take REDUCE-0; if $s_1$ has all its heads and dependents, take REDUCE-1. Otherwise, if $s_1$ and $s_0$ are in their generated order and $s_0$ has a head or a dependent in the stack that is not $s_1$, take a SWAP. Otherwise, SHIFT. Figure 4 shows an example transition sequence.
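The oracle's decision cascade can be written as a plain function over the configuration, with gold and already-built arcs as sets of (head, dependent, label) triples. This is an illustrative simplification, not the Køpsala code; in particular, SWAP's generation-order bookkeeping is omitted, and `needs_null` is a hypothetical helper.

```python
def oracle(stack, buffer, gold_arcs, built_arcs, needs_null):
    """Choose the next transition, mirroring the cascade above.

    gold_arcs and built_arcs are sets of (head, dependent, label);
    needs_null(n) says whether node n still lacks a null-node dependent.
    """
    if not buffer and stack == [0]:
        return "FINISH"
    pending = gold_arcs - built_arcs
    if len(stack) >= 2:
        s1, s0 = stack[-2], stack[-1]
        for head, dep, label in pending:
            if (head, dep) == (s1, s0):
                return f"RIGHT-EDGE({label})"
            if (head, dep) == (s0, s1):
                return f"LEFT-EDGE({label})"
    s0 = stack[-1]
    if needs_null(s0):
        return "NODE"

    def done(n):  # all of n's gold heads and dependents already built?
        return not any(n in (h, d) for h, d, _ in pending)

    if s0 != 0 and done(s0):
        return "REDUCE-0"
    if len(stack) >= 2 and stack[-2] != 0 and done(stack[-2]):
        return "REDUCE-1"
    lower = set(stack[:-2])
    if len(stack) >= 2 and any(
        (h == s0 and d in lower) or (d == s0 and h in lower)
        for h, d, _ in pending
    ):
        return "SWAP"
    return "SHIFT"
```

Running this oracle on the gold graph of a sentence yields the single "gold" transition sequence that the classifier is trained to follow.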
3.2 Classifier
The parser uses BERT (Devlin et al., 2019) for token representation. While Che et al. (2019) used a pre-trained English model (wwm_cased_L-24_H-1024_A-16), we replaced it with a pre-trained multilingual one (multi_cased_L-12_H-768_A-12), trained on 104 languages, including all 17 languages participating in the shared task. As done by Che et al. (2019), we use the bert-pretrained text field embedder from AllenNLP, which extracts the first word-piece of each token, applying a scalar mix over all layers of the transformer.
The transition classifier is a stack-LSTM (Dyer et al., 2015) with only BERT embedding features for words, as well as a scalar feature denoting the ratio between the number of (null) nodes and the number of words (Hershcovich et al., 2017), as in HIT-SCIR. We do not fine-tune BERT due to memory limitations, though fine-tuning would likely result in improved performance.
3.3 Postprocessing
The enhanced graphs are required to be connected, i.e., every node must be reachable from the root.\footnote{This is enforced by the task organizers by running validate.py --level 2 --lang ud on the system predictions before evaluation.}
While the transition constraints ensure that every node has a head, there may be unconnected cycles at the end of the parse, resulting in invalid graphs. To fix the problem, at the end of the parse, we iteratively find the unconnected node with the most descendants, and attach it to the predicate (the first dependent of the root) with an orphan-labeled arc. In addition to unconnected cycles, this resolves the problem of prematurely terminated transition sequences due to no valid transition being available according to the constraints: instead of resulting in partially-constructed graphs, headless nodes are similarly attached with an orphan-labeled arc to the predicate, if it exists, or otherwise to the root.
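The repair described above amounts to a reachability computation followed by iterative attachment. The sketch below is an illustrative simplification with integer node ids and a hypothetical helper name, not the actual Køpsala routine.

```python
from collections import defaultdict

def connect_graph(nodes, arcs):
    """Make every node reachable from the root (node 0) by attaching
    unconnected subgraphs with orphan-labeled arcs.

    nodes: iterable of node ids; arcs: set of (head, dep, label) triples.
    Returns the augmented arc set.
    """
    children = defaultdict(set)
    for h, d, _ in arcs:
        children[h].add(d)

    def reachable(start):
        seen, frontier = {start}, [start]
        while frontier:
            n = frontier.pop()
            for c in children[n] - seen:
                seen.add(c)
                frontier.append(c)
        return seen

    # The predicate is the first dependent of the root, if any.
    root_deps = sorted(d for h, d, _ in arcs if h == 0)
    attach_to = root_deps[0] if root_deps else 0

    connected = reachable(0)
    while True:
        unconnected = [n for n in nodes if n not in connected]
        if not unconnected:
            return arcs
        # Pick the unconnected node with the most descendants, so one
        # orphan arc reconnects a whole subgraph (or cycle) at once.
        best = max(unconnected, key=lambda n: len(reachable(n)))
        arcs.add((attach_to, best, "orphan"))
        children[attach_to].add(best)
        connected = reachable(0)
```

In the example below, nodes 3 and 4 form an unconnected length-2 cycle; a single orphan arc to the predicate (node 1) reconnects both.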
Parsing tragedy. Our postprocessing procedure to attach unconnected subgraphs had a bug at the time of submission, where many nodes were incorrectly identified as unconnected and thus unnecessarily attached to the predicate/root. While this still yielded valid graphs, precision dropped precipitously compared to before the introduction of the postprocessing procedure. Due to the late stage of the evaluation period at which we made this change, we failed to properly monitor our development scores and could not identify the cause of the drop in time, resulting in low official scores. However, after submission we identified and fixed the bug, restoring our parser's accuracy to the range we had observed during development.
### 3.4 Training
For training the ED parser we do not simply train it on the largest treebank per language, but rather train it on the concatenated training treebanks per language. In preliminary experiments, this did lead to improvements in terms of combined dev ELAS over treebank-specific models, contrary to our findings in BD parsing for preprocessing (§2). We train our models on an NVIDIA P100 GPU with a batch size of 8. All other hyperparameters can be found in the configuration files in the repository.\footnote{https://github.com/coastalcph/koepsala-parser/blob/master/config/transition_eud.jsonnet}
Training until convergence took 1h30 (for Tamil, the smallest treebank) up to 2 days (for Arabic, which contains many long sentences). Prediction on the dev set took between 4 minutes (for Tamil) and 55 minutes (for Czech), ranging from 117 words/second (7 sentences/second, for Tamil) to 1,300 words/second (81 sentences/second, for Czech), including the model loading time.
3.5 Baselines
In addition to providing validation scores for our trained parsers, we consider three competitive baselines, as provided by the task organizers:
- **B1**: gold standard dependency trees copied as enhanced graphs. Though this can be technically considered an upper bound, as gold tree information is provided, it should nonetheless provide some idea of how much of the enhanced graph can be derived from the dependency tree.
- **B2**: predicted trees yielded by UDPipe models trained on UD v2.5 (using the largest treebank where applicable), copied as enhanced graphs. This is more representative than B1 of realistic parsing scenarios, which rely on predictions.
- **B3**: similar to B2, but applying the Stanford enhancer post-hoc over the predicted trees. Scores for Finnish and Latvian were not provided.
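For intuition, copying a dependency tree into the enhanced representation (the idea behind B1 and B2) just means filling the DEPS column (the ninth CoNLL-U field) with the basic HEAD:DEPREL edge. A minimal sketch for a single token line (the function name is illustrative):

```python
def copy_tree_as_enhanced(conllu_line):
    """Fill the DEPS column (9th of 10) of a CoNLL-U token line with the
    basic HEAD:DEPREL edge, i.e. copy the dependency tree as the
    enhanced graph, as the B1/B2 baselines do.
    """
    cols = conllu_line.rstrip("\n").split("\t")
    head, deprel = cols[6], cols[7]          # HEAD and DEPREL columns
    cols[8] = f"{head}:{deprel}" if head != "_" else "_"
    return "\t".join(cols)
```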
### 4 Results
Table 3 displays our results on the per-language (not per-treebank) test partitions of the shared task data. As explained in §2, for languages with multiple training treebanks available (Czech, Estonian, Dutch, Polish), we preprocessed the raw text of each treebank using the pipeline trained on the largest treebank available for that language (e.g. alpino for Dutch). Also, as mentioned in §3.4, we then trained our parsers on the concatenation of each language's treebanks, so that we could parse at the language level (as opposed to the treebank level). Though we observed scant differences between the two preprocessing pipelines, \textsc{udpipe} produced fewer validation errors, so we adopted it as the main preprocessor for our official submission.
It is apparent in Table 3 that the unconnected graph issue (described in §3.3) severely affected our official submission to the shared task (the official column). After diagnosing and fixing this problem, we observed an improvement of 14.1 ELAS, which is consistent with our scores on the treebanks' development sets. With this in mind, our (fixed) parser performs in a generally stable fashion across languages, with an average ELAS of 76.48 and a standard deviation of 6.86. Among our highest-scoring languages are Bulgarian, French, and Italian, the former two of which are corroborated by the strong B1 baseline. Tamil is the notable exception among all results, with 56.85 ELAS. We surmise that treebank size is the biggest factor in this degradation of performance, as Tamil has by far the smallest treebank at 400 sentences, leaving our parser with comparatively few graph samples to train on.
When comparing against the organizer-provided baselines, we see a strong improvement of our system over both B2 and B3. This is encouraging, as it demonstrates the benefit of parsing enhanced dependency graphs directly, rather than relying on predicted trees to relay the enhanced structure (B2) or employing a heuristic-driven post-processor to derive it (B3). Furthermore, though the organizers consider B1 an indirect upper bound due to the gold-standard tokenization and dependency structure employed therein, we can nonetheless observe an advantage in using our parser for some languages: Arabic \((+2.16 \text{ ELAS})\), Finnish \((+3.32)\), Italian \((+4.46)\), and Ukrainian \((+0.86)\). Again, this is promising, given that our parser does not rely on any tree structure in order to parse graphs.
### 4.1 Pre-processing Analysis
Since the test data was provided in a raw, untokenized format, we were interested in the extent of accuracy loss we might observe when relying on off-the-shelf preprocessors. Table 4 displays these results over the development data. When we employ predicted segmentation, etc. from either the \textsc{stanza} or the \textsc{udpipe} pipeline, we observe a slight degradation in accuracy compared to the gold data. Omitting Czech, Estonian, Dutch, and Polish (which had several associated treebanks), the remaining languages degrade by an average of 2.00 ELAS for \textsc{stanza} and 2.32 for \textsc{udpipe}. Though one typically expects such a degradation when evaluating with predicted segmentation, we did observe some surprisingly large gaps in accuracy: namely for Arabic \((-4.02, -8.32\) for \textsc{stanza} and \textsc{udpipe}, respectively) and Tamil \((-12.19, -8.59)\). The latter can likely be explained by its small training set, which undoubtedly affects all components of the preprocessing pipeline.
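As a sanity check, the degradation averages quoted above can be recomputed directly from the Table 4 values for the single-treebank languages:

```python
# Development ELAS from Table 4 for the single-treebank languages
# (Czech, Estonian, Dutch, and Polish are omitted, as in the text).
# language: (stanza, udpipe, gold tokenization)
scores = {
    "Arabic":     (73.66, 69.36, 77.68),
    "Bulgarian":  (83.46, 83.17, 83.89),
    "English":    (80.79, 79.80, 82.77),
    "Finnish":    (80.87, 80.59, 81.89),
    "French":     (86.05, 85.29, 88.97),
    "Italian":    (85.24, 85.04, 85.52),
    "Latvian":    (79.00, 78.39, 79.28),
    "Lithuanian": (74.92, 74.84, 75.51),
    "Russian":    (78.53, 78.60, 78.87),
    "Slovak":     (77.54, 77.33, 79.17),
    "Swedish":    (78.26, 78.18, 78.37),
    "Tamil":      (50.66, 54.26, 62.85),
    "Ukrainian":  (79.70, 79.67, 79.89),
}

def mean_degradation(column):
    """Average ELAS loss of a predicted-preprocessing column vs. gold
    tokenization (column 0 = stanza, column 1 = udpipe)."""
    deltas = [row[2] - row[column] for row in scores.values()]
    return sum(deltas) / len(deltas)
```

Rounding to two decimals reproduces the 2.00 (\textsc{stanza}) and 2.32 (\textsc{udpipe}) averages reported in the text.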
For the multi-treebank languages, tokenization and annotation conventions may differ across treebanks, so the choice of preprocessing model matters. Here, we simply chose the model trained on the largest treebank (\textsc{picTree} for Czech, \textsc{edt} for Estonian, \textsc{alpino} for Dutch, and \textsc{lfg} for Polish), as we consider this a simple yet reliable heuristic. When generating predictions for the smaller treebanks using the bigger treebank's preprocessing model, we only notice a notable drop in accuracy for Dutch \((-2.15, -2.54\) for \textsc{stanza} and \textsc{udpipe}, respectively). This indicates that there are likely major differences in the treebanks' domains, or in how they are respectively segmented or annotated. In general, however, the differences between gold and predicted tokenization are not as large as we expected.
### 5 Conclusion
In this paper, we have described the IWPT 2020 Shared Task submission of the Copenhagen-Uppsala team, consisting of graphs predicted by a transition-based neural dependency graph parser with pre-trained multilingual contextualized embeddings. While not ranked among the top submissions according to the official scores, the parser architecture proved effective for the type of dependency graphs exhibited by ED, and after fixing a critical bug we found the scores to improve dramatically and to agree with the scores we had observed during development.
We expect that with more resources for BERT fine-tuning, hyperparameter tuning, language-specific pre-trained representations and careful pre- and post-processing, our parser can be a competitive system for this task. Our main contribution, however, is a transition system that handles ED directly, within a unified transition-based parsing framework.
<table>
<thead>
<tr>
<th>Language</th>
<th>\textsc{stanza}</th>
<th>\textsc{udpipe}</th>
<th>Gold Tok.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arabic</td>
<td>73.66</td>
<td>69.36</td>
<td>77.68</td>
</tr>
<tr>
<td>Bulgarian</td>
<td>83.46</td>
<td>83.17</td>
<td>83.89</td>
</tr>
<tr>
<td>Czech</td>
<td>75.60</td>
<td>75.47</td>
<td>76.00</td>
</tr>
<tr>
<td>Dutch</td>
<td>78.66</td>
<td>78.27</td>
<td>80.81</td>
</tr>
<tr>
<td>English</td>
<td>80.79</td>
<td>79.80</td>
<td>82.77</td>
</tr>
<tr>
<td>Estonian</td>
<td>75.43</td>
<td>75.32</td>
<td>75.81</td>
</tr>
<tr>
<td>Finnish</td>
<td>80.87</td>
<td>80.59</td>
<td>81.89</td>
</tr>
<tr>
<td>French</td>
<td>86.05</td>
<td>85.29</td>
<td>88.97</td>
</tr>
<tr>
<td>Italian</td>
<td>85.24</td>
<td>85.04</td>
<td>85.52</td>
</tr>
<tr>
<td>Latvian</td>
<td>79.00</td>
<td>78.39</td>
<td>79.28</td>
</tr>
<tr>
<td>Lithuanian</td>
<td>74.92</td>
<td>74.84</td>
<td>75.51</td>
</tr>
<tr>
<td>Polish</td>
<td>71.94</td>
<td>73.22</td>
<td>73.63</td>
</tr>
<tr>
<td>Russian</td>
<td>78.53</td>
<td>78.60</td>
<td>78.87</td>
</tr>
<tr>
<td>Slovak</td>
<td>77.54</td>
<td>77.33</td>
<td>79.17</td>
</tr>
<tr>
<td>Swedish</td>
<td>78.26</td>
<td>78.18</td>
<td>78.37</td>
</tr>
<tr>
<td>Tamil</td>
<td>50.66</td>
<td>54.26</td>
<td>62.85</td>
</tr>
<tr>
<td>Ukrainian</td>
<td>79.70</td>
<td>79.67</td>
<td>79.89</td>
</tr>
</tbody>
</table>
**Table 4:** Development ELAS for our \textit{fixed} parser. While in all cases we train the parser on the concatenation of all of a language’s gold treebanks (applicable only to Czech, Dutch, Estonian, and Polish), \textsc{stanza} and \textsc{udpipe} refer to generating predictions on the development data preprocessed by the corresponding pipeline. \textit{Gold Tok.} refers to generating predictions over gold development data (tokenization, etc).
Acknowledgments
We thank the anonymous reviewers for their helpful comments. ML is funded by a Google Focused Research Award. We acknowledge the computational resources provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL (www.nlpl.eu).
ABSTRACT
Writing and programming are often seen as different: writing a creative profession, programming a technical one. Below the surface, however, there is one striking similarity: both writing and programming can be described as the translation of a high-level idea into low level sentences or statements. This paper compares writing and programming and uncovers similarities between some of the steps commonly considered part of the writing and programming workflows, such as information gathering and selection. We however also observe differences, like the attention that writers spent on formatting and styling, and the opportunity for feedback that programmers have by compiling and executing programs. We close the paper with a discussion of the impact of this finding, including educational methods that programming could take from writing education.
CCS CONCEPTS
- General and reference → Surveys and overviews;
- Human-centered computing → Collaborative and social computing;
- Software and its engineering → Software organization and properties;
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
Programming ’17, April 03-06, 2017, Brussels, Belgium
© 2017 Association for Computing Machinery.
ACM ISBN 978-1-4503-4836-2/17/04...
https://doi.org/10.1145/3079368.3079413
### 1 INTRODUCTION
Writing and programming might seem worlds apart: Writing is a creative activity, with goals ranging from entertaining to persuading, from structuring the writer's thoughts to passing a message to the reader. Programming, on the other hand, is a form of problem solving, in which the programmer starts with a problem, creates a design (a plan for how to solve the problem), and then solves it by writing code that a machine can execute. But when one takes a look below the surface, there are also clear similarities to discover.
Most strikingly, both writing and programming can be described as the translation of a high-level idea into low level sentences or statements.
In this paper we compare the activities commonly performed in writing and programming, highlighting similarities and differences. For example, formatting and style receive less focus in programming, even though explicit attention to them has the potential to make code more readable. Conversely, code is executed and tested early in the process, while text is often only proofread when it is closer to finished.
We believe that both fields can learn from a detailed comparison of activities. What does formatting mean in the context of code? Is it important? Can text writing be as iterative and test-driven as code? These are just a few of the questions raised by our side-by-side exploration. Does writing make you a better programmer? What skills underpin both? Can programming learn from writing education?
### 2 WHAT IS WRITING AND WHAT IS PROGRAMMING?
So, what is writing? Writing is a way in which humans communicate, using letters and symbols to form words and sentences. It is used for various reasons and purposes, including but not limited to storytelling, correspondence and reports of various kinds. The term 'writing' is broad, and can refer to activities varying from the motor skill of forming letters, to formulating thoughts, feelings and opinions, to flawless spelling and correct use of grammar rules [22]. In this paper we focus on the activity of text composing, regardless of the type or genre of the text.
And what is programming? Programming is commonly seen as the process by which a human formulates a problem in such a way that a computer can execute it. It involves understanding the problem, creating a design, writing the syntax of a program—sometimes referred to as coding—and performing maintenance on an existing program [34].
### 3 A HIGH LEVEL PLAN EXECUTED IN DETAIL
One of the most striking similarities between writing and programming is the fact that in both, there are high-level plans. A murder mystery writer imagines a killer who stabs blond men with stiletto heels, while a programmer imagines an iPad app to manage different bank accounts. These high-level plans are sometimes, but not always, formalized to a certain extent before the work starts: a programmer might draw a UML diagram or an architecture plan, and a writer might create a table of contents, or use character sheets and scene descriptions.
Afterwards, these high-level designs need to be translated into very low-level constructs: sentences and words for the writers, and methods and lines of code for the programmers. How to approach this is a topic of many methodologies in both writing and programming. Is it better to draft broadly and then iterate, or to take one chapter or feature and make it perfect before adding others? There are people on both sides of the argument in both fields. In writing, these two extremes even have names: pantsers and plotters\(^1\).
To help manage the complexity of the translation, in both fields, there are intermediate steps. A writer divides a story into chapters or an essay into sections. These in turn are divided into paragraphs and sentences. Likewise a programmer thinks of classes or objects to contain some parts of a program, which have methods and fields in them.
One aspect where the transformation of a high-level idea into a low-level implementation can lead to problems is when changes need to be made. No text is perfect at the first try; books and stories are often reviewed and rewritten, sometimes assisted by formal reviews. Programmers review each other's code and suggest changes, or fix bugs in existing code bases. In this adaptation, the high-level-to-low-level translation again plays a big role. If writers decide to remove a character from a story, they need to make sure it is deleted from all chapters. If programmers change their architecture, this will result in changes to many classes and methods.
### 4 A DEEP DIVE INTO ACTIVITIES
So far, we have observed the crux of the similarity between writing and programming: the translation of high-level ideas into a lower level, the realm of words and letters. We now examine the steps that make up the activities of writing and programming in more detail, drawing comparisons and highlighting differences. Comparing the steps will help us understand both processes better. Where are the procedures similar? Where can the fields learn from each other? Can we explain the striking differences?
There are a number of models of the writing process available (e.g. [13], [22], [9], [28]), all similar in the general picture they paint. The writing process can roughly be divided into three phases: pre-writing, writing, and post-writing, each of which can in turn be divided into two or more sub-phases. Similarly, the process of writing a computer program is generally divided into three phases resembling those of the writing process: design, implement, test, or problem solving, implementing, maintaining \([7]\).
There are also more fine-grained models, which allow us to explore the commonalities between writing and programming as extensively as possible. In this paper we use two such models, Huizenga \([13]\) for writing and Prata \([24]\) for programming, both consisting of seven steps, as shown in Table 1.\(^2\)
<table>
<thead>
<tr>
<th>Writing (Huizenga [13])</th>
<th>Programming (Prata [24])</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Gathering information</td>
<td>1. Defining program objectives</td>
</tr>
<tr>
<td>2. Selecting information</td>
<td>2. Designing the program</td>
</tr>
<tr>
<td>3. Structuring information</td>
<td>3. Writing code</td>
</tr>
<tr>
<td>4. Translating</td>
<td>4. Compiling</td>
</tr>
<tr>
<td>5. Stylizing the text</td>
<td>5. Running the program</td>
</tr>
<tr>
<td>6. Formatting the text</td>
<td>6. Testing and debugging</td>
</tr>
<tr>
<td>7. Reflecting on the text</td>
<td>7. Maintaining and modifying</td>
</tr>
</tbody>
</table>
**Table 1:** The seven steps of the writing process according to Huizenga [13] (left) and of the programming process according to Prata [24] (right).
Some steps are clearly similar: Selecting information and Structuring information are related to Designing the program, since in both cases decisions are made about the underlying, often invisible, structure of the text or program.
After these first three (writing) and two (programming) activities, the differences between the models seem to get bigger. Where in programming we see the single step Writing code, writing has three separate steps related to getting the words into their final form: Translating, Stylizing the text and Formatting the text. This level of detail could be one of the areas where programming can learn from writing: maybe it is a good idea to regard these as distinct activities in programming too.
In programming, on the other hand, there is Compiling and Running the program. This marks a difference between writing and programming: the programmer gets feedback very early on whether the program text is executable, during compiling, and furthermore gets feedback on whether the program is working as intended. One could argue that, in writing, the first 'execution' of a text happens when someone else reads it, and that it thus takes longer to know whether a text has the desired effect. Of course writers can read the text themselves and 'execute' it, but the question is whether that really executes the text alone: if the writer reads the text, can they refrain from also taking into account the context and their intentions?
In the remaining sections, we will first discuss the similarities in the first and last steps, and then the differences mainly occurring in the middle parts.
### 5 SIMILARITIES
In this section, we zoom in on the similarities in the steps, which mainly occur in the beginning and end of the process.
#### 5.1 Getting to know the context
The first steps in both fields concern the context of the text and code. What needs to go in? Who is the target audience? What do I want my text to convey, or my program to do?
When starting a new project, assignment, or exercise in writing or programming, the first step is setting the goals. What subject must the text be about, or what problem must the computer program solve? What kind of text does the writer want, or have, to write: a fictional story, a recipe, or maybe a blog post on his website? Who is the intended audience and what are the demands of a possible client or reader? With these goals in mind, writers start gathering information about the subject. They may use various kinds of sources, such as their own imagination, emotions, and experiences.
\(^1\)http://thewritepractice.com/plotters-pantsers/
\(^2\)While there are other steps that could be distinguished, we compare these two as an exploration of the activities. We do encourage readers to find other sources, or even define their own steps on one of the two activities and compare those.
In programming, the gathering of information is a field of research in itself: requirements engineering. There are numerous different techniques for eliciting the requirements from a user [17, 23, 31].
In this context, we are mainly talking about the process in which software is made for a customer. Of course, there are also projects that start because programmers simply want to make something for themselves, without fixed requirements. Even then, a first step will often be gathering information, such as which programming language, library or API to use, or what similar systems might exist.
#### 5.2 Making plans
After the exploration of the context, there is a focus on designing the artifact. What will the storyline be? What is the structure of the argumentation? What characters are going to appear in what chapters? These contemplations could be comparable to questions like: What will the architecture of our system be? What classes and methods will we have? or What programming paradigm fits our problem best?
From all the gathered information the writer selects what is useful for this specific task, after or even during the information gathering process. Writers choose which information is relevant for the reader to understand the story line and which fits the chosen subject of the text.
The selection of relevant information is an important skill in programming too, again especially in the commercial setting of creating software for an internal or external customer. Different features of the program to be created are classified by importance, for example using the MoSCoW model [6], or user stories are created, which are then grouped into sprints. By categorizing features by importance, programmers decide what information matters most for the program.
In writing, the gathering of information can be more vague, especially for writers of fiction; perhaps it is more a gathering of inspiration than of information.
After gathering information and selecting the relevant parts, the writer organizes all information in a way that suits his habits in writing and fits the requirements of the text identified by the goals.
Ideas can be collected and structured in various multi- or one-dimensional ways, for example using a mind map which represents relations between concepts, arguments and/or characters. In the case of a recipe or manual, writers may use a flowchart to structure their ideas. They may also write short descriptions of characters, situations or scenery. Once ideas are gathered and organized, the writer has an abstract representation of the text in mind.
Structuring in programming means creating a high-level design for a program. In this phase, design decisions are made about the program, such as: what programming language and database system will be used? What type of software architecture will we follow, for example a model-view-controller or a microservices setup? Lower-level decisions are also made, such as what classes are needed and how they will relate to each other. That is often done using a class diagram, or an entity-relationship diagram when data is being structured.
5.3 Translating
In writing, the step following Structuring information is Translating: transferring abstract concepts to linear natural language. While putting the design into sentences and words, the writer has to abide by rules. These might be rules of the language, for example, words need to be spelled correctly and sentences must be grammatically correct, but they may also depend on the context of the text: in a scientific article, references have to be correct, and in a persuasive argument, the text structure should be logical.
Similarly, the programmer now moves from the whiteboard to the keyboard, to start producing lines of code which implement the high-level design. As in writing, the programmer needs to do so while applying rules. For example, code must be syntactically correct in order to be compiled or interpreted. In addition, some languages have stricter rules about what is allowed, like typing rules that the programmer must obey.
According to [9], in this process, the writer has to juggle different specific demands of written language varying from generic and formal, syntactic and lexical to the motor tasks of forming letters or typing on a keyboard. For example, when a writer has difficulties with the spelling of the words, this process will use up so much of their working memory that they have no room to think about the structure of a paragraph. This high ’cognitive load’ has been studied in the context of programming, and programming education, as well [12, 30, 32].
5.4 Reflecting and Reviewing
We will reflect on differences in the remaining steps in the next section, but for now, let's move to the final step of writing, Reflecting on text, and the corresponding steps of programming: Testing and Debugging, and Maintaining and modifying the program.
After the text is written — and often also during the writing process — the writer reflects on their process and product. Are the goals met? Have I lived up to the expectations of the assignment or client? Is my text readable?
Likewise a programmer reflects: Does my program function as expected? Is this code well-structured and free of code smells? Here, of course, there is an interesting difference between writing and programming, since a programmer can partly rely on the computer to validate their program: first of course by the compiler and the type checker in statically typed languages, and later by an interpreter and a runtime. Furthermore, programmers increasingly often use tests to ensure the correctness of their programs. To find bugs, but also to ensure code quality, programmers sometimes use static analysis tools [14].
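As a minimal sketch (our own toy example, not code from any study cited here) of how tests let the machine reflect on a program, consider a few assertions that the computer can check on every change:

```python
def word_count(text: str) -> int:
    """Count the whitespace-separated words in a piece of text."""
    return len(text.split())

# Each assertion is a claim about the program that the machine verifies;
# a failing assertion is an immediate prompt to revise the source code.
assert word_count("the quick brown fox") == 4
assert word_count("") == 0
assert word_count("  spaced   out  ") == 2
print("all tests passed")
```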
Failure to compile, failing tests, or too many code smells might impose a need for a revision of the source code. Such product-inherent prompts for review are not present when writing texts. While this is a clear benefit, it might also reduce the need for, or interest in, manually reviewing the source code, not for functionality, but for readability. Recent systems for collaborative programming, like GitHub with its support for code review via pull requests, have spurred interest in code reviews for readability and maintainability.
These steps seem very similar, as they concern verifying that the text or program performs the task it needs to. However, programming has a seventh step, in which the program is maintained. Often, of course, programs are updated after they have been deployed, while books or articles typically remain the same.
6 DIFFERENCES
In this section we highlight differences between the models of Huizenga and Prata. As said, there exist other, different models, which would lead to different comparisons. These two lists are our choice, but we encourage others to make more, and different, comparisons.
In our lists, we observe that not all steps are as similar as the first and last few steps. In the middle, we observe more differences than similarities, which we will elaborate on in this section. The most important difference is that in programming, making the source code is just one step: Writing the program, while in writing, there are two additional steps related to the stylizing and formatting of the text. In this section we will elaborate on what these steps mean in writing and how they could be interpreted in programming.
6.1 Stylizing text
When the ideas of the writer have been translated into text, a writer will apply rules of style. For example, an essay has a formal style with longer sentences and advanced jargon, while a children’s book is written in a cheerful style using simpler words. It is generally agreed upon that in order to write a good document, the writer has to apply a style to an entire document consistently, otherwise the text might be harder to understand, or less enjoyable to read.
Figure 3: Queneau’s Exercises in Style: Hesitation
I don’t really know where it happened...in a church, a dustbin, a charnel-house? A bus, perhaps? There were...but what were there, though? Eggs, carpets, radishes? Skeletons? Yes, but with their flesh still round them, and alive. I think that’s how it was. People in a bus. But one (or two?) of them was making himself conspicuous, I don’t really know in what way. For his megalomania? For his adiposity? For his melancholy? Rather...more precisely...for his youth, which was embellished by a long...nose? chin? thumb? no: neck, and by a strange, strange, strange hat. He started to quarrel, yes, that’s right, with, no doubt, another passenger (man or woman? child or old age pensioner?)? This ended, this finished by ending in a commonplace sort of way, probably by the flight of one of the two adversaries.
I rather think that it was the same character I met, but where? In front of a church? in front of a charnel-house? in front of a dustbin? With a friend who must have been talking to him about something, but about what? about what? about what?
The most extreme form of this style might be the idea of literate programming introduced by Knuth [15]. He envisioned a style of programming in which program statements are interspersed with documentation in a natural language, to ease understanding of the program. However compelling the idea of literate programming is, in practice it is not widely used, with the potential exception of notebooks like Mathematica or IPython.
Another area where style can be expressed is in the choice of identifiers. Naming a variable can be seen as a literary activity, since the programmer is describing the role of a value with meaningful words. Arguably, the programs \(\mathbf{x} := 5\) and \(\text{total} := 5\) are executed in the same way by a compiler, but not by the brains of future readers. A more elaborate example is shown in Figures 1 and 2. These two programs could be seen as different styles of the same program, embodying the difference between simply presenting facts and taking the reader along in a story of what is happening.
While these differences might seem small, identifiers are known to play a large role in source code: about three quarters of the characters in a code base consist of identifiers [18]. Better identifier names correlate with improved program comprehension. For example, [19] reports on a study performed with over 100 programmers, who had to describe functions and rate their confidence in doing so. Their results show that using full-word identifiers leads to better code comprehension than using single-letter identifiers, measured by both description rating and confidence in understanding.
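The effect is easy to see in a small, hypothetical example (ours, not one from the study in [19]): both functions below compute the same thing, but only one tells the reader what that thing is.

```python
# Behaviorally identical functions; only the identifier names differ.
def f(a, b):
    return a + a * b

def price_with_tax(net_price, tax_rate):
    return net_price + net_price * tax_rate

# The compiler treats them alike...
assert f(200, 0.5) == price_with_tax(200, 0.5)
# ...but a future reader of the second immediately knows its purpose.
print(price_with_tax(200, 0.5))
```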
Here, observing differences between writing and programming leads to questions about stylizing programs. Could we envision a class of programs which, like fairy tales, always have a similar style in all their occurrences? What is a surprising program, or a hesitant one? Do they compile to the same output? And can we think of a group of programs which, like letters, share a goal, but could be stylized in different ways? What is the difference between a personal style and one imposed by the environment or audience?
We believe contemplating these types of questions will make programming as a field richer, and we encourage readers to come up with styles of programs they would like to see.
6.2 Formatting text
Another step that is missing in Prata’s programming steps is formatting. In writing, formatting means the writer lays out the text, for readability or aesthetic reasons. Formatting is typically one of the last steps in the writing process. Activities commonly performed while formatting are adding images and figures to make the text more attractive or easier to understand, or, when the writer wants to draw attention to a specific part of the text, adding emphasis with font options, such as making text bold or italic or changing the font color.
Sometimes, for example when the text is being published by a publisher, the formatting step might be done, partially or fully, by someone other than the writer.
Figure 6 shows two versions of a diamond poem. In the first draft the text is formatted using practical constraints only: the writer has outlined the poem by its requirements (the number of characteristics of the animal per line). The final version of the text has a different shape (a diamond), a different font and background color, and images have been added.
Formatting, like stylizing, is a particularly interesting concept in programming, since it is often seen as an afterthought. This is underlined by the fact that there is no formatting step in Prata’s model.
Again, of course, the question arises what formatting means in the context of programming. In some languages, programmers have no freedom in some aspects of formatting. For example, in Python the indentation level of statements is significant, meaning that code in which, for example, the body of a loop is not indented does not work.
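The point can be demonstrated without leaving Python itself: the built-in `compile` function rejects a loop whose body is not indented before the code ever runs. (A small sketch of ours, not from the text.)

```python
# The same two statements, once with and once without the mandatory
# indentation of the loop body.
well_formed  = "for i in range(3):\n    print(i)\n"
mis_indented = "for i in range(3):\nprint(i)\n"

compile(well_formed, "<ok>", "exec")  # accepted: indentation is correct

try:
    compile(mis_indented, "<bad>", "exec")
except IndentationError as err:
    # Python refuses the program at compile time, purely for layout.
    print("rejected:", err.msg)
```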
Most other languages do not have formatting requirements that strict, but many have formal or informal code conventions, from which a deviation is seen as a bad habit. Some people argue that these currently informal formatting rules should be made mandatory. For example, in “The best software writing”, Ken Arnold argues that:
For almost any mature language [...] coding style is an essentially solved problem. I want the owners of language standards to take this up. I want the next version of these languages to require any code that uses new features to conform to some style.[2]
More than tools used for natural language writing, though, tools for programming, called integrated development environments (IDEs), have features that format code automatically. Recently, researchers have successfully attempted to learn formatting conventions from a code base, in order to increase its consistency automatically [1].
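In Python (3.9 or later), the standard library alone can mimic such a formatter: parsing code to an abstract syntax tree and unparsing it again discards the author's layout and regenerates one canonical style. A small sketch of ours:

```python
import ast

# Two differently formatted, but semantically identical, definitions.
dense  = "def area(w,h):return w*h"
spread = "def area(w, h):\n    return (\n        w * h\n    )"

# Round-tripping through the AST normalizes the layout, much like an
# IDE's "format code" action (though real formatters preserve comments,
# which the AST round-trip does not).
assert ast.unparse(ast.parse(dense)) == ast.unparse(ast.parse(spread))
print(ast.unparse(ast.parse(dense)))
```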
Despite the existence of these required, advised or automated formatting measures, programmers do still have some freedom in the formatting of their source code. As an example, consider the two code snippets in Figure 7. Both programs follow code conventions, yet they feel different. The distance between the declaration and the use of a variable might influence the understandability of a piece of source code, or simply the enjoyment with which someone would read it, underlining the effect that formatting can have on the reading experience.
6.3 Compiling and running code
A final, seeming difference we want to highlight is that in programming, code can be type checked, compiled and run during the development process. While writers can of course read their own text, and distribute drafts to people, this is not as easy and effortless as hitting a compile button.
This leads to the deep question of what it means for a text to run. Maybe this can only happen in the mind of a reader? Or could we envision an algorithm that mimics this, and predicts the thoughts and even emotions of future readers?
There are tools that attempt this somewhat; a simple spell checker comes to mind, or the more advanced Hemingway app (http://www.hemingwayapp.com/), which highlights bad writing style such as long sentences and passive voice. While these are useful, they do not yet seem to resemble the execution of a text in a human’s brain very closely.
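A crude approximation of such a tool fits in a few lines. The sketch below (ours; real style checkers are far more sophisticated) flags overlong sentences and one simple passive-voice pattern:

```python
import re

def flag_sentences(text, max_words=20):
    """Return warnings for long sentences and a crude passive-voice cue."""
    warnings = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > max_words:
            warnings.append(f"long sentence ({len(words)} words)")
        # "was/were" + a word ending in -ed: a rough heuristic only.
        if re.search(r"\b(was|were)\s+\w+ed\b", sentence):
            warnings.append(f"possible passive voice: {sentence!r}")
    return warnings

for warning in flag_sentences("The report was finished by the intern. Short and clear."):
    print(warning)
```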
7 IMPLICATIONS
The above of course raises the question how all this helps programmers, or writers, or both. Given enough layers of abstraction, all things are similar: I am an object, and the computer on which I type this is one too. What does that teach us? We do, however, think there are some important takeaways from the comparison between writing and programming.
7.1 Metaphors shape thinking
Firstly, the way we view programming impacts the field. We personally have found the plotters-and-pantsers view very appealing for programming too: some people like planning, while others want to see where the code takes them. The same person might even be plotting sometimes and ‘pantsing’ in other situations. The way programming is currently seen by many, through the metaphor of software creation as “software engineering”, feels designed by and for plotters. Here’s a thought: if we had viewed programming more like writing from the start, would we have come to agile design methodologies sooner?
7.2 Impact on education
It is not just the way we think about programming, but also the way we teach it, that is greatly influenced by the way we see our field and the lenses through which we view it. If we see programming as writing, can we learn from writing education? This question warrants a full paper, but there are a few directions we believe we can learn from. Could we apply the following methods to programming education too?
7.2.1 Observational learning. An example is the use of observational learning, where a teacher or peer demonstrates a task before learners attempt it. In writing education, teacher modeling is in fact the most prevalent way of using models for learning, usually in the instructional phase [16]. In this teaching method the teacher thinks out loud, explaining and demonstrating parts of the writing task. Pupils are expected to adopt this line of reasoning while executing the writing task themselves. It has been shown to be an effective instructional method for teaching writing strategies (see e.g. [8, 11, 16]).
Modeling is not limited to one or a few parts of the writing process, but is a useful method of instruction in strategies for every step of the process, from gathering information to reviewing the text (see also [21]). Usually the teacher functions as a mastery model, although it has been shown that observing coping models (which are not flawless, but experience difficulties in the execution of the task and show how they cope with these difficulties) raises the self-efficacy of pupils and enhances their performance more effectively. For weaker pupils, observing coping models is more beneficial; for stronger learners, observing mastery models is [4, 5, 16, 35].
The effectiveness of this method is explained by the existence of the mirror neuron system in our brain. This system makes the brain exhibit the same neural activity when we observe others performing a task as when we perform the task ourselves (see e.g. [26, 27]). In this way, the brain already ‘learns’ how to perform a task, and primes the execution of similar tasks.
7.2.2 Course integration. Writing can be taught in isolation, but can also be taught in combination with other topics, for example, when pupils write an essay about modern history. Research has shown that this type of integration is beneficial for both of the topics taught. For example, Romance and Vitale combined science courses for grades 1 and 2 (ages 6 to 8) with reading and writing assignments, such as writing an overview of what was learned and keeping a diary, and with reading age-appropriate science materials. These children performed significantly better than a control group on both science and reading [33].
Another study compared the effectiveness of two different methods of teaching science: one aimed at just teaching science, and a second that combined science with reading and literacy. In the latter group the children learned to think, speak, read and write like scientists. The control group, which followed a widely used curriculum designed by the same university, was aimed at performing and learning about experiments. The results showed that the experimental group had a better understanding of what science is, a better understanding of the basic concepts, and also identified more as scientists [10].
This last study could prove especially interesting for programming education, as it also could help a broader group of kids identify as programmers!
8 CONCLUDING REMARKS
In this paper, we aimed to draw a comparison between writing and programming by comparing their goals and challenges. Looking from a distance, both can be seen as having a very high-level idea and representing it with low-level constructs. We observe that some steps as defined by writing and programming authors are similar: Structuring information is like Designing the program, in that for both the performer needs to take in information and decide how to structure it to best fit their goal. We would love to explore our beliefs further in the future, for example by conducting a think-aloud study with people writing or programming, or by placing people in an fMRI scanner and measuring their brain activity.
Other steps present in writing, like Stylizing and Formatting, are not commonly described and studied in programming, and we hope our paper leads to more discussion of these activities in programming. Is adding whitespace style, or is it formatting? Having clear, agreed-upon definitions, as in writing, can ease teaching and communication on these types of topics. Can we learn from best practices in writing? The other way around, programming has explored the step of reflecting and adapting in more detail, probably due to the collaborative nature of modern-day programming projects. Here, writing could be inspired by ideas like pull requests and formal code reviews.
There are also places where writing could be inspired by programming: Programmers attempt to get feedback from the “environment” earlier than writers, by compiling and running their program. Can writers similarly somehow have a machine reflect on their text while they are still writing?
There are certain things that we consider out of scope for this paper. For example, in the above we have followed [13] and [24] in their linear representation of writing and programming, but often, in both domains, the processes can also be represented as a cycle. In writing, for example, the consensus is that writers continuously switch between the steps of the process as described above. In programming, a cycle that is often referred to is Beck’s Test Driven Development cycle [3]. We presented the cognitive processes and skills in a linear way, but reality is not so strict. In fact, the process is not even strictly cyclic, although a cycle fits reality better than a linear representation: in reality the writer or programmer switches between stages freely and uses different skills throughout the entire process.
Future work in exploring this comparison should surely examine the cyclic (or even messier) order of steps in more detail. One more observation: the fact that the activities are similar leads us to think that the skills, and the way we teach them, could also learn from each other.
From a Monolithic Big Data System to a Microservices Event-Driven Architecture
Nunes Laigner, Rodrigo; Kalinowski, Marcos; Diniz, Pedro; Barros, Leonardo; Cassino, Carlos; Lemos, Melissa; Arruda, Darlan; Lifschitz, Sérgio; Zhou, Yongluan
Published in: Proceedings of 46th Euromicro Conference on Software Engineering and Advanced Applications
Publication date: 2020
From a Monolithic Big Data System to a Microservices Event-Driven Architecture
Rodrigo Laigner
Department of Computer Science (DIKU)
University of Copenhagen, Denmark
rnl@di.ku.dk
Marcos Kalinowski, Pedro Diniz
Informatics Department
PUC-Rio, Brazil
{kalinowski,pfonseca}@inf.puc-rio.br
Leonardo Barros, Carlos
Cassino, Melissa Lemos
Tecgraf/PUC-Rio, Brazil
{lbarros,cassino,melissa}@tecgraf.puc-rio.br
Darlan Arruda
Department of Computer Science
Western University, Canada
darruda3@uwo.ca
Sérgio Lifschitz
Informatics Department
PUC-Rio, Brazil
sergio@inf.puc-rio.br
Yongluan Zhou
Department of Computer Science (DIKU)
University of Copenhagen, Denmark
zhou@di.ku.dk
Abstract—[Context] Data-intensive systems, a.k.a. big data systems (BDS), are software systems that handle a large volume of data in the presence of performance quality attributes, such as scalability and availability. Before the advent of big data management systems (e.g. Cassandra) and frameworks (e.g. Spark), organizations had to cope with large data volumes with custom-tailored solutions. In particular, a decade ago, Tecgraf/PUC-Rio developed a system to monitor truck fleets in real time and proactively detect events from the positioning data received. Over the years, the system evolved into a complex and large obsolescent code base involving a costly maintenance process. [Goal] We report our experience on replacing a legacy BDS with a microservice-based event-driven system. [Method] We applied action research, investigating the reasons that motivate the adoption of a microservice-based event-driven architecture, intervening to define the new architecture, and documenting the challenges and lessons learned. [Results] We perceived that the resulting architecture enabled easier maintenance and fault isolation. However, the myriad of technologies and the complex data flow were perceived as drawbacks. Based on the challenges faced, we highlight opportunities to improve the design of big data reactive systems. [Conclusions] We believe that our experience provides helpful takeaways for practitioners modernizing systems with data-intensive requirements.
Index Terms—big data system, microservices, event-driven
I. INTRODUCTION
Data has been generated at an increasingly higher pace over the last years. Social media interactions, sensors, mobile phones, and business processes are examples of sources. Surveys indicate that 2.5 quintillion bytes of data are generated each day, which will lead to approximately 79.4 zettabytes of data by 2025 [1], [2]. This context made the case for the design of big data systems (BDS), which arose to handle the collection and manipulation of large volumes of data in modern business applications.
Gorton and Klein [3] define BDS as “distributed systems that include redundant processing nodes, replicated storage, and frequently execute on a shared cloud infrastructure [...] employing a heterogeneous mix of SQL, NoSQL, and NewSQL technologies.” As a result, the development of BDS often imposes challenges on software engineers, as noted by Hummel et al. [4], who cataloged a set of challenges such as a steep learning curve and complex data processing. Besides, Laigner et al. [5] found that the major challenges in developing BDS concern software architecture design.
Event-driven systems and microservices have emerged as compelling architectural paradigms for the development of data-driven software applications [6], [7]. Microservices are small, scalable units, each representing a bounded business capability, that are often autonomously deployed. In contrast to traditional monolithic systems, microservices do not share resources, communicating mainly via message-passing semantics [8]. In line with microservices, an event-driven architecture (EDA) comprises a set of highly cohesive components that asynchronously react to events to perform a specific task [9].
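To make the paradigm concrete, the following toy sketch (ours, far simpler than the Kafka-based system described in this report) shows the essence of an event-driven design: components subscribe to event types and react to published events via a queue, never invoking each other directly.

```python
import queue

subscribers = {}     # event type -> list of handler callables
bus = queue.Queue()  # in-process stand-in for a message broker topic

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    bus.put((event_type, payload))

def dispatch():
    # Deliver queued events to every subscriber of their type.
    while not bus.empty():
        event_type, payload = bus.get()
        for handler in subscribers.get(event_type, []):
            handler(payload)

# Two independent "services" reacting to the same event, unaware of
# each other: adding a third requires no change to the publisher.
subscribe("route_deviation", lambda e: print("alerting operator:", e))
subscribe("route_deviation", lambda e: print("auditing event:", e))

publish("route_deviation", {"vehicle": "truck-17"})
dispatch()
```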
In this paper, we report the complete replacement process of a legacy BDS with a microservice-based event-driven architecture. The replacement comprised a 19-month long development period that took place at PUC-Rio’s Tecgraf Institute, which provides technical and scientific solutions for a wide range of strategic industrial partners. One of the solutions, developed for a customer in the Oil & Gas sector back in 2008, concerns a monolithic BDS that monitors moving objects (MOs) and proactively detects events that pose risks to the operation, such as vehicle route deviations. Over the years, the system evolved into a complex and large obsolescent code base that involves a difficult maintenance process. In this context, in 2018, with the advent of a new industrial partner interested in the outcomes of the previous project that employed the legacy BDS, Tecgraf’s managers decided to take advantage of a new contract to accommodate a complete rewrite of the legacy BDS by adopting current big data technologies, such as Cassandra and Kafka. Furthermore, based on the lessons learned from the legacy BDS, Tecgraf’s managers decided that the new project must adopt a microservice-based EDA.
Thus, we investigate the integration of microservices and EDA to support data-intensive requirements. The main contributions of this paper are: (i) an investigation of the motivations to adopt a microservice-based EDA; (ii) a 19-month experience report on replacing a legacy BDS with a microservice-based EDA; (iii) a discussion of the obtained results in the form of challenges and lessons learned.
The remainder of this paper is organized as follows. Section II provides the background of this work. Next, the action research design is presented, describing the goal, research questions, and methodology. The results are presented in Section IV. Lastly, Section V presents the concluding remarks.
II. BACKGROUND
A. Big data systems
Chen et al. [10] explain that traditional software development is characterized by “structured, batch-oriented, relational small data [volume],” and a straightforward development lifecycle and architecture design. Besides, Gorton and Klein [3] argue that traditional business systems are “relatively well constrained in terms of data growth, analytics, and scale.” On the other hand, Gorton and Klein [3] synthesize BDS based on four requirements: (i) write-heavy workloads; (ii) variable request loads (adding new resources and releasing them as necessary); (iii) computation-intensive analytics (diverse query workloads and varying latency demands); and (iv) high availability. These requirements represent a significant shift from traditional business systems.
B. Microservices
Software systems have traditionally adopted a monolithic architectural style, in which modules and/or subsystems are integrated and cooperate in a centralized manner. According to Bucchiarone et al. [8], in such an architecture, “the modularization abstractions rely on the sharing of resources of the same machine [...], and the components are therefore not independently executable.” However, concerns related to the complexity of scaling monolithic architectures [8] and aspects related to change, such as evolutionary maintenance [11], have shifted industry interest towards the adoption of decoupled architectures. Built on the SOA principle of loosely coupled services, microservices have emerged as an “organic implementation approach to SOA, encompassing polyglot programming in multiple paradigms and languages, design for failure, decentralization, and automation” [12].
C. Event-driven architecture
Systems that adopt an EDA, also known as reactive systems, are a current subject of interest in the development of data-driven software systems [6]. According to Richards [9], EDA is a pattern “made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.” Richards argues that “event processors are self-contained, independent, highly decoupled architecture components that perform a specific task in the application or system” [9]. Therefore, in EDA, each single-purpose service employs programming primitives for reacting and responding to a set of predefined events. To the best of our knowledge, the literature does not clearly differentiate event processors from microservices.
III. RESEARCH DESIGN
Our study design follows the Action Research (AR) [13] methodology. The study context, goal and research questions, and methodology are presented hereafter.
A. Context
This study reports on the process of replacing a legacy big data system with a microservice-based EDA. The experience described herein occurred in the context of PUC-Rio’s Tecgraf Institute. Tecgraf is a mid-size non-profit research and development organization that conducts projects with industrial partners and government institutions. Our subject legacy system is a large BDS that was under active development from 2008 to 2014. Figure 1 shows a high-level view of the legacy system. MOs, such as vehicles, have tracking devices installed that send positioning data (PD) periodically. All PD are sent to an information flow processing (IFP) engine [14], which analyzes them to uncover non-conformities, such as a vehicle route deviation. The streams are then enriched with domain data and presented to users.
Over the years, the system has undergone a natural process of erosion, in which the large source code base became difficult to maintain due to the system’s complexity. Moreover, the technology stack became obsolete, outpaced by current technologies, and the monolithic structure hindered the introduction of new technologies. Thus, with the advent of a new industrial partner with a closely related problem context, Tecgraf’s managers realized that the process of recruiting and training new developers able to implement a new instance of the legacy BDS was not feasible. Also, given the myriad of big data technologies that have emerged in the last decade, such as Cassandra and Kafka, and the aforementioned drawbacks of the legacy BDS, Tecgraf’s managers decided that the best approach would be to design a new architecture from scratch, based on microservice and event-driven principles. Moreover, the new architecture should embrace widely adopted open-source technologies instead of relying on in-house solutions.
B. Goal and Research Questions
Developing a BDS poses several challenges to developers, such as steep learning curves, lack of modeling and debugging support, and data consistency [4]. Moreover, the recent trend
towards the adoption of microservice architectures has shown that, without a careful design, drawbacks related to redundancy and data consistency may emerge [11]. Although there is a substantial body of work on microservice decomposition [8], [11], designing a microservice-based system without decomposing an existing monolithic system is not substantially covered in the literature [11]. Moreover, to the best of our knowledge, no work reports the challenges and lessons learned in replacing a legacy BDS with a new microservice-based EDA. Thus, our goal is to report the experience of replacing a legacy BDS with a microservice-based EDA without decomposing the existing system. To achieve this goal, we derived three Research Questions (RQs), which are detailed hereafter.
RQ1. What are the reasons that motivate the adoption of a microservice-based EDA to replace a legacy big data system? This first research question aims to understand the reasons that motivate the adoption of architectural alternatives different from the one found in the legacy big data system, particularly regarding the EDA and MS architectural styles. While there are several works on the motivations for migrating to MS-based architectures, we wanted to understand the specific motivations of our context.
RQ2. What are the benefits and limitations perceived on replacing a legacy big data system with a microservice-based EDA? The second research question explores the perceived benefits and limitations (post-development) related to the adoption of a new microservice-based EDA. We aim to uncover the technical decisions taken by the development team that led to drawbacks and positive results.
RQ3. What are the challenges faced and lessons learned while replacing a legacy big data system with a microservice-based EDA? The third research question concerns unveiling the challenges and lessons learned that were perceived throughout the development process.
C. Method
This section presents the research method employed to answer our research questions. The organization of the method follows the template of Santos and Travassos [13], which suggests one AR stage per section. Figure 2 depicts the AR process based on Davidson et al. [15]. The process starts with a diagnosis of the problem, followed by a plan to address the issue being investigated. Then, the plan is put into practice in the intervention phase. Lastly, evaluation and analysis are carried out. Although the AR process allows for iterating through phases to achieve results incrementally, this study reports a full cycle of the methodology.
1) Diagnostic: Santos and Travassos [13] argue that this stage “consists of exploring the research field, stakeholders and their expectations”. Thus, as this phase naturally maps to answering RQ1, we designed an exploratory survey to collect the expectations of Tecgraf’s main stakeholders involved in the project. With this survey, we aim to report on the drivers of the technical decision to move towards a microservice-based EDA and to obtain a view of the drawbacks found in the legacy BDS. Besides, the survey results allow us to better understand the problem context and to cross-validate the findings of subsequent steps of the AR process. The target population of this study is composed of product managers, software architects, and developers of the Tecgraf institute who contributed to the decision to replace the legacy BDS. The following questions compose our survey:
(Q1) What are the drawbacks found in the legacy BDS that motivate the substitution?
(Q2) What are the drivers for defining event-driven microservices as a target architecture?
(Q3) What characteristics of the legacy BDS are important to remain in the target system?
(Q4) What challenges would you expect to encounter in replacing a legacy BDS with a microservice-based EDA?
(Q1)-(Q3) are defined with the goal of gathering information on the legacy system and extracting requirements that must remain in the new architecture. (Q4) aims at gathering the stakeholders’ perception of the challenges incurred by the replacement process, so we can cross-check at the end whether these expectations are met. In addition, regarding the BDS, we conducted an analysis of the documentation and an inspection of the source code and historical commits to uncover technical challenges faced by developers at the time.
2) Planning: This stage concerns the definition of the actions to be taken. A component of this phase is conducting a literature survey for the purpose of examining the research theme [13]. Thus, we report our search process on the aforementioned themes and the sequenced set of activities to carry out during the intervention step.
3) Intervention: According to Santos and Travassos [13], this phase concerns the implementation of planned actions, which are depicted “in a chronological way, describing how the activities were performed during the research.” In this phase, data collection is a product of our experience playing the co-located role of software architects within Tecgraf/PUC-Rio. During this period, we analyzed the results of several meetings, emails, interviews, and technical documents, such as use cases and user interface screen prototypes, in order to confirm our findings regarding the process of replacing a legacy BDS. The material collection was conducted between July 2018 and January 2020. Through the intervention, we seek to unveil the challenges of adopting a microservice-based architecture from scratch (i.e., when no monolithic system is
refactored) with event-driven requirements. The results of this AR step enable us to partially answer RQ2 and RQ3.
4) Evaluation and Reflection: This phase regards analyzing the effects of the actions taken. Although our study provides the point of view of two interveners playing a software architect role, it is worthwhile to enrich our understanding of the effects of the intervention by collecting the perceptions of other developers who also played a role in the project, i.e., contributed to the project code base. Therefore, in order to mitigate risks related to reporting the outcomes of the intervention from a single point of view, we designed a survey to gather the perception of other developers about the new architecture. This also allowed us to cross-validate our findings to reduce limitations of the study. Through the evaluation, we are able to complement our answers to RQ2 and RQ3.
IV. RESULTS
This section provides detailed discussions on the AR methodology employed at Tecgraf to answer the RQs.
A. Diagnostic
The diagnosis phase “enables the identification of primary causes and circumstances faced by an organization” [13]. In this section, we analyze the project context by inquiring into stakeholders’ expectations and understanding the legacy BDS.
1) Survey with stakeholders: We applied the survey to four Tecgraf stakeholders (SH{1-4}) who were closely involved in the arrangements of the project. Regarding the drawbacks of the legacy BDS, the respondents unanimously agreed on the complexity and obsolescence of the source code. For instance, SH1 argues that the “code was poorly structured and documented,” and “the technology was obsolete.” Next, SH1 asserts that a driver for microservices adoption is that “data could grow rapidly and the performance could be a bottleneck; [...] an architecture able to escalate easily was very attractive.” Besides, SH2 highlights that an EDA “facilitate[s] the processing of events from different sources,” “[supporting] the communication and isolation among microservices.” Next, the respondents agreed that the core requirements to remain are georeferenced tracking of MOs and the definition of route and trajectory data of MOs. Finally, they agreed that the lack of technical maturity could impose challenges. For instance, the definition of API interfaces, microservice decomposition, database transaction issues, and high coupling among microservices were among the concerns raised.
2) Legacy big data system: In this step, we describe the process carried out to further understand the legacy BDS. We acquired access to the legacy system’s wiki, a repository for sharing information about the project. We found that the core functional requirements of the legacy system concerned efficient ingestion and fast retrieval of positioning data from MOs and the detection of non-compliance patterns in the streams, such as speed limit overruns. In production settings, problems related to data consistency started to arise. For instance, thread blocking and data races led to slowness in the processing of events. The process of understanding issues was complex, since traceability information was insufficient due to the large code base divided into several modules. We found that the system started with 500 MOs sending PD every minute and grew to 10,000 MOs sending PD every 15 seconds.
We also acquired access to the source code repository of the legacy BDS. Due to the large code base and minimal contact with developers of the legacy system, who were no longer part of the organization, the process lasted several weeks. In summary, we found that batches of streams were posted on in-memory queues, in a scheme similar to message queue systems. As multiple threads were employed to concurrently process the streams, this solution often led to data races and thread blocking; misuse of Java concurrency primitives was the main reason for these issues. We also found that a custom-tailored IFP engine had been engineered to monitor MOs (retrieved from the in-memory queues) and detect events such as route deviations. Finally, the legacy BDS comprises 400K LOC, divided into application code and libraries.
B. Planning
The foundations that guide our planning stage were elucidated in the diagnosis stage. In this stage, we sought to gather knowledge on the research themes by searching the literature. We submitted searches to the Scopus digital library regarding microservices and event-driven architecture. In summary, although studies on microservices are prevalent [8], [16], we found that defining a microservice (MS) architecture for a new system is still an open problem in the literature, with no clear guidelines [11]. Hence, we defined a sequence of phases to be followed in a defined timetable. We started with the conception phase, describing the requirements elicitation process. Next, the architecture and design phase was conducted. Lastly, with the architectural blueprint of the system, we were ready to start the implementation phase.
C. Intervention
In this section we provide an in-depth discussion of the process conducted to replace the legacy BDS with a microservice-based EDA. As suggested by Santos and Travassos [13], the intervention is described in a chronological way.
1) Conception: The project started with a researcher (the first author), a project manager (the fourth author), a senior developer, and two requirements analysts. Soon, the requirements started to change. By working closely with the industry partner, the requirements team realized that the partner’s needs were different from those described in the contract. As we aimed at a microservice-based architecture, this context played a role in the process of defining our services.
Although there are general guidelines on migration patterns and on deducing microservices from a monolithic system [17], [18], by the time we were investigating approaches to support the process of defining our services, we had not found studies focused on how to define a microservice architecture from scratch, i.e., when no monolithic system is decomposed.
The literature [17], [18] often cites Domain-Driven Design (DDD) [19] as a compelling technique to identify subdomains of the business, and industry practitioners advocate that each subdomain maps to a bounded context (a deployable unit). However, Evans [19] advocates first for a discovery process of the application domain, whose “understanding comes from diving in, implementing an initial design based on a probably naïve model, and then transforming it again and again.”
Hence, we followed the advice of starting with a naïve model based on the business capabilities (BCs) [20] identified so far (a 1-month period). A BC, also referred to as a bounded context [16], “is something that a business does in order to generate value,” often representing a delimited domain business model. We documented the requirements gathering meetings conducted and identified four major BCs of the domain, as shown in Figure 3. Our reasoning for defining the BCs is explained as follows.
Analysts plan a patrol (trip) composed of a route to be followed and a set of inspections (stop points) to be performed along the work journey. The structure of patrol and verification trips is the same; however, a verification corresponds to an unforeseen inspection triggered by the reception of a complaint, while a patrol is defined and scheduled in advance. Also, distinct analyst teams handle patrol planning and verifications. Therefore, we defined both as distinct BCs.
The Alerts BC is responsible for the ingestion, processing, and serving of alerts coming from any source. For instance, instruments installed in oil pipelines periodically send alerts concerning suspicious activities, such as manual excavations. Besides, through the mobile app, in-field operators can communicate messages and alerts to analysts in real time.
The Tracking BC is responsible for ingesting and processing real-time PD of patrols and verifications. An in-field team is assigned to a patrol or a verification, and tracking is automatically enabled by the mobile app they carry in operation. In addition, vehicles assigned to a team also send tracking data. Lastly, the Tracking BC is responsible for storing and serving the entire trajectory data history of mobile devices and vehicles.
2) Architecture and design: This section discusses how the elicited requirements were translated into an architectural design, which is exhibited in Figure 4.
Defining a target stack. The intervention process started at a slow pace due to frequent requirements changes. This context made the case for focusing on the analysis of suitable technologies for the target architecture. Tecgraf has long-lasting expertise in developing distributed systems with Java. Besides, as the developers were proficient in the language, Java was a natural choice as the main back-end programming language. As the project comprised the development of a web application with a distributed architectural style, and in order to meet stakeholder expectations of embracing open-source technology instead of writing custom-tailored infrastructure solutions (e.g., logging, data access, and dependency management) from scratch, a reasonable choice was to rely on a well-adopted framework to support our development. Thus, we listed a set of capabilities that the framework must deliver: support for the development of REST APIs [21]; support for dependency injection [22]; support for hot reload; embedded support for database access; integrated support for message queuing systems; and support for a reactive programming model. Although there are a number of feasible web framework options for the Java platform, we selected Spring due to its rich ecosystem, composed of multiple integrations built by a supportive community. Besides, support for Spring in question-and-answer communities and extensive online documentation played a role in the decision.
As scalability is a major driver of the target architecture, we opted to adopt the database-per-service pattern [23]. Besides, as the requirements were being progressively elicited, we aimed to avoid schema changes with each new version. We then listed three important features for a default persistence technology for our services: (i) a flexible schema model, (ii) geospatial query support, and (iii) high industrial adoption. MongoDB was selected due to its support for geospatial indexes, replication, load balancing, file storage, and the representation of complex associations within a single record.
Defining services corresponding to business capabilities. Patrol Planning and Verifications, as depicted in Figure 3, comprise distinct BCs. This understanding led us to design both as distinct microservices, as shown in Figure 4. In-field teams are assigned either to a planned patrol or to a verification, and, through the mobile app, they are able to retrieve data from the respective service to support daily operation. As a verification is spawned by a complaint (i.e., it is not planned daily), the mobile app is programmed to proactively check for an assigned verification at regular intervals. If the team is assigned a verification, the patrol is paused until the end of the verification operation.
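The pause/resume interplay between patrols and verifications described above can be sketched as a small state machine. This is an illustrative sketch only; the class, enum, and method names are ours, not the project’s.

```java
// Illustrative state machine for the patrol/verification interplay: assigning
// a verification pauses a running patrol, and finishing it resumes the patrol.
public class PatrolState {
    public enum State { IDLE, PATROLLING, ON_VERIFICATION }

    private State state = State.IDLE;

    public State state() { return state; }

    public void startPatrol() {
        if (state == State.IDLE) state = State.PATROLLING;
    }

    // A verification takes precedence over the ongoing patrol.
    public void startVerification() {
        if (state == State.PATROLLING) state = State.ON_VERIFICATION;
    }

    // Finishing the verification resumes the paused patrol.
    public void finishVerification() {
        if (state == State.ON_VERIFICATION) state = State.PATROLLING;
    }

    public static void main(String[] args) {
        PatrolState team = new PatrolState();
        team.startPatrol();
        team.startVerification();  // patrol paused
        team.finishVerification(); // patrol resumed
        System.out.println(team.state());
    }
}
```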
Next, we chose to define the Tracking BC as a microservice due to two reasons: (i) guarantee PD durability and (ii) enable retrieval of historical PD. As scalability of PD ingestion and retrieval is a central concern in the architecture, we listed essential quality attributes a data store must deliver in this case: (i) write-heavy workload support, (ii) availability, (iii) scalability, and (iv) consistency. Thus, we surveyed DBMS to compare additional quality attributes. The work of Lourenço et al. [24] gave us a starting point, as shown in Table II. Although not considered a database, but rather a pattern, we found worthwhile to also analyze CQRS [25] as a candidate solution. Given the superior write-performance, we selected Cassandra as our solution to intensive PD ingestion. From the point of view of event sourcing pattern [26], we modeled PD as an event, thus the state of a MO is represented as a sequence of state-changing events (i.e., a historical track).
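A minimal sketch of this event-sourced view follows; the names are illustrative (in the actual system the event log is persisted in Cassandra, not in memory):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Event-sourcing sketch: each positioning datum (PD) is an immutable
// state-changing event, and a MO's current state and historical track are
// derived from the event sequence rather than stored as mutable state.
public class MoTrack {
    // A positioning event: MO id, coordinates, and a timestamp (epoch millis).
    public record PositionEvent(String moId, double lat, double lon, long ts) {}

    private final List<PositionEvent> history = new ArrayList<>();

    // The track is an append-only log; past events are never mutated.
    public void append(PositionEvent e) { history.add(e); }

    // Current state is derived: the latest event in the sequence.
    public Optional<PositionEvent> currentState() {
        return history.isEmpty() ? Optional.empty()
                                 : Optional.of(history.get(history.size() - 1));
    }

    // Historical track between two instants, as served by the Tracking MS.
    public List<PositionEvent> trackBetween(long from, long to) {
        return history.stream()
                      .filter(e -> e.ts() >= from && e.ts() <= to)
                      .toList();
    }

    public static void main(String[] args) {
        MoTrack track = new MoTrack();
        track.append(new PositionEvent("vehicle-1", -22.97, -43.18, 1_000));
        track.append(new PositionEvent("vehicle-1", -22.96, -43.19, 2_000));
        System.out.println(track.currentState().get());
    }
}
```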
Following the advice of Balalaie et al. [17], who recommend “to start with a low number of services [...] and incrementally add more services” as the team understands the requirements better, we realized that letting the Tracking MS also handle the reception of data from external sources (e.g., vehicles and the mobile app) could compromise its performance in serving historical tracking data. Thus, in order to provide a single entry point for the reception of PD, we defined a MS (Signals) responsible for abstracting the reception of PD through a RESTful API, guaranteeing conformance to the defined API and communicating the PD to interested services. This choice proved right given the scalability requirements entailed by the application: we could increase the number of Signals instances to cope with a growing number of MOs sending PD without affecting the serving of historical data.
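The kind of edge validation Signals performs can be sketched as below. This is a hedged illustration: the field names and bounds are our assumptions, not the project’s actual API contract.

```java
// Illustrative sketch of edge validation in a Signals-like service: an
// incoming positioning datum (PD) is checked for conformance before being
// published, so downstream consumers can trust the topic's contents.
public class SignalsValidator {
    // A PD must carry an MO id, a timestamp, and WGS84-bounded coordinates.
    // (Field names and bounds are illustrative assumptions.)
    public static boolean isValid(String moId, Long timestamp,
                                  Double lat, Double lon) {
        if (moId == null || moId.isBlank()) return false;
        if (timestamp == null || timestamp < 0) return false;
        if (lat == null || lat < -90.0 || lat > 90.0) return false;
        if (lon == null || lon < -180.0 || lon > 180.0) return false;
        return true;
    }

    public static void main(String[] args) {
        // A well-formed PD passes; one with an out-of-range latitude does not.
        System.out.println(isValid("vehicle-1", 1000L, -22.9, -43.2));
        System.out.println(isValid("vehicle-1", 1000L, 99.0, -43.2));
    }
}
```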
Lastly, we defined a specific MS (Alerts) responsible for receiving, processing, and serving alerts. In other words, alerts are events communicated to the system that should be stored consistently and communicated to interested services. Thus, we applied the transactional outbox pattern [27], which prescribes that a service publishing a domain event [28] (in our case, an alert) must atomically update the database and publish the event. With a growing number of event classes and interested services, it is important to have a unit of scalability, which this MS represents.
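The essence of the transactional outbox pattern can be sketched as follows. The in-memory “tables” and “broker” stand in for the actual database and Kafka; this is a simplified illustration, not the project’s implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Transactional outbox sketch: the alert row and its outbox row are written
// in one atomic step, and a separate relay later publishes pending outbox
// rows to the broker. If the "transaction" fails, neither row exists, so the
// stored state and the published events can never diverge.
public class OutboxDemo {
    public record Alert(String id, String payload) {}

    public static final List<Alert> alertTable = new ArrayList<>();
    public static final Deque<String> outboxTable = new ArrayDeque<>();
    public static final List<String> broker = new ArrayList<>();

    // One atomic "transaction": either both writes happen or neither does.
    public static synchronized void storeAlert(Alert a) {
        alertTable.add(a);
        outboxTable.add(a.payload()); // event written in the SAME transaction
    }

    // The relay drains the outbox and publishes to the broker afterwards;
    // delivery is at-least-once, so consumers must be idempotent.
    public static synchronized void relay() {
        while (!outboxTable.isEmpty()) broker.add(outboxTable.poll());
    }

    public static void main(String[] args) {
        storeAlert(new Alert("a1", "suspicious excavation near pipeline"));
        relay();
        System.out.println(broker);
    }
}
```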
Defining domain events. Domain events [28] are often employed in EDA to notify services of a change in the state of a domain object. A domain event, when received by a service, may trigger actions to be performed. A domain event is often published through a communication channel (a.k.a. topic) that is subscribed to by interested parties, allowing low-coupled and non-blocking message passing [29]. This approach is particularly important in data-intensive systems to avoid polling mechanisms, which may become a bottleneck over time. In our case, domain events were elicited along brainstorming meetings to evolve the domain knowledge; due to space constraints, we summarize the main ones in Table I. Based on the work of Brandolini [30], the columns are explained as follows: an Actor is the source object of an action; a Command represents an action triggered by an actor; and an Event is the consequence of an action.
As mentioned earlier, a PD received by the system is a domain event representing an update of the state of a MO (mobile app or vehicle). When a PD is received by Signals, it checks the conformance of the PD object and publishes it to a Kafka topic called signals. Although we do not adopt the transactional outbox pattern [27] in Signals, we allow for faster communication of the new state to interested services (since we do not wait for a synchronous database operation) by relying on Kafka’s consistency guarantees, which configures a suitable trade-off for a (quasi) real-time system. In turn, every alert processed by the Alerts MS, after being stored in the database, is published on a topic called alerts. After analysis (by an analyst), an alert may result in the assignment of a verification to an in-field team. This assignment event is then delivered to the given in-field team through the mobile app.
Serving domain events to end-users. A user interface application (UI-APP) was developed in Angular [31] to enable analysts to visualize domain events (e.g., alerts) in real time. We chose not to implement polling mechanisms in our UI-APP because we envisioned that real-time retrieval of domain events from our microservices could experience high latency as the number of users and the amount of data grow. A technology to intermediate domain events flowing from Kafka topics to web browser clients was necessary. Thus, we found that WebSocket [32] makes a suitable protocol to handle real-time event-driven communication in the front-end layer by avoiding polling the server for data.
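The broker-to-browser bridge can be sketched as a fan-out: a consumer reads domain events from a topic and pushes them to every subscribed session. Sessions are modeled here as plain callbacks; a real bridge would push over WebSocket connections (e.g., via Spring’s messaging support), and the names below are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the topic-to-UI bridge: events consumed from the broker are fanned
// out to subscribed sessions, so the UI never polls the server for data.
public class EventBridge {
    private final Map<String, List<Consumer<String>>> sessions = new HashMap<>();

    // A browser session subscribes to a topic of interest (e.g., "alerts").
    public void subscribe(String topic, Consumer<String> session) {
        sessions.computeIfAbsent(topic, t -> new ArrayList<>()).add(session);
    }

    // Invoked once per event consumed from the broker: push to every session.
    public void onEvent(String topic, String event) {
        sessions.getOrDefault(topic, List.of()).forEach(s -> s.accept(event));
    }

    public static void main(String[] args) {
        EventBridge bridge = new EventBridge();
        bridge.subscribe("alerts", e -> System.out.println("UI received: " + e));
        bridge.onEvent("alerts", "route deviation detected");
    }
}
```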
<table>
<thead>
<tr>
<th>Actor</th>
<th>Command</th>
<th>Event</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mobile app</td>
<td>Start patrol</td>
<td>Patrol started</td>
</tr>
<tr>
<td>Analyst</td>
<td>Assign verification</td>
<td>Verification assigned</td>
</tr>
<tr>
<td>Mobile app</td>
<td>Start verification</td>
<td>Patrol paused</td>
</tr>
<tr>
<td>Mobile app</td>
<td>Finish verification</td>
<td>Patrol resumed</td>
</tr>
<tr>
<td>MO</td>
<td>Update positioning</td>
<td>MO state changed</td>
</tr>
<tr>
<td>Processing</td>
<td>Register route deviation</td>
<td>Route deviation detected</td>
</tr>
</tbody>
</table>
TABLE I: DOMAIN EVENTS IDENTIFIED
Information flow processing. Cugola and Margara [14] refer to systems that “require processing continuously flowing data from geographically distributed sources [...] to obtain timely responses to complex queries” as information flow processing (IFP) applications. At first we investigated Flink [33] as our IFP engine due to its ability to compute operations over stream data, like our real-time PD. However, as asserted by the Orleans documentation [34], these systems present a “unified data-flow graph of operations that are applied in the same way to all stream items,” which hinders applying filtering or aggregation operations over different data items in the same computation. For example, as part of checking the real-time trajectory data of MOs against their respective planned routes, we found limited support for integrating an external call to the Patrols MS API to retrieve the planned route. Thus, we built a tailored solution (Processing in Figure 4) that takes advantage of Spring’s reactive primitives and in-memory data processing. Based on a 5-minute time window, Processing retrieves the PD associated with each in-field team (from the signals topic) and triggers the route deviation detection computation. If a deviation is detected, a route deviation is registered and the respective event is triggered. The Alerts MS acknowledges the event as a new alert and publishes it on the alerts topic. This separation of concerns allows us to scale separate parts of the system independently.
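The core of the windowed deviation check can be sketched as below. This is a deliberate simplification: the real computation would use geodesic distance and point-to-segment projection, and the threshold value is an illustrative assumption.

```java
import java.util.List;

// Simplified sketch of the windowed route-deviation check: positions observed
// in a time window are compared against the planned route, and a deviation is
// flagged when a position is farther from every route point than a threshold.
public class DeviationCheck {
    public record Point(double lat, double lon) {}

    static double dist(Point a, Point b) {
        double dx = a.lat() - b.lat(), dy = a.lon() - b.lon();
        return Math.sqrt(dx * dx + dy * dy); // planar approximation
    }

    // True if any position in the window deviates from the planned route.
    public static boolean deviated(List<Point> window, List<Point> route,
                                   double threshold) {
        for (Point p : window) {
            double best = Double.MAX_VALUE;
            for (Point r : route) best = Math.min(best, dist(p, r));
            if (best > threshold) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<Point> route = List.of(new Point(0, 0), new Point(0, 1));
        // On-route window vs. a window containing an off-route position.
        System.out.println(deviated(List.of(new Point(0, 0.5)), route, 0.6));
        System.out.println(deviated(List.of(new Point(1, 0.5)), route, 0.6));
    }
}
```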
3) Implementation: In total, an overall effort of more than 7,500 hours was invested by the development team, resulting in a system of around 30,000 lines of Java code, 14,000 lines of TypeScript code, and 29 data tables and documents. On average, each MS has 2,500 lines of code. Due to the large number of microservices and supporting technologies, and the lack of knowledge of state-of-the-art DevOps tools, a great effort was put into deployment. For instance, Docker containers were used to package our services, and several fixes that had not been anticipated were implemented only to adapt to Docker deployment. This context makes the case for introducing DevOps earlier in the development process.
D. Evaluation and Reflection
This section reports the results of the survey conducted with developers and discusses challenges and lessons learned. Although the lessons learned are related to the specific action research project context described in this paper, we believe most of them are generalizable to other industrial settings.
1) Survey with developers: As mentioned in Section III-C4, we designed a survey to collect the points of view of three developers who collaborated in the development process. First, we defined a set of challenges collected along the intervention. Then, we questioned the developers on their agreement and also asked them about additional challenges. Their perceptions are summarized in the next section. Due to space constraints, survey details are available online [35].
2) Challenges and lessons learned:
Defining microservices. Although Patrols and Verifications represent different domain concepts, which lead to different domain events, designing them as distinct microservices caused the problem of duplicate concepts [19], in which duplicated effort on each requirement change (as a result of newly acquired knowledge) was observed along the development life cycle. As a lesson learned, we suggest following the advice of Fowler [36], who advocates the monolith-first approach, in which a project “shouldn’t start [...] with microservices, even if you’re sure your application will be big enough to make it worthwhile.” Waiting for requirements to mature is essential to define microservices properly. However, emerging research discusses model-driven development of microservice-based systems, which may help mitigate some of the impedance in microservice design [37].
Data modeling. The fast-paced development process, together with the adoption of novel technologies, made it challenging to get data modeling right. The distributed architecture forced us to adopt schema-less and denormalized data models, encapsulated through APIs, rather than the normalized data models and data consistency guarantees usually found in monolithic systems, in line with Gorton and Klein’s discussion of BDS [3]. Furthermore, even though designing service communication based on domain events augments the expressiveness of the domain, from the developers’ point of view the myriad of services and technologies led to difficulties in troubleshooting problems. The complex data flow entailed by the application often led to misunderstandings and slowed the process of identifying the root cause of errors.
Selecting an IFP engine. Attempts to translate our IFP requirements to Flink were unsuccessful (see Section IV-C2). A second problem was the short time window available for implementing our IFP use cases. As the number of technologies employed was already large and the team had no previous experience with IFP engines, we realized that the learning curve could compromise subsequent sprints. As a lesson learned, we highlight that the selection of an IFP solution is an architectural decision: the chosen IFP engine should fit the architecture, and not the other way around. Furthermore, we consider Orleans streams [34] a promising candidate for expressing computations that span different items, due to its flexible processing engine.
Embracing failure. Some MS-oriented frameworks (e.g., Spring) offer little support for failure handling in workflows spanning multiple microservices. For instance, in the absence of distributed transactions, the developer must hard-code the logic for recovering from failures in such workflows. Furthermore, given the fine-grained nature of MS instances and the difficulty of reasoning globally over each microservice’s local state, we advocate a programming model that specifies fault-tolerance properties we can reason about for requests spanning multiple microservices.
V. CONCLUDING REMARKS
This study reports an industrial experience in replacing a legacy monolithic BDS with an event-driven microservice-based architecture. Microservices promise to react automatically to failure and changing workloads, to be independently deployable, and to support polyglot technologies [7], [12]. EDA promises a reactive programming model among highly cohesive components that react to incoming events by performing a computation or triggering one in another component [9].
However, the joint use of microservices and EDA has not previously been discussed in the context of BDS. Moreover, we show how microservices can be defined without refactoring a legacy monolithic system. From requirements elicitation, through architecture design, to implementation, we provided an example of how a system with data-intensive requirements can benefit from microservice and event-driven principles.
The main takeaways from our experience are as follows. Defining microservices too early in the development process may yield a wrong decomposition; in a fast-paced development scenario, waiting for requirements to mature is essential to getting microservices right. On the one hand, microservices’ support for easier maintenance and fault isolation was perceived as a benefit of the architecture; on the other hand, the complex data flow entailed by the number of microservices, as well as the myriad of technologies, was perceived as a drawback.
REFERENCES
CS352 Lecture - Concurrency
Last revised March 29, 2021
Objectives:
1. To introduce locking as a means of preserving the serializability of concurrent schedules.
2. To briefly introduce other approaches to this problem (timestamps, validation, multi-version and snapshot schemes)
3. To introduce other important issues (phenomena related to insert/delete; weak levels of consistency; concurrency issues with index structures)
Materials:
1. Projectables of examples of locking and unlocking (4)
2. Projectable of compatibility matrix for shared and exclusive locks
3. Projectable of example of deadlock
4. Projectable of example of how locking does not guarantee serializability
5. Projectable of precedes graph for the above
6. Projectable of a non-serializable schedule that can be made serializable by the use of a multiversion scheme
7. Projectable of a transaction that can give rise to phantom rows
I. Introduction
A. We have previously considered the notion of a transaction, and noted that, if properly formed transactions are executed serially, with one completing before the next starts, we are guaranteed to preserve consistency in the database. In particular, a SERIAL SCHEDULE composed of consistent transactions always preserves database consistency.
B. There are many practical reasons for not wanting to be restricted to serial execution, thus allowing two or more transactions to be processed in parallel. (The individual steps are still done serially, but steps of one transaction can be interwoven with steps of another.)
1. Better use of system resources through overlapping IO and computational activities.
a) Being able to perform useful computation on the CPU while waiting for a disk access to complete.
b) If the system has multiple disks, then being able to do multiple disk accesses in parallel, given that better than 90% of the time consumed in a disk access is spent on head movement and rotational latency.
2. Support for access to the database by multiple users simultaneously.
3. If a transaction is interactive, and there is some delay while a human responds to some aspect of it, we do not want the system to be totally idle.
4. Not making the database totally inaccessible while processing a long-running transaction (e.g. posting interest to all savings accounts.)
C. We have seen that we can permit concurrency and still preserve consistency provided that we schedule overlapping transactions in a way that is SERIALIZABLE - i.e. equivalent to a serial schedule, and if we ensure that any schedule we produce is RECOVERABLE - i.e. does not allow a transaction to fully commit until any transaction whose data it has read has already fully committed.
D. In this lecture, we look at various strategies that might be used to ensure serializability. (Some also deal with recoverability).
II. Ensuring serializability by the use of locks
A. Thus far, we have discussed methods for testing a schedule to see if it is serializable. Since a concurrent system seldom "knows" ahead of time what sequence of transactions it will receive, a more useful goal is the development of a set of rules that ensures that any schedule that develops over time is serializable. One way to do this is by using a LOCKING PROTOCOL of some sort.
1. We extend the DBMS primitives beyond the basic read and write to include primitives lock and unlock. Basically, a lock operation excludes other transactions from certain kinds of access to a data item, and an unlock operation releases that hold.
If a transaction needs a lock on a data item that it cannot get because some other transaction holds an incompatible lock, it is forced to wait until the lock can be granted.
2. These primitives do not appear as explicit DML statements, however. Instead, certain kinds of database operations result in appropriate locks being acquired as part of their implementation by the DBMS.
Example: a SQL UPDATE involving a computation on some column will lock the column value at least from before it reads it to after it writes the updated value (and perhaps longer).
Note: DBMS locking differs from the kind of locking we have met in Java, where the programmer is responsible for explicitly requesting it by using the keyword synchronized. DBMS locking is managed automatically as operations that require locks are executed.
3. In particular, locking is tied in with the notion of transactions, in the sense that sometimes locks acquired during a transaction will need to be held until the transaction is fully committed, as we shall see.
4. In the discussion below, we will show locking / unlocking operations explicitly. But this is only for pedagogical purposes - in reality, locking is done by the DBMS when an operation that requires a lock is started, and unlocking is done by the DBMS at the end of a transaction (or sometimes earlier). (There is no such thing as a LOCK or UNLOCK statement in SQL!)
B. Granularity of locks
1. One important issue in any locking system is the GRANULARITY of the locks - i.e. what size objects can be locked.
a) The coarsest granularity is locking at the table level - any transaction wishing to access the table must lock the entire table. This is seldom an adequate solution since it severely limits potential concurrency.
b) More desirable is locking at the row or even column level. Either an entire row can be locked, or possibly just some columns of a row. (Row locking is usually fine enough, though, and involves much less overhead.)
c) However, since data is usually read and written in physical blocks or pages, not by rows, locking is often implemented at the disk block or page level. Thus, a transaction wishing to lock a particular row will, in fact, end up locking the entire page on which it is stored. (This granularity works well with crash control schemes based on shadow paging which we will consider under the topic of crash recovery).
2. Some systems allow multiple lock granularities - e.g. the possibility of locking an entire table, or just one part (row or database page). This can be useful if, for example, a transaction is performing some update operation on most or all of the rows in a table - in which case locking the entire table is more efficient than locking the rows one by one (especially if consistency requires that all the rows remain locked until the transaction completes.)
C. Locking protocols are generally based on two kinds of locks:
1. A SHARED lock is used when a transaction wants to read an item, but does not want to change it. It allows other transactions to also obtain shared locks, so they can read the item, provided they too do not change it. The transaction obtaining a shared lock on an item must hold it as long as it wants to be sure that the item is not altered.
We will use the notation lock-s(item)
Example: A transaction that simply prints a user's current balance on some one account need not hold a lock on that balance after it has read it.
We will show this as:
```
lock-s(balance)
read(balance)
unlock(balance)
```
PROJECT
Example: A transaction to print the total balance of all accounts held by a given user must hold a shared lock on each individual balance until it has read all of them. This will ensure that any simultaneous transfer of money between accounts does not result in an erroneous total amount being displayed.
We will show this as:
```
lock-s(balance1)
read(balance1)
lock-s(balance2)
read(balance2)
...
unlock(balance1)
unlock(balance2)
...
```
PROJECT
2. An EXCLUSIVE lock is used when a transaction wants to write an item (but also will allow it to read the item).
a) A transaction wanting to obtain an exclusive lock on a given item will be forced to wait until any other transaction holding any kind of lock on the item releases it.
b) While the exclusive lock is held, no other transaction can obtain any kind of lock on the item.
c) If the desired operation is a read-modify-write on an item, then the transaction must obtain the exclusive lock before doing the read; or must obtain a shared lock before the read which is then upgraded to exclusive before the write. (That is, it must hold some kind of lock on the item at all times between the read and the write.)
d) An exclusive lock must remain in force until the transaction either commits or is rolled back, to prevent lost or fallacious updates.
We will represent the obtaining of an exclusive lock by lock-x - e.g. if a transaction gives 5% interest to an account, the transaction might be represented as:
```
lock-x(balance)
read(balance)
[ add interest amount ]
write(balance)
unlock(balance)
```
PROJECT
3. To increase concurrency, some systems allow for the upgrading of locks. The book, for example, suggested the possibility of allowing a read-modify-write transaction to obtain a shared lock before its read and to upgrade it to an exclusive lock just before doing its write, instead of holding an exclusive lock the whole time.
Example - the above case (though here this probably wouldn't be beneficial):
```
lock-s(balance)
read(balance)
[ add interest amount ]
lock-upgrade(balance)
write(balance)
unlock(balance)
```
PROJECT
4. When a transaction holds a lock on some item and another transaction needs to acquire a lock on the same item, we say that the locks are compatible if both transactions can hold their locks at the same time, and incompatible if the later transaction must be forced to wait until the earlier transaction releases its lock. In the case of the two types of locks we have considered, the rules of compatibility are as follows:
PROJECT Compatibility matrix for shared and exclusive locks
(Some locking systems use more than these two types of lock - in which case the compatibility matrix may have more rows and columns.)
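The matrix can be made concrete in a few lines. The following is a pedagogical Python sketch, not any real DBMS's lock manager; the function name and the list-of-held-modes representation are ours:

```python
# Lock compatibility for shared (S) and exclusive (X) locks.
# compatible[(held, requested)] is True iff a new lock of the
# requested mode can be granted while a lock of the held mode
# is already in place on the same item.
compatible = {
    ("S", "S"): True,   # many readers may share an item
    ("S", "X"): False,  # a writer must wait for readers
    ("X", "S"): False,  # readers must wait for a writer
    ("X", "X"): False,  # writers exclude each other
}

def can_grant(held_modes, requested):
    """A requested lock is granted only if it is compatible
    with every lock currently held on the item."""
    return all(compatible[(h, requested)] for h in held_modes)
```

Note that an item with no locks at all (an empty list of held modes) grants any request, and that adding further lock types would simply add rows and columns to the dictionary.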
D. One problem that can arise with the use of locking protocols is DEADLOCK.
1. Two transactions T1 and T2 are deadlocked if T1 holds a lock on a resource R1 and needs to obtain a lock on another resource R2 before it can release the lock on R1, and if T2 similarly holds an incompatible lock on R2 and needs an incompatible lock on R1.
Example: Consider two transactions: one to transfer $50.00 from a customer's checking account to his savings account, and another to print the total balance in his two accounts.
One possible schedule looks like this:
```
Transfer transaction T1                Inquiry transaction T2
lock-x(checking balance)
read(checking balance)
calculate new balance = old - 50
write(checking balance)
                                       lock-s(savings balance)
                                       read(savings balance)
                                       lock-s(checking balance) -- MUST WAIT
lock-x(savings balance) -- MUST WAIT
```
PROJECT
Under this schedule, the inquiry transaction will only execute as far as the lock-s(checking balance) before being forced to wait, and the transfer transaction will only execute as far as the lock-x(savings balance) before being forced to wait. Now neither transaction can proceed, and thus neither can unlock the resource the other needs.
In this case, the deadlock caused by locking actually prevents an unserializable schedule from being created. If T2 were able to proceed to read the checking balance and then print the sum, the result would be wrong since the checking balance has already been reduced but the savings balance has not yet been increased.
2. In the operating systems portion of the software systems course, we discuss several ways of dealing with deadlock:
a) Deadlock prevention - design a scheme in such a way that deadlock can never occur. (This is not always possible)
b) Deadlock avoidance - before any lock is granted, check to see if granting it might lead to a situation in which deadlock could occur. If so, delay it. (For example, in the above scenario, the system could force the inquiry transaction to wait when it requests the lock on
savings balance. This would allow the transfer transaction to complete and release its locks; then the inquiry transaction could run.)
Deadlock avoidance also generally requires some advance knowledge as to how a transaction will behave.
c) Deadlock detection - we allow deadlock to occur; but when it does, we deal with it by choosing one of the deadlocked transactions, rolling it back, and then restarting it after first allowing the other transaction to proceed past the point of the deadlock.
3. DBMS's often use the third approach of deadlock detection and recovery by rollback, since deadlock prevention may limit concurrency too much, and deadlock avoidance requires advance knowledge of a transaction's behavior that may not be available. Deadlock detection and recovery is generally NOT regarded as a good solution for operating systems, because the rollback costs may be too large; but it is acceptable for DBMS's because:
a) Most transactions are relatively small - thus, the cost of a rollback is relatively small.
b) DBMS's deal with large numbers of resources (e.g. each row in the database); thus, the probability that two transactions will deadlock over a given resource is relatively small and deadlocks will be relatively rare.
c) DBMS's have to be prepared to deal with transaction rollback in any case.
Example: A DBMS might well allow the deadlock described above to occur - at which point the inquiry transaction could be rolled back and restarted after the transfer transaction completes
4. When a transaction is rolled back because of a deadlock, it is typically automatically restarted from scratch, so that all that happens is that its completion is delayed. Some systems also allow partial rollbacks, in which a transaction is rolled back to the point where it requested the lock that was involved in the deadlock. (This, of course, requires that the system have provision for "remembering" the internal details of the transaction.)
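Deadlock detection amounts to finding a cycle in the wait-for graph, where an edge Ti -> Tj means Ti is waiting for a lock that Tj holds. A small Python sketch using depth-first search; the dict-based graph format is invented for illustration:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph, given as a dict mapping
    each transaction to the set of transactions it is waiting on.
    Returns a set of transactions on some cycle (candidates for
    rollback), or None if the graph is acyclic (no deadlock)."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {t: WHITE for t in wait_for}

    def dfs(t, path):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:          # back edge: a cycle
                return set(path[path.index(u):])
            if color.get(u, WHITE) == WHITE:
                found = dfs(u, path + [u])
                if found:
                    return found
        color[t] = BLACK
        return None

    for t in list(wait_for):
        if color[t] == WHITE:
            cycle = dfs(t, [t])
            if cycle:
                return cycle
    return None
```

In the example above, T1 waits on T2 (for the checking balance) and T2 waits on T1 (for the savings balance), so the detector would report the cycle {T1, T2} and one of the two would be chosen as the victim to roll back.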
E. Simply insisting that a transaction obtain an appropriate kind of lock on an item before accessing it, however, does not guarantee serializability.
Example: Consider the parallel execution of a transaction to transfer $50 from a checking account to a savings account (owned by the same person) and a transaction to print the total balance in the person's accounts. Obviously, we don't care whether the total balance is computed on the basis of the balances before the transfer or after the transfer, since the total balance is the same in either case.
1. However, the following is a non-serializable schedule that yields the wrong result, even though the locking rules we have discussed thus far are used. Here we assume an initial balance of $100 in checking and $200 in savings - so the printed sum should be either $50 + $250 or $100 + $200, i.e. $300 in either case.
```
Transfer transaction T1                Total balance inquiry T2
                                       lock-s(savings balance)
                                       read savings balance (200)
lock-x(checking balance)
read checking balance (100)
write checking balance (100-50=50)
unlock(checking balance)
                                       lock-s(checking balance)
                                       read checking balance (50)
                                       unlock(savings balance)
                                       unlock(checking balance)
lock-x(savings balance)
read savings balance (200)
write savings balance (200+50=250)
unlock(savings balance)
                                       print sum (250)
```
Note that each transaction obtains an appropriate lock before it does a read or write operation, and the inquiry transaction retains its lock on the savings balance until after it has read the checking balance, too, in an attempt to ensure that an update does not come along and alter one of the values and so mess the result up. However, this does not protect it from error, since the transfer transaction has left the sum of the balances momentarily incorrect, but with no locks held.
2. To prevent such inconsistencies, some protocol must be used to govern the ORDER in which a transaction acquires and releases locks.
3. One strategy that is widely used is called the TWO-PHASE locking protocol. In this protocol, we require that a transaction cannot acquire any new locks after it has once released a lock that it has held. Thus, a transaction will execute in two phases:
a) In the GROWING phase, it may acquire new locks, but may not release any. (Converting a lock from a lower level to a higher level - eg from shared to exclusive - is also allowed in this phase.)
b) In the SHRINKING phase, it may release locks it holds, but not acquire new ones. The shrinking phase begins, then, with the first unlock operation done by the transaction.
Example: The inquiry transaction above is two-phase, but the transfer transaction is not. Notice how a non-two-phase transaction is able to destroy the consistency of another transaction, even though the latter is two-phase. To ensure consistency, EVERYBODY must obey the protocol.
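The two-phase rule itself is easy to check mechanically: once any unlock has occurred, no further acquire (or upgrade) is allowed. A minimal Python sketch, using a made-up trace format of (kind, item) pairs:

```python
def is_two_phase(ops):
    """Check that a transaction's operation trace obeys the
    two-phase rule: after it has released any lock, it may not
    acquire or upgrade another.  Each op is a (kind, item) pair
    with kind in {'lock-s', 'lock-x', 'upgrade', 'unlock'}."""
    shrinking = False
    for kind, item in ops:
        if kind == "unlock":
            shrinking = True        # the shrinking phase has begun
        elif shrinking:             # any acquire/upgrade after an unlock
            return False
    return True
```

Applied to the example above: the transfer transaction unlocks the checking balance and then acquires a lock on the savings balance, so it fails the check; the inquiry transaction acquires both its locks before releasing either, so it passes.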
c) Why does the two phase protocol work?
(1) Observe that, for any schedule S, we can construct a directed graph using the precedes relationship: there is an edge from Ti to Tj iff in S Ti acquires a lock on some resource A BEFORE Tj acquires an incompatible lock on the same resource. We say that Ti PRECEDES Tj in S (written Ti -> Tj).
If this graph is acyclic, then the schedule is serializable.
Example: The precedes relationship for the above schedule:
[Precedes graph: T1 -> T2 (over the checking balance) and T2 -> T1 (over the savings balance), forming a cycle]
PROJECT
This graph is not acyclic, and the schedule is not serializable.
(2) The two-phase protocol guarantees serializability because the precedes graph for any set of two-phase transactions is acyclic. To see this, consider two transactions T1 and T2 that must both hold incompatible locks on the same item or items. The only way the precedes graph could contain a cycle involving these two transactions is if T1 locks some item T2 needs before T2 does, and T2 locks some item T1 needs before T1 does. But in this case, T1 and T2 will deadlock, since neither can release the lock it holds while it still needs to acquire another. One will then be rolled back, and the cycle in the precedes graph will be destroyed.
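The precedes-graph test can be sketched in Python. Note that this sketch builds the graph from conflicting read/write operations rather than from lock acquisitions - an equivalent formulation for our purposes - and the (txn, op, item) schedule format is invented for illustration:

```python
def precedes_edges(schedule):
    """Build the precedes relation from a schedule, given as a
    list of (txn, op, item) triples with op in {'read', 'write'}.
    Ti -> Tj iff some op of Ti conflicts with a LATER op of Tj:
    same item, different transactions, at least one write."""
    edges = set()
    for i, (ti, op_i, item_i) in enumerate(schedule):
        for tj, op_j, item_j in schedule[i + 1:]:
            if ti != tj and item_i == item_j and "write" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def is_serializable(schedule):
    """Serializable iff the precedes graph is acyclic; checked by
    repeatedly removing nodes that have no incoming edges."""
    edges = precedes_edges(schedule)
    nodes = {t for e in edges for t in e}
    while nodes:
        free = [t for t in nodes if not any(b == t for _, b in edges)]
        if not free:               # every remaining node is on a cycle
            return False
        nodes -= set(free)
        edges = {(a, b) for a, b in edges
                 if a not in free and b not in free}
    return True
```

Running it on the multiversion excerpt discussed later (T2 writes A, T1 reads A, T1 writes B, T2 reads B) yields both T2 -> T1 and T1 -> T2, so the schedule is reported non-serializable, matching the argument in the text.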
d) The two-phase protocol can be extended to also ensure recoverability, as follows:
(1) The STRICT two-phase protocol requires that all exclusive locks acquired by a transaction be held until the transaction commits.
(2) The RIGOROUS two-phase protocol requires that all locks (both shared and exclusive) acquired by a transaction be held until the transaction commits.
Both guarantee cascadeless recoverability, because no transaction can ever read data written by an uncommitted transaction.
e) The latter two variants of the two-phase protocol are widely used, though they must be accompanied by some scheme for deadlock detection and recovery, since two-phase locking can lead to deadlock.
(1) In this case, deadlock serves to prevent the system from completing an unserializable schedule.
(2) Deadlock is not a major problem, because a transaction that is rolled back due to deadlock can generally be restarted - usually with minimal cost unless it is a long-running transaction (in which case measures may be implemented to prevent other transactions from getting in its way.)
4. While the two-phase locking protocol is widely used, it is not the only possibility. The book discusses an alternative tree-based approach which we will not pursue here. (This has the additional advantage of preventing deadlock, but it allows less concurrency.)
III. Other Ways of Ensuring Serializability
A. Locking is a widely-used way of ensuring serializability. But it is not the only possibility.
B. Another approach is based on TIMESTAMPs. As each transaction enters the system, it is given a unique timestamp which is some sort of clock reading or serial number that is different for each transaction.
1. We refer to the timestamp of some transaction T as TS(T).
2. We associate with each data item (e.g. each row in a table or each database page) two timestamps - R-timestamp, which is the timestamp of the last transaction that read it, and W-timestamp, which is the timestamp of the last transaction that wrote it.
3. We observe the following rules:
a) A transaction may not read a data item if the W-timestamp of that item is greater than the transaction's timestamp. If it attempts to do so, it must be rolled back.
b) A transaction may not write a data item if either the R-timestamp or the W-timestamp of that item is greater than the transaction's timestamp. If it attempts to do so, it must be rolled back.
There is a variant of this known as Thomas's write rule (discussed in the book): in the case where only the W-timestamp of the item is greater than the transaction's timestamp, the write is obsolete, so it is simply ignored instead of rolling the transaction back.
c) If a transaction is rolled back, it is restarted with a fresh timestamp (which is necessarily larger than the timestamps of the items it failed to access) and it is started over from the beginning.
4. The protocol operates to ensure that the actual schedule is equivalent to a serial schedule in which $T_i$ completes before $T_j$ starts iff $TS(T_i) < TS(T_j)$.
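The read and write rules above can be sketched as follows. This is illustrative only; the class and method names are ours, and a real implementation would raise an exception or signal the scheduler rather than return a boolean:

```python
class TimestampItem:
    """One data item under basic timestamp ordering.  r_ts and
    w_ts are the R- and W-timestamps of the item; a read or write
    either succeeds (updating the timestamps) or reports that the
    calling transaction must be rolled back."""
    def __init__(self):
        self.r_ts = 0
        self.w_ts = 0

    def read(self, ts):
        if ts < self.w_ts:                 # written by a "later" txn
            return False                   # caller must roll back
        self.r_ts = max(self.r_ts, ts)
        return True

    def write(self, ts):
        if ts < self.r_ts or ts < self.w_ts:
            return False                   # caller must roll back
        self.w_ts = ts
        return True
```

For instance, after a transaction with timestamp 5 writes the item, a transaction with timestamp 3 attempting to read it must be rolled back, whereas one with timestamp 7 reads successfully (and then blocks any write by a transaction with timestamp below 7).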
C. Both locking and timestamp protocols operate by STOPPING a transaction from doing a read or write that would result in a non-serializable schedule. Alternately, we might allow a transaction to read and write as it needs to, without interference; but then, before it commits, we check to see if the outcome is serializable. This can be very advantageous if the majority of transactions are read-only ones that do not interfere with one another. Such an approach is known as VALIDATION.
We will not discuss the details of this approach (which are quite complex) beyond noting that the basic idea is this: when a transaction partially commits, a validation test is performed to see if it has read or written anything it should not have, in light of the timestamps. If so, it is rolled back.
D. Yet another approach ensures serializability by a Multi-Version scheme.
1. Consider the following excerpt from a non-serializable schedule:
```
T1              T2
                write(A)
read(A)
write(B)
                read(B)
```
As it stands, this is a non-serializable schedule. The write(A) in T2 followed by the read(A) in T1 requires that, in any equivalent serial schedule, T2 must precede T1; but the write(B) in T1 followed by read(B) in T2 requires the opposite.
2. Suppose, however, that when T1 writes B we retain the OLD value of B along with the new value. (We keep two versions of B in the database.) Then, when T2 reads B, we can give it the OLD value, producing a schedule equivalent to the serial schedule T2; T1. The effect is the same as if the read(B) in T2 preceded the write(B) in T1, though the opposite was actually the case.
3. This approach to concurrency control is called a MULTIVERSION SCHEME. It uses timestamping in two ways:
a) Each transaction has a timestamp, as in previous schemes, recording its START time. The scheme will ensure that any schedule is equivalent to a serial schedule in which $T_i$ precedes $T_j$ if $TS(T_i) < TS(T_j)$.
b) Each VERSION of each item has two timestamps, as in previous schemes:
(1) $W$-Timestamp(Q) is the timestamp of the transaction that wrote it.
(2) $R$-Timestamp(Q) is the highest timestamp of any transaction that has read this version of item Q.
c) When a transaction T does a read for some item Q, the relevant version of the item is the one with the greatest $W$-Timestamp such that $W$-Timestamp(Q) $\leq$ TS(T).
d) When a transaction T does a write for some item Q, but a “later” transaction has already read Q, it must be rolled back and restarted. (In fact, cascading rollback is now possible).
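The version-selection rule in (c) and the rollback test in (d) can be sketched as follows. The dict-based version format is invented for illustration, and we assume at least one initial version with W-Timestamp 0 always exists:

```python
def mv_read(versions, ts):
    """Multiversion read: among the versions of an item (each a
    dict with 'w_ts', 'r_ts' and 'value'), pick the one with the
    greatest W-timestamp <= the reader's timestamp, and bump that
    version's R-timestamp."""
    candidates = [v for v in versions if v["w_ts"] <= ts]
    version = max(candidates, key=lambda v: v["w_ts"])
    version["r_ts"] = max(version["r_ts"], ts)
    return version["value"]

def mv_write_allowed(versions, ts):
    """A write by a transaction with timestamp ts must be rolled
    back if the version it would supersede has already been read
    by a later transaction."""
    version = max((v for v in versions if v["w_ts"] <= ts),
                  key=lambda v: v["w_ts"])
    return version["r_ts"] <= ts
```

So a reader with timestamp 3 sees the old version while a reader with timestamp 7 sees the version written at time 5; and once timestamp 7 has read that version, a would-be writer with timestamp 6 must be rolled back.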
E. Snapshot Schemes
1. Another approach that has become fairly widely used is called snapshot isolation.
2. In effect, each transaction is given its own version of the database to use, via a multiversion approach. Each transaction is isolated from all others: its writes are deferred, being recorded in memory until the transaction commits, at which point they are all applied atomically to the actual database.
3. At the time a transaction becomes partially committed, validation is used to determine if the transaction is allowed to commit or must be rolled back if its actions were inconsistent with the actions of some other committed transaction - e.g. it wants to write something that another committed transaction has already read an earlier version of.
4. There are a fair number of complexities that must be dealt with to ensure serializable results in all cases - as the book discusses. But this approach has gained popularity because it maximizes opportunities for concurrency while conflicts that force rollbacks are - in fact - relatively rare.
IV. Further Issues (Omit if not enough time)
A. Concurrency Issues Arising from Delete and Insert
1. Thus far, we have dealt with concurrency-related issues arising from the operations of read() and write() on some item in the database.
a) The SQL UPDATE statement always does a write, and often does a read as well (e.g. SET SALARY = SALARY*1.05). The SELECT statement does a read, which may be preparatory to a write either to the same item or another item. The same is true for corresponding operations using other data models.
b) What about the DELETE and INSERT operations?
2. The book shows that a delete operation is similar in principle to a write operation in terms of the table row being deleted.
3. The book shows that an insert operation is similar in principle to a write operation in terms of the table row being inserted.
4. However, another issue that arises with delete or insert is the phenomenon of “phantom rows”.
a) Consider a query like the following, which - in general - looks at multiple rows:
```
select count(*) from borrower
where last_name = 'Aardvark'
```
PROJECT
At the implementation level, the DBMS uses some sort of iterator to iterate over the relevant rows. (This is similar in principle to the cursor of embedded SQL, but internal to the DBMS)
b) Now suppose a concurrent transaction does an insert (or delete) of a row with name = 'Aardvark', or updates an existing row to change the name to Aardvark or to change 'Aardvark' to something else.
c) If the insert, delete or update happens ahead of the cursor (e.g. before it gets to the row in question), the operation impacts the count. But if the insert, delete or update happens behind the cursor (e.g. after it has passed the row in question), the operation has no impact on the count.
d) In terms of serialization order, if the insert, delete or update happens ahead of the cursor, then the insert/delete/update transaction in which it occurs must be BEFORE the count transaction; if the insert, delete, or update happens behind the cursor, then the insert/delete/update transaction must be AFTER the count transaction. Our concurrency control mechanism must take this into account in terms of locking or testing for serializability.
However, in the case where the insert/delete/update occurs behind the cursor, the changes to the row in question do not actually affect the way we count the row in the count transaction, so we have a case where the two transactions may conflict over a row that is not even part of one of them! (A “phantom row”).
e) How can we expect a transaction to lock (or test the timestamps) of a row that is not even part of it?
(1) One solution involves allowing an additional kind of lockable entity - the right to insert, delete, or update rows in a table.
a) The insert, delete, or update transaction must obtain an exclusive lock on this right before doing its operation.
b) The count transaction, or any transaction like it that scans rows in the table, must obtain a shared lock on this right to ensure that no rows are inserted, deleted, or updated while it is iterating over the table.
c) Note that this lock is NOT the same as a multi-granularity lock on the table itself. We do not need to lock other transactions totally out of the table, if they do not depend on the table not changing under them (e.g. they are accessing a specific row based on primary key.)
(2) The book discusses an alternative based on locking B+ tree index leaves rather than the whole table, which allows more concurrency.
B. Weak Levels of Consistency
1. We have looked at a number of techniques for ensuring that any actual schedule produced by executing transactions concurrently is serializable. Basically, such techniques rely on one or the other of the following strategies:
a) Making a transaction wait for some lock to be released before it can proceed.
b) Rolling back a transaction that attempts an operation that would otherwise make the resulting schedule non-serializable.
Of necessity, either sort of measure reduces the rate at which transactions can be processed - either by making a transaction wait, or by making it start over. Thus, concurrency control and maximizing throughput can conflict with one another.
2. Oftentimes, there will be transactions for which an approximately correct answer is close enough. In such cases, we may choose to forgo strict enforcement of serializability to improve throughput. The book discusses this at some length, but we will not discuss this further, except to recall that SQL incorporates an ability to specify that a given query is to be run with weakened serializability requirements being enforced.
Db2 allows the specification of different serializability levels at the level of individual packages of compiled code in the database. The terminology and precise levels of isolation provided are a bit different from those discussed in the book - see the manual for details. (You will see output about this when binding packages to the database for your programming project.)
C. Locking and Index Structures
1. All of our discussion of locking has focussed on issues related to the actual locking of data in the tables themselves. In the case of a table that has one or more indexes, we also need to consider the impact of various operations on the index structure.
2. In the case of a dense index, every insert or delete operation necessarily impacts the index as well. Updates to a column that is the basis for an index also impact the index. (In effect, an update involves deleting the old entry and inserting a new one.)
3. In the case of a sparse index, insert, delete and update operations may or may not impact the index.
4. The book discusses locking issues involved with indexes. We will not pursue these here.
Porting Defensive Aid Suite to Vehicle Control System
Jonas Jonsson
Luleå University of Technology
MSc Programmes in Engineering
Computer Science and Engineering
Department of Computer Science and Electrical Engineering
Division of EISLAB
Mar 22nd, 2010
This Master’s Thesis describes an initial attempt to reimplement the Defensive Aid Suite functionality used in the latest version of Combat Vehicle 90 from BAE Systems Hägglunds AB. The current implementation uses an internally developed software platform based on VxWorks from WindRiver. Because this software platform is only used by the Defensive Aid Suite, BAE Systems Hägglunds AB wishes to retire it. The implementation described in this report will instead be based on the currently used software platform, which is based on Rubus OS from Arcticus Systems AB.
The work presented in this thesis was done during fall 2009 at BAE Systems Hägglunds AB, Örnsköldsvik. I would like to thank the department of embedded systems for the great time, Magnus Berglund for the fruitful discussions and especially my supervisor Jimmy Westerlund for aiding me during my work.
Jonas Jonsson
5.5.1 Serial Spy
5.5.2 Simulated Serial HAL Windows
CHAPTER 6 – EVALUATION
6.1 The Work
6.2 Reflection
6.3 Future Development
CHAPTER 1
Introduction
1.1 Defensive Aid Suite
Combat Vehicle 90, CV90, is a family of armoured combat vehicles developed by BAE Systems Hägglunds AB. The latest version of CV90, CV9035 MKIII, is equipped with a feature known as Defensive Aid Suite, DAS. It includes a Laser Warning System, LWS, that helps the crew elude threats which use lasers to guide weaponry to their targets.
The laser warning system is bought from a subcontractor while the system responsible for countermeasures is developed by Hägglunds.
During initial development of DAS, it became clear that the intended hardware platform lacked support for floating-point arithmetic, which made it too slow to be usable. However, suitable hardware was available from a previous project, though it was based on a processor which at the time was unsupported by Vehicle Control System, VCS. Therefore, both the software platform, known as Extended Control System, ECS, and the hardware were reused from the previous project in order to cut costs and meet the deadline. Meanwhile, support for this hardware was added to VCS.
1.2 Aim
This thesis aims to reimplement DAS on VCS. The focus lies on the communication stack used to interact with the LWS. In the current solution this part is provided by the subcontractor.
1.3 Scope
A complete reimplementation of DAS is beyond the scope of this thesis. This work is limited to communication with the laser warning system and core countermeasures functionality.
2.1 Defensive Aid Suite
In figure 2.1 an overview of the currently deployed DAS system is shown. It consists of multiple laser sensors connected to a controller. By processing sensor data, the controller can determine the direction of the laser and whether it is considered a threat or not. When a threat is detected, a message is sent to IM16\(^1\) using a serial communication interface.
What actions to perform when a threat is detected is determined by the IM16. Some actions like storing information about the threat in a log file and
\(^1\)IM16 is the internal name for the micro controller node running the software.
displaying it on the Vehicle Information System, VIS, are always performed, while others are configurable by the crew.
### 2.2 Extended Control System
During development of a previous project, neither hardware nor software supported floating-point arithmetic, Ethernet, IP, TCP, UDP or Corba. A new hardware and software platform was therefore developed around a PowerPC processor and VxWorks.
Initially it was intended that ECS would be a more powerful alternative to VCS, but as time went on, more features were added to VCS, making ECS redundant. Today, VCS can run on this hardware, giving it floating-point arithmetic. It also supports Ethernet, IP, TCP and UDP, but not Corba since it is no longer in use.
### 2.3 Vehicle Control System, VCS
VCS is a software platform for real-time applications used and developed by Hägglunds. VCS forms the basis for applications that control and supervise functions in the vehicle. These functions range from rather simple things such as brake lights to more complex things like the rear ramp on CV90, which involves many conditions, several actors and more than one node.
VCS and the application are built in a monolithic way. This means that VCS and the application are compiled into a single binary. This single binary is then the only code executed on the hardware.
Both applications and VCS follow a set of coding rules designed by Hägglunds but based on MISRA-C:2004\(^2\)[1]. For example, one rule states that all memory must be statically allocated. This reduces the risk of memory leakage, which could lead to a situation where the system runs out of memory.
In figure 2.2 an overview of the major layers of VCS is shown. An application is split into three different parts: Common units, Product family units and Product specific units. This split makes it easier to reuse code between different projects. Common units are general parts that can be used in different products, such as a navigation system. Product family units are parts only relevant to a specific product family like the CV90. Product specific units are parts only relevant to a specific variant of a product, such as a customer-specific communication system. Applications use an Application Programming Interface, API, to access features provided by VCS OS.
---
\(^2\)MISRA-C:2004 is a guideline for the use of the C language in critical systems.
VCS OS includes support for commonly used features such as CAN\(^3\), digital and analog I/O, Ethernet, TCP/IP, log files, basic graphic primitives, RS-232, RS-422 and many of the functions usually provided by the standard C library. The standard C library is not used due to potential bugs and undefined behaviour in different implementations. Thread support is provided by Rubus OS which is further described in section 2.3.1.
The basic graphic primitives are used to create an end user interface. This interface can be used by the crew of a vehicle to configure and monitor certain aspects of the system.
CAN and basic I/O are rarely used directly. Instead, an abstraction layer with signals is used. Applications define what signals they use, and VCS OS hides the details of reading and writing to I/O pins or creating CAN frames.
A textual user interface, known as Monitor, is used to interact with the application as well as with VCS OS itself. It provides menus and commands. A menu is a constantly updated view of internal information of the system and can be used to monitor signals, threads or current TCP connections. A command is used to interact with the system, for example to change values or turn off certain features. This interface can be extended with specific commands and menus by applications through the API. Monitor is used during the whole lifetime of the system and is a very valuable tool during development and real-world testing.
VCS OS uses a hardware abstraction layer, HAL, to ease porting to other hardware platforms. Currently there are six supported platforms: MPC5567, MPC8250, XC167, C167 and a simulated environment supporting both Windows and Linux.
\(^3\)Controller Area Network is a message based protocol, designed primarily for the automotive industries but is used by others as well.
2.3.1 Rubus OS
Rubus OS is a real-time OS from Arcticus Systems AB. It consists of three kernels and basic generic functionality used by them.
The red kernel manages red threads, which are strictly time-driven. These execute at a predefined periodicity with known execution times. This information is used when all red threads are ordered into an offline-created execution schedule. A node can have multiple schedules to switch between, of which only one can be active at a time. The kernel can optionally measure best, worst and average execution times for threads, which can be used to analyse performance.
The green kernel is used to handle external interrupts, which are defined with a period time and a priority. This information, together with the execution time of the thread, is used when building the schedule for red threads. Green threads can preempt both red and blue threads. Like the red kernel, it is possible to measure best, worst and average execution times of the threads.
The blue kernel is used for event-based threads and runs when the red kernel is idle. The threads are defined offline with a specific priority and can be started and stopped at run-time. Two blue threads always exist: the blue idle thread with the lowest priority and the blue kernel with the highest priority. Other blue threads can use any priority between these.
Blue threads support basic synchronization primitives (mutexes and semaphores), signaling, message queues and time-based waiting. A blue thread can wait for any of these events to occur, and both red and green threads can signal or post messages to a blue thread.
2.3.2 Rubus Integrated Component Environment
Component based development is a way to develop applications by assembling pieces of software units, components, into a complete system. A component is a unit which encapsulates functionality with a defined interface. Rubus Integrated Component Environment, ICE, is a graphical developer tool used to create components or complete systems according to Rubus Component Model version 3, RubusCMv3[2].
This model’s basic building block is called Software Circuit. A circuit consists of an interface and one or more behaviors, each with its own entry function. Only one behavior can be active at a time, and the active behavior can be changed at runtime. The interface is a description of ports used to interact between circuits. There are two supported types of ports: data ports for data flow and triggering ports for control flow. A circuit receives data through its input data ports and produces output to its output ports.
Multiple circuits can be connected and encapsulated in an assembly or composition. These are ignored by Rubus when the model is deployed and thus only provide abstraction and a hierarchical structure. The difference between the two is that a composition can be split and parts of it deployed on different nodes. An assembly, on the other hand, is indivisible and must be deployed on a single node. A composition can be seen as a grouping of related objects and is useful when creating a library of reusable objects.
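As a rough illustration of the Software Circuit idea, a circuit can be thought of as data ports plus a replaceable entry function. The C sketch below is purely illustrative and does not reflect the actual RubusCMv3 API:

```c
/* Hypothetical rendering of a Software Circuit: one input data port,
 * one output data port, and the currently active behavior's entry
 * function. Real circuits may have many ports and several behaviors. */
typedef struct {
    const int *in;   /* input data port */
    int *out;        /* output data port */
    void (*behavior)(const int *in, int *out);  /* active behavior */
} circuit_t;

/* An example behavior: read the input port, write twice it to output. */
static void double_it(const int *in, int *out) { *out = *in * 2; }

/* Activating a triggering port runs the circuit's active behavior. */
static void trigger(circuit_t *c) { c->behavior(c->in, c->out); }
```

Swapping the `behavior` pointer at runtime corresponds to changing the circuit's active behavior.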
2.3.3 VCS Config
VCS Config is a graphical tool developed by Hägglunds which is used to create configurations for nodes. What to configure is controlled by something called a baseline. It consists of an XML file describing how the configuration tree looks and behaves in VCS Config. Accompanying it is a set of Python scripts used to validate the configuration and to generate files.
Currently there is only one baseline in use, for VCS OS, but there is ongoing work to incorporate configuration of other parts of the application into baselines as well. This is a consequence of splitting the application as described in section 2.3: in order for the parts to be reusable, there must be an easy way to configure them.
The configuration of VCS OS and its HAL are linked in such a way that an application can be compiled for multiple targets at once. If a node needs a serial connection, for example, a serial channel is created in VCS OS. It includes the name, size of buffers and communication related options like baud rate. Each target then links this channel to a serial port. This can be either a real physical port, a simulated one or a mixture thereof as described in section 5.5.2.
The baseline for VCS OS is used to generate Visual Studio solution files and to build Rubus ICE models, in addition to the source and header files with the actual configuration. The solution file contains a project for each configured target with an external compiler, tool chain and build system. Not depending on external tools makes VCS easier to distribute among the developers.
CHAPTER 3
Current implementation
3.1 Overview
In figure 3.1 an overview of the current implementation is depicted. It is based upon a software platform developed by Hägglunds known as Extended Control System, ECS. More information about ECS can be found in section 2.2.
The main application is built using a mixture of C and C++. Most of the code, which is developed by Hägglunds, uses C++, but the communication stack that is provided by the subcontractor uses C.

Figure 3.1: Overview of the current implementation; the grey part is supplied by the subcontractor.
3.2 Communication Stack
The provided communication stack is rather advanced, with many layers performing specific tasks such as queueing, calculating checksums or routing. This design provides a somewhat easy way to add, remove or change any of the layers. The existence of a routing layer with support for multiple serial channels suggests that it is supposed to be used in a much larger network rather than the simple point-to-point connection currently used. Most layers contain statistics that can be used to supervise the overall health of nodes and the network as a whole.
Documentation on how to use the communication stack was provided along with the actual code. But since it was never intended that Hägglunds would implement this stack on their own, it is missing important details about the protocol.
3.3 LWS Handler
On top of the provided communication stack is a message-based protocol used by the LWS. It consists of a number of request and result messages, of which some result messages can be sent without a prior request. Each message contains a header and data of varying length.
The LWS Handler manages this protocol and maintains an overall status of the LWS. Messages related to incoming threats are parsed, stored in a log file and passed along to the Threat Manager.
3.4 Threat Manager
The Threat Manager maintains a priority queue with current threats. A threat can be updated based on new information, removed if it is old, or merged with another threat when the two are very close; a new threat can also simply be added. The threat with the highest priority is passed along to the Defence Manager.
In addition to queuing, the Threat Manager also provides VIS with information used to display an overview of the current threats.
3.5 Defence Manager
The configuration set by the crew affects how the Defence Manager responds to the most prioritized threat. The response is controlled by a state machine, handling possible user interaction and dependencies on other systems like smoke grenade launchers.
3.6 Config Manager
To ease configuration management in other parts of the application, the Config Manager maintains the current configuration. The configuration can be changed in two ways, either by the application itself or by other nodes using CAN.
For example, when a smoke launcher is loaded, the Config Manager receives a CAN message and updates the configuration.
CHAPTER 4
Communication Protocol
4.1 Overview
As there was no documentation about the protocol used for serial communication, it was necessary to investigate how the protocol worked. The information provided here is based on the API documentation and source code for the current implementation. The communication is divided in two parts: a generic serial communication bus protocol, described here, and an application-specific protocol for the LWS, described in section 5.2. Based on the design described below, it seems likely that the generic part is used in many other applications.
Communication with the LWS uses a common serial standard. On this connection, Serial Line Internet Protocol, SLIP[3], is used. It is a very simple protocol for encapsulating payloads of varying length on a serial link. It consists of an END marker, decimal number 192, which marks the end of a transmission. Besides this, it also includes a way to transmit the END marker within the payload. Historically, this protocol was designed to work with modem connections over plain old telephone lines. These connections were not perfect and suffered from noise, which could cause erroneous bytes corrupting the packet. Beginning every message with an END marker would flush any erroneous bytes received during idle time. This method is used by the LWS.
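The SLIP framing described above can be sketched in C. The constants come from RFC 1055; the function name and buffer handling are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

#define SLIP_END     0xC0  /* 192: marks the end of a frame */
#define SLIP_ESC     0xDB
#define SLIP_ESC_END 0xDC  /* ESC ESC_END encodes a literal END byte */
#define SLIP_ESC_ESC 0xDD  /* ESC ESC_ESC encodes a literal ESC byte */

/* Encode len payload bytes into out; returns the encoded length.
 * out must hold at least 2*len + 2 bytes (worst case). */
size_t slip_encode(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = SLIP_END;  /* leading END flushes line noise, as the LWS does */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == SLIP_END)      { out[n++] = SLIP_ESC; out[n++] = SLIP_ESC_END; }
        else if (in[i] == SLIP_ESC) { out[n++] = SLIP_ESC; out[n++] = SLIP_ESC_ESC; }
        else                          out[n++] = in[i];
    }
    out[n++] = SLIP_END;
    return n;
}
```

The decoder does the inverse: it accumulates bytes until an END marker, translating the two escape sequences back to their literal values.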
The SLIP payload contains an application specific payload and a frame header. This is shown in figure 4.1.
4.2 Frame Header
At the end of the application payload is a 10-byte frame header which is used by the communication stack. An overview of the message is shown in figure 4.1 and the frame header is further described below.
**Packing**
Sometimes it is desirable to transmit data stored in data types larger than needed. The API documentation describes such a case, where a Digital Signal Processor, DSP, stores bytes word-aligned, so for each byte, 3 more are wasted. If packing has a value of 8, it means that the stack should only use the least significant byte. A value of 16 means it should use the two least significant bytes, and a value of 32 effectively means no packing.
**Idle Request, IDR**
This field is split in two parts: control type and sequence number. The control type is a field of the two most significant bits describing what type of message this is. A value of 0 or 1 means it is an unacknowledged message, 2 means it is a normal message, while 3 means it is an acknowledge message. The last 6 bits are used as a sequence number and are only used in normal and acknowledge messages. Every normal message sent is supposed to be acknowledged with an acknowledge message with the same sequence number. If no acknowledge is received after a certain time, the message should be resent with the same sequence number.
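The IDR layout described above (two control bits, six sequence bits) can be expressed directly in C; the helper names are illustrative:

```c
#include <stdint.h>

/* Control types from the text: 0/1 = unacknowledged, 2 = normal,
 * 3 = acknowledge. */
enum { IDR_UNACKED_0 = 0, IDR_UNACKED_1 = 1, IDR_NORMAL = 2, IDR_ACK = 3 };

/* Control type lives in the two most significant bits. */
static inline unsigned idr_ctrl(uint8_t idr) { return idr >> 6; }

/* Sequence number lives in the remaining six bits. */
static inline unsigned idr_seq(uint8_t idr)  { return idr & 0x3F; }

/* Build an IDR byte from a control type and sequence number. */
static inline uint8_t idr_make(unsigned type, unsigned seq)
{
    return (uint8_t)(((type & 0x3u) << 6) | (seq & 0x3Fu));
}
```

For example, a normal message with sequence number 17 would carry the IDR byte 0x91.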
**Source Node**
Denotes the node that sent the message, the LWS identifies itself with a value of 1 and the application is supposed to use 0. This is specified in the source code as well as in documents provided by the subcontractor.
**Source Code**
Denotes the part of the node that sent the message. The value 0 is reserved for stack-to-stack communication while 1-255 are available for the application. Values 0 and 1 are the only values used in this implementation.
**Destination Node**
Denotes the node that is supposed to receive the message. In the current implementation, this is used to route a message in a network of multiple nodes. The new implementation only uses the same values as in Source Node.
**Destination Code**
Denotes the part of the application that is supposed to receive the message. Similar to Source Code.
**Frame Check Sequence, FCS**
To reduce the risk of erroneous messages, a checksum is used. The checksum is calculated over the application payload as well as the preceding 8 bytes of the frame header. The checksum is a standard cyclic redundancy check known as CRC-16-CCITT, which is also used in other protocols, such as PPP in HDLC-like Framing[4].
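For reference, a bitwise CRC-16-CCITT can be written as below. The initial value and bit ordering used by the actual stack are not stated in the thesis; 0xFFFF with most-significant-bit-first processing is the common "CCITT-FALSE" convention and is an assumption here:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-16-CCITT: polynomial 0x1021, MSB first, initial value
 * 0xFFFF (the "CCITT-FALSE" variant; the variant used by the LWS stack
 * is assumed, not confirmed by the thesis). */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;   /* fold next byte into the top */
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

With this convention, the standard check string "123456789" yields 0x29B1. A table-driven version would be preferred in production code for speed.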
The new implementation is divided in a similar way as the current implementation, described in chapter 3, but tries to use Rubus CMv3, described in section 2.3.2.
5.1 Communication Stack
The communication stack consists of two threads, a red and a blue one. The red thread can issue three signals that the blue thread is waiting for: a receive signal if there is data available, a transmit signal in case of an ongoing transmission, or a timeout signal.
The blue thread is an infinite loop that most of the time is waiting for a signal. The important signals, receive and transmit, are described further down. A timeout signal means that a message was sent but no acknowledge was received. The message will be sent again, or removed if it has been sent too many times.
5.1.1 Message Reception
Handling of received data begins with the SLIP receive function provided by VCS OS. It will read available data from a serial channel and remove SLIP related information. This is a non-blocking operation since it will not wait for more data if the message is incomplete. Instead it will return a state variable which must be passed as an argument each time the function is called. When a whole message has been received it will be further processed by the stack.
Two important checks are done before the message is parsed. First, the message length is ensured to be large enough for the header. Secondly, the FCS must be correct. If these are correct, further checks are done to ensure that the source node and destination node are correct. If any of these checks fails, the message will be discarded. The packing is only verified for data messages, as it was discovered during testing that acknowledge messages had invalid values. It makes sense for the sender to ignore this field, since it is useless to the receiver for acknowledge messages.
When a data message is received, its payload is extracted and queued in the message receive queue. If a normal message is successfully queued, an acknowledge message is queued for transmission. If the queue is full then no acknowledge will be sent and the sender should retry. Section 5.1.3 contains more details about the queues.
When an acknowledge message is received, its sequence number is compared to that of the last transmitted message. If they are equal, it means that the transmitted message has been received and can be removed. The transmit state described in section 5.1.2 is also updated to reflect the success of the transmission.
5.1.2 Message Transmission
There are two types of messages supported by the stack, normal messages and acknowledge messages, each with its own queue. When a normal message is sent, a timeout value is set. If an acknowledgement is not received during that time, the message will be resent a predefined number of times before being discarded. Only one normal message can be waiting for an acknowledgement at a time.
The stack will try to send normal messages before acknowledge messages and it will transmit acknowledge messages while it waits for an acknowledgement of a previously sent normal message.
The SLIP transmit function provided by VCS OS is non-blocking which forces the stack to handle partial transmissions. This is solved with a global transmit state that indicates if a transmission is partially done. The red thread will then signal for continued transmission if needed.
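The one-outstanding-message policy with timeouts and retransmissions is a classic stop-and-wait scheme. A minimal sketch of the bookkeeping (hypothetical names; the actual VCS code is not shown in the thesis):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_RETRIES 3  /* placeholder for the stack's predefined retry count */

/* One-outstanding-message transmit state, mirroring the stack's policy. */
typedef struct {
    bool    awaiting_ack;   /* a normal message is in flight */
    uint8_t seq;            /* sequence number of that message */
    int     retries_left;
} tx_state_t;

/* Called on a timeout signal: returns true if the message should be
 * resent (with the SAME sequence number), false if it is discarded. */
bool on_timeout(tx_state_t *s)
{
    if (!s->awaiting_ack)
        return false;
    if (s->retries_left-- > 0)
        return true;          /* resend, keep s->seq unchanged */
    s->awaiting_ack = false;  /* give up on this message */
    return false;
}

/* Called when an acknowledge arrives: frees the slot only when the
 * sequence numbers match. */
void on_ack(tx_state_t *s, uint8_t seq)
{
    if (s->awaiting_ack && s->seq == seq)
        s->awaiting_ack = false;
}
```

Because only one normal message is ever outstanding, a single sequence number and retry counter are enough; the acknowledge queue can still be drained while this slot waits.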
5.1.3 Message Buffering
In order to buffer messages that pass through it, the stack uses three queues: one for received messages, one for unsent messages and one for unsent acknowledge messages. The queues are implemented as circular buffers using a statically allocated array of messages. Each queue is realized as a structure containing four members: a boolean flag indicating whether it is full, indexes for the start and end positions, and the actual message array. The flag is used to distinguish between an empty and a full queue, both of which are indicated by the start and end indexes being equal. An alternative would be to let the indexes be equal only when the queue is empty, but in this case that would waste more memory, since a message is larger than the flag.
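The queue structure described above can be sketched as follows (Python rather than the C used in the stack; the names are illustrative):

```python
class MessageQueue:
    """Circular buffer over a fixed array, using a boolean flag to tell a
    full queue from an empty one when start == end (as in section 5.1.3)."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.start = 0      # index of the oldest message
        self.end = 0        # index of the next free slot
        self.full = False

    def put(self, message):
        if self.full:
            return False                    # queue full: caller must retry
        self.slots[self.end] = message
        self.end = (self.end + 1) % len(self.slots)
        self.full = self.start == self.end  # end wrapped onto start: now full
        return True

    def get(self):
        if self.start == self.end and not self.full:
            return None                     # empty queue
        message = self.slots[self.start]
        self.start = (self.start + 1) % len(self.slots)
        self.full = False
        return message
```

Note how the flag disambiguates the `start == end` case in both `put` and `get`, so all capacity slots can hold messages.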
5.2 LWS Manager
While the communication stack only transports data, the real work is done by the LWS Manager, which implements the specific protocol carried by the communication stack and is used to interact with the LWS. This protocol is well documented compared to the one used to implement the communication stack.
The protocol consists of a number of request and result messages. Each request should be answered with one result message. Besides this, the LWS can send result messages without a prior request, which is important for certain types of messages: threat detection, threat timeout, status and continuous built-in tests all send unsolicited messages. Each message is prepended with a header containing a command identifier, a sequence number\(^1\), the length of the payload and a time stamp. The sequence number is used by the requester to match request and result. For unsolicited messages the sequence number is zero, to indicate that the message was not requested.
Parsing of messages is divided in two steps: header and payload. The first step consists of a single circuit that repeatedly tries to receive a message from the communication stack, parses the header, verifies the length and writes the payload and command identifier to output ports.
The second step involves one circuit for each message type. Each circuit parses and acts upon the payload supplied with the message. Threat-related messages are passed to the Threat Manager described in section 5.3. Messages related to the status of the LWS and its sensors could be used to evaluate whether it is functional or not.
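The first parsing step can be sketched as follows. The field widths and byte order below are illustrative assumptions; the thesis only specifies that the header carries a command identifier, a sequence number, a payload length and a time stamp:

```python
import struct

# Assumed layout (not from the thesis): 1-byte command, 2-byte sequence
# number, 2-byte payload length, 4-byte time stamp, little-endian.
HEADER = struct.Struct("<BHHI")

def parse_message(raw):
    """First parsing step: split a raw message into header fields and
    payload, verifying the advertised payload length."""
    if len(raw) < HEADER.size:
        return None
    command, sequence, length, timestamp = HEADER.unpack_from(raw)
    payload = raw[HEADER.size:]
    if len(payload) != length:
        return None                   # length check failed: drop the message
    unsolicited = sequence == 0       # zero marks a message nobody requested
    return command, sequence, timestamp, unsolicited, payload
```

The command identifier returned here is what would select the per-message circuit of the second step.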
5.3 Threat Manager
The two threat-related messages received from the LWS Manager are incoming threat and delete threat. The latter is sent by the LWS when a threat is no longer detected, while the former is sent when new threat information is available. This can be a new threat or an old threat with updated information.
The Threat Manager maintains a priority queue of active threats. The highest-priority threat is used by the Defence Manager described in section 5.4, while the three highest-priority threats are presented to the crew through VIS.

\(^1\)This should not be confused with the sequence number described in section 4.2.
Besides keeping track of threats, the Threat Manager can also merge threats that are close to each other. These merges result in small adjustments of countermeasures in order to improve effectiveness.
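The priority queue of active threats can be sketched with a heap plus lazy deletion (Python; the thesis does not specify the data structure, so this is one plausible realization with illustrative names):

```python
import heapq

class ThreatManager:
    """Sketch: keeps active threats ordered by priority and supports the two
    messages from the LWS Manager, incoming threat and delete threat."""

    def __init__(self):
        self.heap = []        # entries of (negated priority, threat id)
        self.active = {}      # threat id -> current priority

    def incoming_threat(self, threat_id, priority):
        # A new threat, or an old threat with updated information.
        self.active[threat_id] = priority
        heapq.heappush(self.heap, (-priority, threat_id))

    def delete_threat(self, threat_id):
        # Stale heap entries are skipped lazily in top_threats.
        self.active.pop(threat_id, None)

    def top_threats(self, n=3):
        """The n highest-priority threats, e.g. for presentation through VIS."""
        seen, result = set(), []
        for neg_priority, threat_id in sorted(self.heap):
            current = self.active.get(threat_id)
            if current is None or current != -neg_priority or threat_id in seen:
                continue               # deleted or superseded entry: skip
            seen.add(threat_id)
            result.append(threat_id)
            if len(result) == n:
                break
        return result
```

`top_threats(1)` would feed the Defence Manager and `top_threats(3)` the crew display.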
5.4 Defence Manager
The Defence Manager handles countermeasures based on information about the highest-priority threat and the crew configuration. Due to limited time this part has not been implemented. A suggested implementation cannot be discussed in detail without revealing restricted information.
5.5 Other Parts
5.5.1 Serial Spy
[Figure: the Serial Spy runs on the host computer, sitting between the LWS and the DAS Application and forwarding the serial traffic between them.]
Figure 5.1: Overview of Serial Spy usage.
During the first communication attempts with the LWS it became evident that it would be necessary to spy on the serial communication. After failing to find an existing tool, it was decided to develop one. An overview of the solution is depicted in figure 5.1. It consists of an application that forwards information between two serial devices while at the same time showing it to the user.
The application is built using Python\(^2\) and a module called pySerial\(^3\). The application consists of two threads, one for each device, each reading from one device and writing to the other. The information passed between the devices is then parsed and shown. It is shown in color using an ANSI escape sequence\[5\] for easier reading. The default textual console in Windows, the Windows Command Processor, does not understand this. Instead a terminal emulator known as RXVT\[4\] was used; running it on Windows requires Cygwin\[5\].

\(^2\)Python is a dynamic programming language, see [http://www.python.org/](http://www.python.org/).

\(^3\)pySerial is a library that encapsulates access to serial ports in an easy way, see [http://pyserial.sourceforge.net/](http://pyserial.sourceforge.net/).
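The two-thread forwarding design can be sketched as follows. The real tool opens two pySerial ports; to keep this sketch self-contained it accepts any file-like objects with `read`/`write`, and all names are illustrative:

```python
import threading

def forward(src, dst, show, stop):
    """One direction of the Serial Spy: copy bytes from one device to the
    other while showing them to the user."""
    while not stop.is_set():
        data = src.read(1)
        if not data:            # EOF (or, on a serial port, a read timeout)
            break
        dst.write(data)
        show(data)              # e.g. parse and print in color

def spy(dev_a, dev_b, show):
    """Run one forwarding thread per direction, as the thesis describes."""
    stop = threading.Event()
    threads = [
        threading.Thread(target=forward, args=(dev_a, dev_b, show, stop)),
        threading.Thread(target=forward, args=(dev_b, dev_a, show, stop)),
    ]
    for t in threads:
        t.start()
    return stop, threads
```

With pySerial, `dev_a` and `dev_b` would be `serial.Serial` instances opened with a read timeout so that `stop` is checked periodically.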
5.5.2 Simulated Serial HAL Windows
Development has been performed solely in a simulated environment, which works rather well for initial testing. In order to simulate input and output, a system known as Data Distribution, DD, is used. This is basically a defined way for applications to communicate using shared memory. While this is good for connecting simulated nodes, it is not designed for use with real hardware. It would be possible to write an application that bridges between the shared memory and the hardware. This approach was investigated, but it did not turn out to be a good or easy solution. It was decided that the best solution would be to implement a driver for the simulated HAL that uses a real serial device.
The driver uses the WIN32 serial API[6] to interact with the hardware,
two queues to buffer data and two threads, one that sends and another that
receives data.
When an application sends data, it uses the VCS OS API, which buffers the data before using the Hardware Programming Interface, HPI, to start the transmission. This approach is used on all hardware platforms. The driver then buffers the data a second time and signals a waiting thread, which interacts with Windows in order to send it.
Reception of data is almost the same but reversed. A Windows thread waits for incoming data, buffers it and triggers a simulated interrupt. The interrupt service routine then uses the HPI to buffer the data in VCS OS. The application then polls VCS OS for the data.
Since the Rubus kernel schedules its threads inside one Windows thread, it was necessary to protect the data shared between the three Windows threads as well as between the different Rubus threads.
\[4\]RXVT is a virtual terminal emulator for X11, http://rxvt.sourceforge.net/.
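The protection of the shared buffers can be sketched with one lock and a condition variable (Python here for brevity; the real driver would use Win32 and VCS OS primitives, and all names are illustrative):

```python
import threading
from collections import deque

class SharedBuffer:
    """Sketch of the driver's shared send/receive buffers: every access is
    guarded by one lock, and the consumer thread sleeps on a condition
    variable until the producer signals that data has arrived."""

    def __init__(self):
        self.lock = threading.Lock()
        self.ready = threading.Condition(self.lock)
        self.data = deque()

    def put(self, item):
        with self.lock:               # e.g. the simulated interrupt handler
            self.data.append(item)
            self.ready.notify()       # wake the waiting thread

    def get(self, timeout=None):
        with self.lock:               # e.g. the Windows sender thread
            while not self.data:
                if not self.ready.wait(timeout):
                    return None       # timed out with nothing to consume
            return self.data.popleft()
```

One such buffer per direction would cover the send and receive paths described above.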
CHAPTER 6
Evaluation
6.1 The Work
No one really knew what the current implementation looked like, except that it contained a supplied communication stack and a rapidly developed application. The reason for this lack of knowledge is that it was developed by consultants who have since left the company.
Because of this lack of knowledge, the work began with a couple of days of source code browsing. It was then decided to begin with the communication stack. Before this work could start, an introduction to VCS and VCS Config was needed.
At this point, it became clear that testing would require a real LWS Controller. This created another problem: the code would be running in a simulated environment which had no way to communicate with a real device; it could only communicate with other simulated nodes using DD. After working with DD for a couple of days, the problem was postponed in order to move forward.
When the basics of the protocol had been found, the implementation of the stack began. Later that week, the search for a real LWS Controller started. Finding one was not hard, but it would take six weeks before it became available for use. This was a real setback, but some testing software for the LWS Controller was found. This software looked promising since it appeared to contain a simulated LWS Controller. Despite looking good, most of the software proved rather useless. It was undocumented and did not communicate using the serial protocol. Instead, it used a TCP-based protocol. Even though the application payload appeared to be correct, it was not useful for testing the serial communication.
Almost three weeks after the first search, an older LWS Controller was found. Custom cables had to be made, and a day later testing was about to begin, but the postponed problem with the mixture of simulated and real environments remained. The solution was to create a driver for the HAL used by the simulated environment, connecting it to a real serial device. This implementation is described in section 5.5.2.
When communication was first attempted it did not really work, and debugging was hard, mainly because there was no way to inspect the traffic. A tool, Serial Spy, was developed to inspect the serial communication. This tool is described in section 5.5.1. With this tool the bug was easy to spot: a faulty assumption about the protocol made during the initial investigation.
Now that the basic building block was constructed, work continued with the LWS Manager and Threat Manager. These proved to be rather easy to implement and focus shifted towards the Defence Manager. It turned out to be more complex than first anticipated and was not investigated further.
6.2 Reflection
A lot of time was spent on analysing how the current implementation worked, which influenced the new implementation. Despite the fact that the two implementations use rather different design approaches, object-oriented versus component-based, they share the same overall structure. A completely new design would probably have been better.
A rather complex system like DAS really needs some prior experience with Rubus CMv3 in order to make the best use of it.
While a complete implementation of DAS was not accomplished, the work that has been done will certainly help. This is mostly because the major part that was provided by the subcontractor, the communication stack, has been reimplemented, documented and reduced to about one tenth of its original size. This was accomplished by removing unneeded features such as routing, statistics and compile-time layer configuration.
6.3 Future Development
A future implementation should extend the communication stack with a monitor menu covering some basic statistics, such as the number of discarded and received messages. It also needs more testing under higher load. No sensors were ever used during testing due to limited time and availability of hardware.
The new implementation of the LWS Manager only deals with some basic messages and does not handle diagnostic messages sent by the LWS. This part should be expanded to interact with the diagnostic system used in the vehicle.
REFERENCES
Building Domain Representations from Components¹
Peter Clark and Bruce Porter
Dept. Computer Science
Univ. Texas at Austin
Austin, TX 78712, USA
{pclark, porter}@cs.utexas.edu
Abstract
A major cause of the knowledge-engineering bottleneck is the inability to transfer representational fragments from one knowledge base to another due to the idiosyncratic nature of domain-specific representations. In this paper, we show that representations can be built automatically by composing abstract, reusable components. Moreover, we describe how representations of specific situations, that arise during problem solving, can be assembled ‘on demand’, guided by a query for a particular piece of information. Our work integrates ideas from dynamic memory, conceptual graph theory, compositional modeling and graph unification.
1 Introduction
A major cause of the knowledge-engineering bottleneck [11] is that building one representation contributes little to building the next because each is idiosyncratic. We claim this problem is not inherent to knowledge engineering; rather, it is a limitation of our current technology. Our goal is to develop a new suite of methods suitable for: specifying domain representations as compositions of abstract, reusable components; and assembling these representations, on demand, to answer questions.
Our work is motivated by several observations from our eight year experience building a large-scale knowledge base (KB) in botany [15], and more recently another KB about distributed computing systems:
1. Many domain-specific concepts share similar abstractions, reflected by their representations sharing similar substructures. For example, the general pattern describing production recurs in the representation of many concepts in the botany KB, such as photosynthesis, mitosis, growth, and germination. Moreover, many domain-specific concepts appear to be composites of multiple abstractions. Germination, for example, includes conversion, production and expansion.
2. There isn’t a single ‘right way’ of representing a concept: it can be represented in a variety of ways, depending on the intended use of the representation. Again, these variations share similar structure, and there is systematicity in the way substructures are added and removed to form a variant.
3. In our recent work on the distributed computing KB, we achieved very little transfer from the ‘top levels’ of the botany KB to the computing domain via standard isa-inheritance (henceforth, ‘inheritance’).
¹Support for this research is provided by a grant from Digital Equipment Corporation and a contract from the Air Force Office of Scientific Research (F49620-93-1-0239).
Figure 1: ‘The’ restaurant script can be viewed as a superposition of more abstract structures.
These observations highlight the importance of developing better methods for representing abstractions and composing concepts from them. As an example of a composite concept, consider the canonical RestaurantVisit script (Figure 1). This stereotypical sequence of events can be viewed as a superposition of various domain-independent abstractions: a purchase, a service, filling a container (the diner's stomach), emptying a container (the diner's wallet), and so on. Informally, it appears the script could be built by connecting together, in some way, these more abstract structures, rather than building a representation from scratch. Moreover, if the representation were an assembly of components, it could be usefully disassembled, to focus on (or ignore) selected aspects of it. Under this view, there is no longer ‘the’ RestaurantVisit, but rather there are many variants built from various components.
It is also clear from Figure 1 that composition is more than inheritance, as different abstractions overlap and interact. The RestaurantVisit script is not simply a set of abstract scripts, but rather a single script in which the different abstractions have been merged. Although inheritance can collect feature lists from multiple generalizations, it does not offer a generic method for combining structured information. A more sophisticated composition operator is needed.
As Section 2 describes, building representations from components has received widespread attention. Building on this work, Section 3 describes a representation for components, and a method for assembling them, based on a variant of conceptual graphs and graph unification. We attach a semantics to graph structures such that the syntactic operation of graph unification corresponds to the semantic operation of gathering and simplifying multiple constraints, which achieves our goal of integrating information from multiple abstractions.
Having described how representations can be specified as compositions, Section 4 describes how such specifications can be used for reasoning. Following Clancey [7], we view question-answering as constructing a \textit{situation-specific model}, based on initial information and these specifications, about the particular problem being solved. Section 5 shows how this model can be built in a lazy fashion, generating only those parts required to answer questions, rather than attempting to generate the representation in full detail, which is intractable.
All of this work has been implemented (except where noted) and is currently being used to build a knowledge base and question answering system in the domain of distributed computing.
2 \textbf{Prior Work on Components and Composition}
Assembling representations from components has received considerable attention in AI, as evidenced by this partial, historical account.
\textbf{Frames and Inheritance} Minsky proposed representing information in frame-systems [14]. The components (frames) are organized in a taxonomy, and the composition operator (multiple inheritance) collects the properties of an object, when they are needed, by ascending the hierarchy. However, inheritance does not provide an adequate mechanism for \textit{integrating} this information. For example, if various generalizations provide different values for some relation, inheritance typically returns only the first value, or the set of all of them. What is often needed, however, is some integration of those values into a new, compound value.
This shortcoming of inheritance is well recognized in current software engineering, where software components can inherit methods from multiple abstractions, but cannot combine them. Batory [4] provides an illustration of this in the Booch C++ component library [5]: in which the \texttt{guarded\_bounded\_ordered\_deque} and \texttt{guarded\_unbounded\_unordered\_queue} classes share only one superclass (\texttt{deque}), even though they also use the same concurrency control method (\texttt{guarding}). Consequently, the component writer must repeat the code for guarding in both classes. Multiple inheritance would not help because what is needed is an integration, not a concatenation, of the algorithms for deque and guarding. Code repetition is common in libraries [4], and many researchers are studying additional methods for software reuse [3, 20].
\textbf{Cliches} Motivated by Minsky’s work, Chapman proposed another approach to assembling representations from components [6], which was partially implemented in the Programmer’s Apprentice [16]. Chapman described components, called cliches, as “patterns commonly found in representations” — such as Containment, Propagation, and Resistance — from which more specific representations can be assembled. In the Programmer’s Apprentice, cliches were represented as parameterized templates, and composition involved instantiating a template’s parameters (called roles) with other templates. The composition was quite sophisticated; a template passed as a parameter could be fragmented, and the fragments used in different places in the parent template. Although the design and use of templates was complex, and the composition process had to be carefully guided by the user, the Programmer’s Apprentice was a significant advance.
\textbf{MOPS} A third approach to assembling representations, proposed by Schank, resulted from his dissatisfaction with the rigidity of scripts [18]. Under this proposal, scripts are assembled, as needed, from more general components called Memory Organization Packets (MOPS). Although several projects devised representations for MOPS, they largely used existing techniques for composing them. CHEF [12], for example, reverted to standard methods such as inheritance, and
Compositional Modeling A fourth approach to assembling representations comes from work on compositional modeling, the task of building a model of a physical system adequate for answering questions about the system [10, 13, 17]. In compositional modeling, a component (called a model fragment) contains a set of constraints\(^\text{2}\) with well-defined semantics, rather than being represented in syntactic terms as a partial data structure. For example, a component may represent a resistor by a set of constraints relating the resistor’s voltage, resistance and current. The composition of a set of model fragments is simply the union of their constraint sets, and reasoning involves constraint satisfaction. We similarly view a component as specifying constraints on a final representation, but expressed as a single data structure rather than a set of statements. This allows much of the work for combining constraints to be performed by a syntactic operation which merges such data structures, rather than using a constraint satisfaction engine.
Conceptual Graphs A final approach to assembling representations, which forms the basis of our approach, is the use of conceptual graphs (CGs) [19] and psi-terms [2] to represent components. Both are graph-like data structures containing labeled nodes (concepts) and arcs (relations). Conceptual graph theory provides a variety of logical semantics for different types of graphs. Psi-term theory provides a well-defined syntactic operation called graph unification (or join, in CG terminology) for merging psi-terms together. Our approach is based on combining these two theories: By attaching an appropriate CG-like semantics to psi-terms, we can ensure that the syntactic graph unification operation is also logically valid, and hence will properly integrate information from components together, as we describe below.
3 Components and their Composition
Informally, a component is a description of an object, event, or state, represented as a system of concepts and relations that are packaged together and manipulated as a single unit. Although components might be expressed at any level of abstraction (from concrete instances to class prototypes), we focus on abstract ones, such as container, that can contribute to many domain representations. In the spirit of cliches, the container component describes an object that:
partitions a space into two regions, inside and outside, permitting only two operations, get and put, to transport objects between these regions, such that the objects pass through the container’s portal, subject to size and capacity constraints.
Although there are containers that violate this description (e.g. sponges containing water, disks containing files), this does not reduce the need for reusable representations of typical containers, it only intensifies the need for methods that adapt representations.
We represent components with ‘KM-terms’\(^\text{3}\), a variant of conceptual graphs. Figure 2 illustrates their three most common forms. As with conceptual graphs, KM-terms are graphs of concepts and relations (called sorts and features in psi-terms). Concepts are also organized in a taxonomic hierarchy. The root node of a KM-term (e.g. A in Figure 2b), called its head, must be
\(^2\) also called a set of ‘relations’ [13] or ‘behavior conditions’ [10]
\(^3\) for ‘Knowledge Manager’, the name of the software managing our KB.
Figure 2a: A KM-term denoting three instances and their relationships. A001, B001, and C001 are anonymous identifiers (Skolem individuals) to indicate the nodes are instances. They are under A, B and C respectively in the concept hierarchy.
Figure 2b: A KM-term with a concept as its head denotes universal quantification and a logical implication ("All As are ... "). Note the use of a path for coreference from B to C. Note also ‘UniqueExists y’ means ‘there exists exactly one y in relation r1 to x’.
Figure 2c: A KM-term with the subterm B flagged as definitional, and shown in bold in the graph, denotes a different quantification pattern ("All As in relation r1 to a B are...").
Figure 2: The most common KM-terms and their semantics.
either a concept or a conjunction of concepts — such as {pet & fish} — in the taxonomy. Paths, such as “the r3 of Self” in Figure 2b, express co-reference of sub-terms in a KM-term, where Self denotes the head concept and r3 denotes a relation from it to the shared subterm.
The semantics of KM-terms is based on first-order logic, as illustrated by the examples in Figure 2. Individuals are called instances, and concepts are unary predicates over instances. To denote an instance, we use a unique identifier (Skolem individual), such as A001 in Figure 2a. To denote a concept (i.e. a class), we universally quantify over its members by using its name (rather than a Skolem individual) at the head of the KM-term, as shown in Figure 2b. Other quantification patterns can be expressed by tagging concepts within a KM-term with a special definitional relation (Figure 2c), handled in a special way during inference. We restrict tagging such that tagged concepts are either connected to the head concept or, recursively, to other tagged concepts.
The main challenge for a component-based representation falls on the composition operator: it must be capable of integrating information from various components, not simply collecting it. We implement this capability with graph unification, defined in Table 1, based on a similar algorithm for psi-term unification [2]. Our objective with unification is to syntactically merge graphs in a way that corresponds to merging semantically related information. This requires an additional
PROCEDURE UNIFY (Term1, Term2)
BEGIN
1. Find the head concept of the new term by
(a) unioning the head concepts of Term1 and Term2
(b) removing any concept which has a subconcept in that union
(eg. car&vehicle-->car, pet&fish-->{pet,fish}, {pet,fish}&animal-->{pet,fish})
2. For all relations which are present in both Term1 and Term2:
Unify recursively the values (themselves terms) of those relations
END
Table 1: The Procedure for Unifying Two KM-Terms.
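The procedure in Table 1 can be sketched executably under simplifying assumptions: here a KM-term is a pair (heads, relations), where heads is a set of concept names and relations maps a relation name to another KM-term, and a toy taxonomy supplies each concept's direct superconcepts. This omits paths, coreference and definitional tagging:

```python
# Toy taxonomy, illustrative only: concept -> direct superconcepts.
ISA = {"car": {"vehicle"}, "pet": {"animal"}, "fish": {"animal"}}

def superconcepts(c):
    """All (transitive) superconcepts of concept c."""
    direct = ISA.get(c, set())
    return direct.union(*(superconcepts(p) for p in direct)) if direct else set()

def unify(term1, term2):
    heads1, rels1 = term1
    heads2, rels2 = term2
    # Step 1: union the head concepts, then drop any concept that has a
    # subconcept in the union (car & vehicle -> car; pet & fish stay both).
    union = set(heads1) | set(heads2)
    heads = {c for c in union
             if not any(c in superconcepts(d) for d in union)}
    # Step 2: recursively unify the values of relations present in both
    # terms; relations present in only one term are copied across.
    rels = dict(rels1)
    for r, value in rels2.items():
        rels[r] = unify(rels[r], value) if r in rels else value
    return heads, rels
```

For example, unifying a Car term carrying a color with a Vehicle term carrying an owner yields one Car term with both relations, mirroring Figure 3.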
Figure 3: Unification of the KM-term’s Car and Car01 (a subterm of the term describing Joe).
Note how unification integrates information from the two graphs, eg. Joe’s skills, Car01’s color.
(The names Joe, Computing, Driving, and Red all denote instances).
constraint on KM-terms: a relation in a KM-term has a unique value for its second argument,
given its first argument (ie. \( \forall xyz R(x, y) \land R(x, z) \Rightarrow y = z \)). We denote this using the quantifier
UniqueExists, rather than Exists, as in Figure 2b. Under this interpretation, the syntactic
operation of unification is semantically valid: nodes can be merged because they refer to the same
individual.
Under this semantics, we use partial sets and partial sequences as the fillers for multivalued
relations, such as parts and parents. For example, we write Pete parts: \( \{ \text{head} \} \) rather than
Pete parts: head, where \( \{ \text{head} \} \) denotes a partial set, ie. a unique, but incompletely specified,
set containing head, and possibly other members. Partial sets are unified by unioning their known
members — eg. \( \{ a \} \) and \( \{ b \} \) unify to \( \{ a, b \} \) — and then removing members that subsume others
— eg. \( \{ a, b \} \) would become \( \{ a \} \) if \( b \) is a superconcept of \( a \). This is semantically valid, as the most
general, common specialization of a set containing at least \( a \) and a set containing at least \( b \) is a
set containing at least \( a \) and \( b \). Note we are not unioning two distinct sets, but rather merging
constraints on a single set; the unified set has all the constraints (ie. known members) of the
initial sets.
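Partial-set unification can be sketched directly: union the known members, then remove any member that subsumes (is a superconcept of) another. The taxonomy below is an illustrative stand-in:

```python
# Toy taxonomy, illustrative only: concept -> direct superconcepts.
TAXONOMY = {"head": {"bodypart"}}

def is_superconcept_of(a, b):
    """True if a is a (transitive) superconcept of b."""
    parents = TAXONOMY.get(b, set())
    return a in parents or any(is_superconcept_of(a, p) for p in parents)

def unify_partial_sets(s1, s2):
    """Union the known members, then drop members subsumed by another,
    e.g. {a} and {b} give {a, b}, and {head} and {bodypart} give {head}."""
    union = set(s1) | set(s2)
    return {m for m in union
            if not any(is_superconcept_of(m, other) for other in union)}
```

The result is the most general common specialization of the two partial sets, as argued in the text.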
KM-terms represent sequences of events in a similar way. The partial sequence \( <a, b> \) denotes
a sequence containing \( a \) and \( b \) and possibly other events, and where \( a \) precedes \( b \). Two partial
sequences unify to a non-linear partial sequence. For example, the unification of \( <a, b> \) and
\( <c, d> \) is some sequence of steps containing \( a, b, c, \) and \( d \), in which \( a \) precedes \( b \) and \( c \) precedes
\( d \). Again, this operation combines constraints. While we should represent the resulting non-linear
sequence using a graph, in practice we restrict ourselves to a linear form\textsuperscript{4} by finding the first total ordering consistent with any known constraints (eg. \(\langle a, b, c, d \rangle\)).
It is logically valid to unify two KM-terms that have co-referential nodes. One way in which nodes might be co-referential is as follows: the head of one KM-term is a superconcept of an instance in the other (eg. Car and Car01 in Figure 3). In this case, the nodes can be unified (because a superconcept's graph applies to all its instances – see Figure 2b), thereby collecting the information in the KM-terms into one graph.
4 Assembling Representations from Components
Given this brief description of components (KM-terms) and composition (unification), we now illustrate how domain representations can be built with components. We continue with the RestaurantVisit, as this concept is familiar, is rich in structure, and is known to pose problems for composition [9, 18]. Figure 4a shows a KM-term representing a simplified 'restaurant script', as might typically be encoded in a knowledge base. There is a diner, a waiter, and a meal. During the visit, the diner selects his/her meal, requests it from the waiter, receives it, eats it and finally pays.
We can identify numerous abstractions within this structure — service, purchase, containment, transfer, exchange — and Figures 4b and 4c illustrate components for the first two. A Service is an event in which a client requests, then receives, some item from a server. Similarly, a Purchase is an event in which a buyer receives then pays for some item from a seller. Each component represents a theory about some abstract system of objects. Purchase, for example, tells us that the amount paid is the value of the purchase, the seller is the donor, the seller must have the item in order to sell it, and so on. Note that the abstractions are not specific to restaurants, and hence are general in nature.
We now have an alternative to building a domain-specific representation from scratch: we can specify which abstractions it is composed of, and unify those components. To do this, we must specify a mapping between terms in the domain and terms in each component, to show how the abstraction applies. For example, in the restaurant visit, a RestaurantVisit is a Purchase in which the diner is the buyer and the waiter is the seller. We can think of each component as having an interface, namely the set of terms (eg. buyer, the value of the purchase) it includes; the mapping connects the interfaces of components when their terms are not identical. Figure 5 illustrates this, by specifying the restaurant visit as a composition of components (just showing two, for simplicity).
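The mapping that connects a component's interface to the domain vocabulary (the diner is the buyer, the waiter is the seller) can be thought of as a renaming applied before unification. A sketch, using a flat relation-to-filler encoding that is our assumption:

```python
def apply_mapping(component, mapping):
    """Rename a component's relations into the domain's vocabulary
    before unifying the component with the domain graph."""
    return {mapping.get(rel, rel): filler
            for rel, filler in component.items()}

# Hypothetical fragment of the Purchase component's interface.
purchase = {'buyer': 'Agent', 'seller': 'Agent', 'item': 'Thing'}
print(apply_mapping(purchase, {'buyer': 'diner', 'seller': 'waiter'}))
# {'diner': 'Agent', 'waiter': 'Agent', 'item': 'Thing'}
```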
Assembling representations from abstract components simplifies knowledge engineering, and also improves the representation's flexibility:
- we can represent atypical visits (eg. carry-out) by varying the components and mappings.
- we can construct the simplest representation adequate for the current problem-solving situation (eg. determining the price of the meal does not require the visit component).
- we can elaborate parts of the representation, as needed (eg. incorporating the service component adds information about servers performing actions on behalf of clients, information that is needed to explain why the waiter delivered the meal to the buyer.)
\textsuperscript{4}This is a minor limitation of our implementation.
Figure 4: Abstract component structures can be identified within a specific representation of a restaurant visit.
Figure 5: The restaurant visit can be expressed as a composition of abstract components. Note that the mapping among components is expressed as qualifications on the fillers of the isa relations.
People routinely exercise this flexibility. For example, in our distributed computing domain, technical manuals describe processes, such as a client-server interaction, in many different ways, depending on context. One description will include security aspects (e.g. transmitting authorization credentials), another will ignore security but include addressing aspects (e.g. locating the server), and another will ignore both but include transmission protocol information. These variants are analogous to selectively ignoring aspects of the RestaurantVisit (e.g. paying, meal production) for a particular restaurant description.
5 Building a Representation “On-demand”
We have shown how we can describe domain-specific representational structures by specifying them as unifications of more abstract, general structures. In fact, it would be prohibitive to explicitly build the domain-specific representation in all its detail, as many components can potentially contribute to the representation. For example, in RestaurantVisit, not only do Service and Purchase components apply but also components describing Agent, Meal, Pay, Money, etc. Every node in a graph can potentially ‘pull in’ another graph, and hence the process is potentially endless. In fact, building the domain-specific representation in all its detail amounts to exhaustive deduction using the knowledge-base.
It is obviously essential to elaborate a representation in a controlled fashion, in response to the demands that a problem-solver (or user) places on it. To exert this control, we use two related mechanisms: path following, where paths through the graphs are used to guide inference, and lazy unification, where we only unify fragments of graphs required to find the answer to a query.
5.1 The Query Interpreter
Before describing these mechanisms, we first present the context in which inference occurs using the knowledge base. While the knowledge base describes general concepts such as RestaurantVisit, Service etc., inference is centered on a specific problem-solving situation (e.g. John eating Lobster at a specific restaurant). This initial set of facts is represented by a graph of instances that we call the instance graph. The instance graph is elaborated by unifying it with components from the knowledge base, each unification constituting a single step of inference.
A query interpreter mediates between a problem-solving algorithm (or the user) and the knowledge base. Inference is triggered when the interpreter receives a query. A query asks
for information from the instance graph (or the deductive closure of it), and is expressed by a path (Section 3), eg. the server of the meal of RestaurantVisit01. The interpreter then performs the necessary inferences to answer the query (we say it evaluates the query) using the mechanisms described below.
5.2 Path Following
One mechanism the query interpreter uses for guiding search while evaluating a query is path following. A path, eg. the server of the meal of RestaurantVisit01, does not just refer to a node in the instance graph: it also expresses additional information about how the value of that node can be computed, namely, first find the meal of the restaurant visit, then find its server. The query interpreter follows paths in this way, using them as specifications of a sequence of inference steps (here, unifications) that result in the desired value. Without this, there is a potentially large search required to identify a chain of inference which will find the answer.
Similarly, the value found at the end of a path may itself be a path (recall that paths are used to express co-reference within KM-terms (Figures 2b and 2c)). For example, consider the KM-term describing Purchasing:
Purchasing
  script: Script
    events: <... Pay>
      payer: (the buyer of Self)
      amount: (the cost of the purchase of Self)
If the query interpreter is asked for the amount of the pay event (of some instance of Purchasing), it will find (from this KM-term) a path for this value, namely the cost of the purchase of Self (where Self refers to that instance). The query interpreter evaluates this new path (using loop-checking to avoid cycles), until it finds a non-path value or fails. This illustrates how path-following can trigger further path-following. It also illustrates that paths within KM-terms have a second function besides expressing co-reference: they provide guidance to the query interpreter about how a value can be computed, by specifying a sequence of inference steps leading to that value.
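The path-following behaviour just described, including the loop check, can be sketched as below. The dictionary encoding of the knowledge base, and the convention that a stored filler may itself be a path (a list of relations, applied in order), are our illustrative assumptions:

```python
# Sketch of path following with loop checking. A knowledge base is
# encoded as {node: {relation: filler}}, and a filler that is a list
# is itself a path to be re-evaluated from the original start node.
def follow_path(kb, start, path, seen=None):
    seen = set() if seen is None else seen
    value = start
    for relation in path:
        value = kb[value][relation]
        if isinstance(value, list):          # filler is itself a path
            key = (start, tuple(value))
            if key in seen:
                return None                  # cycle: abandon this route
            seen.add(key)
            value = follow_path(kb, start, value, seen)
    return value

# Toy KB mirroring the Purchasing example: the amount of the pay is
# found via the path 'the cost of the purchase of Self'.
kb = {
    'Purchasing01': {'pay': 'Pay01', 'purchase': 'Item01'},
    'Pay01': {'amount': ['purchase', 'cost']},   # a path, not a value
    'Item01': {'cost': 20},
}
print(follow_path(kb, 'Purchasing01', ['pay', 'amount']))  # 20
```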
Of course, this efficiency is bought at the price of completeness – there could be other sequences of inferences that reach the conclusion but are not explored because they are “off the path”. However, we retain a useful, albeit weakened, type of completeness called Socratic completeness [8]. Socratic completeness guarantees that any deductive consequence of the KB is deducible via some sequence of queries. Thus, no logically implied fact is inherently indeducible, and hence the knowledge-base designer has full control over where inference effort should, and should not, be expended through his/her choice of paths in the knowledge base.
5.3 Lazy Unification
A second, and related, mechanism the query interpreter uses to control inference is lazy unification. Unifying two KM-terms involves merging their graph structures. However, when evaluating a path, the query interpreter only needs the result of unifying the branch of the graph specified by the path with its counterpart in the other graph. In this way, lazy unification avoids the unnecessary work of unifying the entire graphs.
Figure 6: Lazy unification involves just unifying graphs along a particular branch. Pointers are maintained in case a subsequent query requires the partial unification to be completed further.
However, note that lazy unification requires one extra step. Answering subsequent queries may require unifying other portions of the KM-terms that were skipped due to laziness. To prepare for this eventuality, the query interpreter installs pointers from nodes in the unified graph to the components (KM-terms) from which they came. This gives the query interpreter a handle on the partially unified KM-terms in case the unification must proceed further.
This is illustrated in Figure 6, in which graphs for Car, Engine and Gas have been partially unified along the path the fuel of the engine of a Car. Note that the pointers from Gas001 to the structures Car and Engine provide handles on the information those components can still contribute, such as the type or contents of Gas001. This information would be explicitly added to the instance graph should a subsequent query request this information. Note too that the type of Gas001 could not be concluded by normal inheritance (via Gas001 isa Gas) as the answer resides on the Car component, illustrating unification’s ability to integrate information from multiple sources (here Car, Engine and Gas).
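The bookkeeping behind lazy unification, creating an instance only along the queried arc and installing provenance pointers back to the contributing components, can be sketched as follows. The encoding and all names here are illustrative assumptions:

```python
import itertools

_ids = itertools.count(1)

def lazy_get(instance_graph, provenance, components, node, relation):
    """Return the filler of `relation` on `node`, creating it lazily
    from whichever components (reached via provenance pointers)
    mention that relation."""
    if relation in instance_graph.setdefault(node, {}):
        return instance_graph[node][relation]          # already unified
    sources = [c for c in provenance.get(node, [])
               if relation in components[c]]
    if not sources:
        return None
    new = f"{relation.capitalize()}{next(_ids):02d}"   # e.g. Script01
    instance_graph[node][relation] = new
    # Back-pointers: a later query can continue the unification here.
    provenance[new] = [components[c][relation] for c in sources]
    return new

components = {'Service':  {'script': 'ServiceScript'},
              'Purchase': {'script': 'PurchaseScript'},
              'ServiceScript': {}, 'PurchaseScript': {}}
instance_graph = {}
provenance = {'RestaurantVisit01': ['Service', 'Purchase']}

s = lazy_get(instance_graph, provenance, components,
             'RestaurantVisit01', 'script')
print(s, provenance[s])  # Script01 ['ServiceScript', 'PurchaseScript']
```

A second query for the same arc returns the cached instance; a query for a different arc pulls in only the additional component fillers it needs.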
5.4 Extended Example from the Restaurant Visit
We now present a more detailed example of answering a query to illustrate path following and lazy unification. The example is intended to show a real-world operation, namely scanning a sequence of events for possible failure points, but is couched in terms of our ongoing restaurant example. In the tutoring system for distributed computing that we are building, this operation is frequently used in the diagnostic component to identify failure points in distributed computing scenarios, in order to explain an error the user has observed.
In the context of the restaurant visit, a diagnostic problem-solver might ask many questions to identify potential failures, such as: “What are the preconditions for the events in the script?” Expressed as a path, this query would be:
[1] -> the preconditions of the events of the script of a RestaurantVisit with diner = John and food = a Lobster?
This query is first broken down to evaluate its innermost expression, namely the description of this particular restaurant visit:
[2] -> the events of the script of a RestaurantVisit with ...?
[3] -> the script of a RestaurantVisit with ...?
[4] -> a RestaurantVisit with diner = John and food = a Lobster?
This expression tells the query interpreter to create an initial instance graph representing this restaurant visit, as illustrated in Figure 7.
[4] <- RestaurantVisit01
;; return the new instance
Now the interpreter can re-evaluate step [3], substituting in the value of the restaurant visit:
[3] -> the script of RestaurantVisit01?
The instance graph currently has no script arc from RestaurantVisit01, causing the interpreter to incorporate, with lazy unification, those components of RestaurantVisit that contain a script arc. Following Figure 4, two are found, namely Service and Purchase. A new instance Script01 is created, denoting the script of RestaurantVisit01, and pointers are added from it to the Script nodes in Service and Purchase.
[3] <- Script01
;; from Service and Purchase
[2] -> the events of Script01?
Similarly, the interpreter incorporates components providing events for this script. Both Service and Purchase provide sequences, which are interleaved, as described in Section 3, to return a single sequence. Again, instances are created and stored in the instance graph.
[2] <- <Select01 ... Pay01>
;; from Service and Purchase
[1] -> the preconditions of <Select01 ... Pay01>?
For brevity we show only the Pay01 branch:
[1] -> the preconditions of Pay01?
Again, Pay01 has no precondition relations in the instance graph, so the interpreter finds components that can contribute that information. Purchase does not specify preconditions, but Pay itself is specified as including a Give component, in which the giver (payer, as instantiated in the Pay component) must have the given (mapped to the amount in the Pay component).
[1] <- (payer of Pay01) has (amount of Pay01) ;; from Pay
This returned value is itself a structure consisting of two paths and a relation (has). Each path must be evaluated.
[2] -> payer of Pay01?
[2] <- the buyer of RestaurantVisit01 ;; from Purchase
[2] -> the buyer of RestaurantVisit01?
[2] <- the diner of RestaurantVisit01 ;; from RestaurantVisit
[2] -> the diner of RestaurantVisit01?
[2] <- John ;; from the instance graph
From Purchase (Figure 2c), the payer in the Pay is the buyer in the Purchase. Then from RestaurantVisit (Figure 5), the buyer in this Purchase is mapped to the diner in the Restaurant. Similarly for the amount of Pay01:
[2] -> amount of Pay01?
[2] <- the value of the meal of RestaurantVisit01 ;; from Purchase
[2] -> the value of the meal of RestaurantVisit01?
[3] -> the meal of RestaurantVisit01?
[3] <- Lobster01 ;; from the instance graph
[2] -> the value of Lobster01?
[2] <- $20 ;; from Lobster
Again, a simple graph describing Lobster (including its typical price) supplies this information.
[1] <- John has $20
Finally, this returns (one of) the script’s preconditions, that John has $20, which identifies a potential failure mode of this script, that John lacks $20. The final instance graph is illustrated in Figure 7.
There are several important points to note from this example. First, computing the answer is not trivial: preconditions for restaurant events are not specified on the RestaurantVisit component, but instead are distributed among other, more abstract, components (eg. Give) comprising the restaurant visit’s specification. Second, the answer is situation-specific; if the meal had not been lobster then the required amount would have been different. Third, the instance graph has been elaborated along only those arcs required to answer the query, illustrating ‘lazy unification’.
5.5 Application of the Representational Framework
We are currently applying this representational framework in two ways. First, we are constructing an automated assistant for users of distributed computing systems, capable of answering novice users’ questions. The assistant contains three simple problem-solving algorithms. The first generates definitions of computing terminology by assembling component-based representations of domain concepts and converting them to text. The second performs diagnosis by constructing computing scripts and scanning them for failure points that explain the user’s observations (iterating between data-gathering from the user and reasoning with information in the KB). The third generates short plans for achieving a user’s goals (eg. “how do I reduce the minimum allowed password length”) with a standard means-ends analysis algorithm, using planning operators built compositionally from the knowledge base. All three systems reason about a user’s specific computing situation, represented as an instance graph in the knowledge base. This includes representation
of the user's particular computing environment (e.g. machines, their connectivity, processes running on them), and any particular activity which is being reasoned about. For example, if the user is asking about possible failures in a binding event (say), then an instance of binding event is created in the instance graph. The application systems query the knowledge base for particular pieces of information about the user’s situation, and the query interpreter answers those queries by (lazily) unifying components with the instance graph.
Second, and more importantly, we are in the early stages of constructing a small library containing reusable components such as communication, containment, exchange, and information. While the library’s contents are primarily being used as building blocks for the computing knowledge base (e.g. a database is composed of container, secure-item and resource), our goal is to formalize the components in domain-general ways.
6 Summary
A major cause of the knowledge engineering bottleneck is the difficulty of building domain representations from more abstract components. Although many domain-specific concepts are composites of many abstractions, it is difficult to represent such abstractions and automatically compose them together.
To address this problem, we have presented a novel representation of components, evolved from a combination of conceptual graph theory and psi-term unification. We have illustrated how graph unification, used as a composition operator, can properly integrate components into a single structure. This approach offers a way to build domain-specific representations from reusable components.
Finally, we have described how situation-specific representations can be assembled from components on demand, guided by a query for a particular piece of information. The query interpretation algorithm utilizes two novel techniques – path following and lazy unification – to guide and constrain inference to just that required to answer a query.
References
Attendio: Attendance Tracking Made Simple
Benjamin L. Greenberg
*University of Tennessee, Knoxville*, bgreenb3@vols.utk.edu
Spencer L. Howell
*University of Tennessee, Knoxville*, showel17@vols.utk.edu
Tucker R. Miles
*University of Tennessee, Knoxville*, tmiles7@vols.utk.edu
Vicki Tang
*University of Tennessee, Knoxville*, wph612@vols.utk.edu
Daniel N. Troutman
*University of Tennessee, Knoxville*, dtroutm1@vols.utk.edu
This Dissertation/Thesis is brought to you for free and open access by the Supervised Undergraduate Student Research and Creative Work at TRACE: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Chancellor's Honors Program Projects by an authorized administrator of TRACE: Tennessee Research and Creative Exchange. For more information, please contact [trace@utk.edu](mailto:trace@utk.edu).
Attendio: Attendance Tracking Made Simple
Benjamin Greenberg Spencer Howell Tucker Miles
Vicki Tang Daniel Troutman
Detailed Design Report
ECE402/COSC402 Senior Design Practicum
Tickle College of Engineering
The University of Tennessee
Knoxville, Tennessee
May 3, 2020
Executive Summary
In this report, we discuss the need for an attendance tracking solution and how we built one to fill this niche. Many student organizations on campus use a plethora of different websites, software packages, or even paper sign-in sheets to keep track of attendees. Because people almost always have their smartphones with them, we implemented our solution as a cross-platform app. From the attendee's perspective, using our app is as simple as scanning an event QR code and receiving a checked-in notification. From the event manager's perspective, they can generate new events and see who is checked in. Our app enables both parties to maintain a reliable communication channel for events. Our design revolves around five engineering characteristics that make it a strong solution for growing student organizations: security, usability, maintainability, adaptability, and aesthetics.
Table of Contents
Executive Summary
Problem Definition & Background
Requirements Specification
Technical Approach
Design Concepts, Evaluation & Selection
Embodiment Design
Test Plan
Project Deliverables
Project Management
Budget
References
Appendix
| Table Number | Description | Page Number |
|---|---|---|
| 0 | List of Tables | 3 |
| 1 | List of Figures | 3 |
| 2 | Customer Requirements | 5 |
| 3 | Engineering Characteristics | 5 |
| 4 | Platform-Benefits Matrix | 6 |
| 5 | Framework-Benefits Matrix | 6 |
| 6 | Backend-Benefits Matrix | 7 |
| 7 | Deep Linking Technologies | 8 |
Table 0: List of Tables
| Figure Letter | Description | Page Number |
|---|---|---|
| A | Business Model Canvas | 11 |
| B | Gantt Chart | 11 |
| C | Screenshots | 12 |
Table 1: List of Figures
I. PROBLEM DEFINITION AND BACKGROUND
A. What is the problem? Why is the current situation unsatisfactory?
Our technology attempts to solve a problem that exists in numerous places, but we are primarily focusing on student organizations in the beginning stages. The problem arises from the great inconsistency in how student organizations track event and meeting attendance. As an example, many organizations have attendees scan a QR code that links to a Google form; attendees then spend far more time than necessary just to prove that they attended an event. This wastes valuable time in what are already typically short meetings and workshops, and could also lower attendance in the future.
B. Who is having this problem? Who are the would-be customers for a solution?
As mentioned previously, student organizations are having this issue. If thought about much more broadly, these same issues exist in numerous places including workplaces, classrooms, and other similar events. The customers for our solution would be the leaders of these organizations. Our solution would allow them to easily track attendance.
C. What basic functions must the design perform?
The solution must allow organization officers to easily set up events, track attendance at these events, and view overall attendance statistics for each member. This process needs to be seamless, so organizations can quickly take attendance and move on with their meeting. The individual members need to be able to launch the app on their phone, scan the QR code, and receive a success message as they sign in. Other convenience functions, such as removing members from the roster, and marking members as “excused” from meetings, should be supported as well.
D. How will the design be used by the customer(s)?
The app will be used in meetings that occur on campus. Most of these meeting rooms will have projectors that display the slides for the meeting, but they might not be available in all rooms. The students who are present at the meeting will likely have either a phone or a laptop to use. In the case that a projector is not available, a QR code could be displayed on a laptop screen, or a link could be shared to an organization group chat. Since meeting time is short and valuable, our solution should integrate seamlessly with the current practices of the organization and take less time than existing solutions do to take attendance.
E. What is the underlying theory or background that needs to be understood in order to address this problem?
To address this problem, a background in app development and/or UI/UX would be helpful for quickly and effectively building an app that is both secure and usable.
F. What prior work has been done on this problem?
There are several companies that have made attendance tracking software. Many of them offer only paid plans, with feature sets that can make the software seem complicated.
G. What products, currently available, were not designed or intended for this particular application but could be used to perform a similar function?
Other products that could be used to perform a similar function to our technology include employee timesheet software, survey software, Google Forms, any type of spreadsheet software, VOLink, and clickers. However, most attendance-tracking software from other companies is focused on employees and employers rather than student organizations and instructors.
II. REQUIREMENTS SPECIFICATION
The final product of our team’s development is an attendance tracking app that works for both web and mobile devices. The primary user base is student organizations, so we targeted our initial efforts for them directly. However, our product should also be capable of supporting other use cases, such as professors and students in classrooms and other events where attendance is tracked.
Keeping in mind these varied user segments, we ranked our customer requirements from most important to least important (Table 2).
These customer requirements come from our own experiences as members and leaders in student organizations, as well as general knowledge of app usability and best practices.
In addition to the customer requirements, we have generated a list of Engineering Characteristics, based on these requirements, that will allow us to measure our progress and ensure that it aligns with the needs of the customer. Each of these characteristics is either a constraint, which we cannot change, or a variable that we can experiment with. Below in Table 3 is a list of our characteristics.
<table>
<thead>
<tr>
<th>Characteristic</th>
<th>Metric</th>
<th>Constraint / Variable</th>
</tr>
</thead>
<tbody>
<tr>
<td>Security</td>
<td>Number of security vulnerabilities</td>
<td>Constraint: This number must remain at zero, or as close to zero as possible.</td>
</tr>
<tr>
<td>Usability</td>
<td>Number of actions required to perform desired tasks (including mistakes)</td>
<td>Variable: We will measure this characteristic when observing users with the app.</td>
</tr>
<tr>
<td>Maintainability</td>
<td>General ease / difficulty of adding new components or features to the application</td>
<td>Constraint: Must be stable and upgradable whenever new features are wanted.</td>
</tr>
<tr>
<td>Adaptability</td>
<td>Number of devices able to run the application</td>
<td>Variable: Must be flexible to as many different platforms and hardware options as possible.</td>
</tr>
<tr>
<td>Aesthetics</td>
<td>User feedback responses</td>
<td>Variable: We will survey our users on their opinion of the interface, and seek to maximize their opinion of it.</td>
</tr>
</tbody>
</table>
TABLE 3
ENGINEERING CHARACTERISTICS
III. TECHNICAL APPROACH
In order to create an application that conforms to all of our stated criteria, the team will need to have a specified technical approach. This approach consists of what tools we use, what practices we employ, and what processes we follow during development.
The first choice that our team must make for our technical approach is what development platform we build our solution on top of. As described in the Design Concepts, Evaluation & Selection section below, our team has decided to use the Flutter framework as it best suits our business needs, as well as aligning with our team’s previous experience.
While building a Flutter application, our team will follow best practices as defined by Google in their Effective Dart guide [2]. This will ensure that our code is free from language-based vulnerabilities and is easily refactorable and maintainable, helping us achieve our desired Engineering Characteristic measurements. In addition, we will follow Flutter’s defined best practices for performance [3] to ensure that our app is not bloated and will perform at acceptable levels on a wide range of devices.
Developing an app with a team of our size requires coordination and a defined strategy. Therefore, we will follow proven development processes in order to organize our efforts and respond to feedback. Our team is adopting an Agile development philosophy, as defined by the famous Manifesto for Agile Software Development [4]. Our team plans to utilize GitHub’s Agile Board features to create tickets for each feature or implementation to be developed, which will then be divided up into defined “sprints” for development tracking. By breaking up our development into pieces, we have the opportunity to test out our ideas with customers and change our priorities if required. This will help us to serve the needs of the customer and remain flexible in our work.
As a part of this Agile process, our team will participate in a code review process, where all submitted code is vetted and learned from by other members of the team. This will help ensure our app is secure, as potential vulnerabilities are caught and corrected before even being merged into the codebase. Further, it will allow our team to learn from each other and develop a shared code style for the project through discussion. This process will help ensure the security and maintainability of our application.
By following this technical approach, our team can ensure that development efforts are well-directed and as productive as possible. We can write maintainable and secure code, iterate quickly, and change direction if needed. All of these benefits will directly serve our customers as outlined in the Customer Requirements and Engineering Characteristics.
IV. DESIGN CONCEPTS, EVALUATION AND SELECTION
There are many ways we could create an attendance tracking application. While we hope to expand our application and port it to as many platforms as possible, we had to start with one for our project. Once our first platform was decided, we needed to pick a language to develop in. Since our application must communicate and store shared information, we also needed to find a backend that integrates with all of our tools and software. For each decision, we describe each option along with its strengths and weaknesses. We also include a decision matrix with ratings for several categories; the higher the rating, the better that option is for users or for us.
A. Platform
Since customers will be using this application on their own devices, it is important for us to decide which type to target first. The first option that comes to mind is a website where people can sign into their accounts and check into events. This has the benefit of being extremely accessible: it doesn't matter what kind of device you have, as long as it can access the internet. There is no app to install, so new people can get on board very quickly. As far as experience goes, few of us have developed web applications before, but we were still able to include a website as an option for users. We also address this platform in the framework decision section.
Our next possible platform is a desktop application. Several of us have experience with desktop app development in ElectronJS from our COSC 340 class. A severe downside of this platform is the lack of portability and accessibility. Most of the time, people will not be bringing their laptops with them and if they do, it might be in a powered-off state. The installation of an app like this would also be the most intensive of the three on a desktop platform. Users would have to visit a website, download the installer, then install the application.
Our third option would be a mobile application. Like the desktop platform, we have members of our group with mobile app development experience gained from COSC 340. Since virtually everyone has a smartphone on them, a mobile app will have great accessibility. While there is something to install on the device, the process is not as burdensome as the desktop platform.
<table>
<thead>
<tr>
<th></th>
<th>Ease of installation</th>
<th>Availability</th>
<th>Easy access</th>
<th>Developer experience</th>
</tr>
</thead>
<tbody>
<tr>
<td>Web</td>
<td>3</td>
<td>3</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>Desktop</td>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>Mobile</td>
<td>2</td>
<td>3</td>
<td>3</td>
<td>2</td>
</tr>
</tbody>
</table>
TABLE 4 PLATFORM BENEFITS MATRIX
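The totals behind this choice can be checked with a few lines of Dart; the ratings below are copied directly from Table 4:

```dart
// Sum each platform's ratings from Table 4; higher totals are better.
void main() {
  const scores = {
    'Web': [3, 3, 2, 1],
    'Desktop': [1, 1, 2, 2],
    'Mobile': [2, 3, 3, 2],
  };
  scores.forEach((platform, s) {
    print('$platform: ${s.reduce((a, b) => a + b)}');
  });
}
// Prints: Web: 9, Desktop: 6, Mobile: 10
```

Mobile scores highest overall, with web a close second, which matches the mobile-first, web-second approach described below.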
Our final decision was to go with a mobile and web application approach. We feel it has the best features to start off with and we have the necessary experience to implement it. While the other platforms can fill other niches, we wanted to pick the ones that have the best availability. When the app was sufficiently fleshed out, we ported it to the web along with some changes that best fit the platform.
C. Backend
Along with the platform and framework for our project's app, we also needed to choose a suitable backend for authentication, hosting, storage, and so on. The options we considered were AWS, Google Firebase, and possibly a custom backend written in Django or Flask. A custom backend seemed unnecessary, as we would have to host it ourselves and handle any scalability issues if the user base grew in the future. Therefore our options came down to AWS and Firebase. The deciding factors between them were database requirements, complexity, free versus paid services, and support for the Flutter SDK.
After comparing AWS and Firebase, we decided to go with Firebase for our backend. Even though Firebase only offers NoSQL databases while AWS allows a choice of database, Firebase seemed more appropriate for our app's needs, as we don't expect our database requirements to grow very complex. We need a backend that is relatively easy to set up and communicate with, and we found that setting up and calling callable functions was easier in Firebase than in AWS. At the moment, Firebase also provides more services for free, such as user authentication, than AWS does. Although Firebase may offer fewer services out of the box, it is enough for our app at this stage. Firebase even has official support for the Flutter SDK, whereas AWS does not. For these reasons, we will be using Firebase as the backend for our Flutter-based mobile app.
Now that we have all the major decisions out of the way, we can get started with learning all about these services and software. Since we decided on a mobile platform first, we...
created our wireframes and mock-ups to fit the cell phone format. We also got a general idea of our web application and created a wireframe for it. We decided on the overall layout, but were not able to fully implement the design from our wireframes, as we were more focused on building an MVP under time constraints. Flutter is our chosen framework due to its simple integration with the web, Android, iOS, and our backend. Our development experience with the language is also a big plus, and we started by brushing up on various concepts before beginning application development. Finally, we chose Firebase as our backend due to its ease of setup and overall experience. By making these major decisions together up front, we had a smooth transition to implementation, and these choices gave us the best chance of success with planning and the logistics of the app.
V. EMBODIMENT DESIGN
A. Product Architecture
1) Modules:
a) Material UI: Material UI is a design language that was released by Google. It uses grid-based layouts, responsive animations and transitions, padding, and depth effects such as lighting and shadows. This more or less defines the “look and feel” of our application, and will provide users with a familiar interface when using our application, as Material UI is a standard used by many.
b) Firebase Authentication: To make the adoption of our application easier for users, we decided to use Firebase Authentication as our authentication solution. Firebase Authentication allows for a relatively simple implementation and integration process to add authentication from sources such as Google with ease. Firebase also allows for easy integration with other authentication providers in the future such as Facebook. As Firebase and Flutter are both products from Google, they are more likely to work together reliably than other authentication solutions for Flutter. This works well for our use case as it saves us the time and effort of trying to develop our own authentication solution while making sure that it is secure and works with multiple authentication providers.
c) Cloud Firestore: Cloud Firestore is one of the options that Firebase provides to use as a backend database for our application. We chose Cloud Firestore because of its high flexibility, scalability, and ease of use. It acts as a place for us to store data, keeping it in sync across all of our applications (iOS, Android, and web) while providing real-time listeners which allow us to interact with it. Additionally, Firebase and Flutter are both products from Google, and staying in this environment gives developers some peace of mind when it comes to implementing this, as it’s a very plug-and-play solution that integrates seamlessly into our application.
d) Dynamic Links: Dynamic Links is an implementation of the deep link concept. A deep link is like a hyperlink, but it will send you to a specific part of an app instead of a website. Other behavior like opening the app store or redirecting to a web version is also possible if you do not have the app installed. Dynamic Links is a service provided by the Firebase platform that is already in our app. We use the service to create links that redirect users to the check-in page for a particular event in our app. Each dynamic link is unique because they encode event information that our app will use to look up the event.
e) Riverpod: Riverpod is a state management solution based on the Provider package by the same developer. Riverpod offers improvements over the Provider package such as catching programming errors at compile-time instead of runtime, it removes the need for nesting when listening or combining providers, and helps simplify testing state management. Riverpod benefits our app directly by simplifying the creation of state providers for the entire app and allowing us to create them outside of the widget tree. Riverpod also simplifies combining providers and abstracting data from one provider into another. Another benefit of Riverpod is that any widget within the provider scope tree can access the state of any provider just by importing the provider. This removes the need for us to nest the provider through every widget, cleaning up and making our code more maintainable.
2) Inputs & Outputs:
a) Material UI: Material UI exists in every single component of the front-end of our application. The Material UI of Attendio is codified through defining trees of widgets, showing how they interact, and styling them accordingly. Flutter looks at this code, analyzing the sizes, layouts, and logic that we’ve defined, and draws the user interface onto the screen.
b) Firebase Authentication: The Firebase Authentication components of our app are seen in several places. First off, it acts as the gateway to Attendio. When you launch the app, you are greeted with a login screen. You give our Firebase Authentication module your Google Credentials, and Google sends back a token that uniquely identifies you and grants you access to several resources. Additionally, this is used in the profile section of Attendio. We are able to retrieve a user’s name, profile photo, and unique ID from this communication.
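The token exchange described above follows the standard FlutterFire pattern. A sketch, assuming the `google_sign_in` and `firebase_auth` packages and omitting error handling:

```dart
import 'package:firebase_auth/firebase_auth.dart';
import 'package:google_sign_in/google_sign_in.dart';

// Sketch of the Google sign-in handshake: the user's Google credentials
// are exchanged for a Firebase credential, which identifies the user.
Future<UserCredential> signInWithGoogle() async {
  final googleUser = await GoogleSignIn().signIn();
  final googleAuth = await googleUser!.authentication;
  final credential = GoogleAuthProvider.credential(
    accessToken: googleAuth.accessToken,
    idToken: googleAuth.idToken,
  );
  // The returned UserCredential carries the name, photo, and unique ID
  // shown in the profile section.
  return FirebaseAuth.instance.signInWithCredential(credential);
}
```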
c) Cloud Firestore: Cloud Firestore is primarily used in the “Events” section of Attendio. We send our backend a user’s Firebase ID token which was retrieved from the Firebase Authentication SDK, and this allows us to make requests on behalf of the user. This token is passed to the backend along with what we are requesting, and the backend sends back whatever data we’ve requested.
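A request of this kind can be sketched with the `cloud_firestore` package; the collection and field names here are assumptions for illustration, not the actual Attendio schema:

```dart
import 'package:cloud_firestore/cloud_firestore.dart';

// Hypothetical query: stream the events organized by the signed-in user.
// Firestore attaches the user's ID token to each request automatically
// once Firebase Authentication has signed the user in.
Stream<QuerySnapshot> eventsFor(String uid) {
  return FirebaseFirestore.instance
      .collection('events')
      .where('organizerId', isEqualTo: uid)
      .snapshots();
}
```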
d) **Dynamic Links:** Our dynamic link module is in several parts of our app. When an event is created in Firestore, the event id is passed to the dynamic link module and it includes it in a link. A QR code is generated that encodes the dynamic link for easy scanning on mobile devices. When someone scans the QR code or goes to the dynamic link in their browser, Attendio is opened and the check-in page is loaded. When the check-in page is loaded, it will extract the event id from the dynamic link that launched the page. The event id will then be used to find and display the event information.
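The extraction step can be sketched in plain Dart; the host and the `eventId` query-parameter name are assumptions for illustration:

```dart
// Pull a hypothetical event id out of a check-in link.
String? eventIdFromLink(Uri link) => link.queryParameters['eventId'];

void main() {
  final link =
      Uri.parse('https://attendio.example.com/checkin?eventId=abc123');
  print(eventIdFromLink(link)); // abc123
}
```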
e) **Riverpod:** Our Riverpod code is in two parts of our app. The first part is in our providers folder where we initialize the providers and develop any logic related to purely state management. The other part is within the UI widgets. By using providers, we can have our app update whenever the state changes. In our application, we used providers to share the authentication state and information of the current user to all widgets in the app that required either the state or information. We were also able to use it to keep track of the current tab the user was on so we could build the UI based on the currently selected tab in the bottom navigation. Our app also has a provider for our Dynamic Links. We use this provider to share the same instance of our Dynamic Links services class throughout the app, allowing us to access this functionality without the need for nesting.
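A minimal sketch of one such provider, declared outside the widget tree as described above (the provider name is assumed, not taken from the Attendio codebase):

```dart
import 'package:riverpod/riverpod.dart';

// Illustrative provider for the currently selected bottom-navigation tab.
// Any widget inside the provider scope can read or update it directly,
// with no nesting through intermediate widgets.
final selectedTabProvider = StateProvider<int>((ref) => 0);

void main() {
  final container = ProviderContainer();
  container.read(selectedTabProvider.notifier).state = 2;
  print(container.read(selectedTabProvider)); // 2
}
```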
B. Configuration Design
1) **Component Selection:**
a) **Material UI:** Material UI was chosen for our application for its simplicity to implement as well as its popularity among users. Material UI is much more minimalistic than other design options, making it much simpler to implement. This allowed us to focus more on adding functionality to our application rather than getting stuck on defining the look and feel. Additionally, Material UI is common among many applications, and following these design recommendations will allow us to greet our users with an application that seems familiar.
b) **Authentication (Google Sign-in):** We used Google Authentication because of how easy it was to integrate into our application. To get this up and running in its most basic way, the work consisted of nothing more than adding the necessary packages to our application and using them. Secondly, this is in the Google environment, and since Flutter is in this same environment, integration is seamless. Lastly, using Google Sign-in keeps us from having to store a bunch of user data, such as ids, photos, and any other metadata that is attached to a user. All of this combined allowed us as developers to spend time on other critical components of the application and not get caught up in the hurdles that can exist with authentication.
c) **Cloud Firestore:** The biggest contributor to why we chose Cloud Firestore as our backend was its flexibility as well as the simplicity of its implementation in Flutter. Flutter and Cloud Firestore are both Google products, and this allows for a very smooth integration process. To use it, the overhead consisted of nothing more than adding the package to our application. To speak on Cloud Firestore’s flexibility, it is a NoSQL database which means it is not modeled through tabular relations as seen in relational databases. This allows us to place our data however we want it, where we want it, and in the exact structure we want it, all without having to worry about relationships.
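As a concrete illustration of this flexibility, an event document can simply mirror whatever structure the app needs. The field names below are assumptions for illustration, not the actual Attendio schema:

```dart
// Hypothetical shape of an event document in Firestore; no table schema
// or foreign-key relationships are required.
final event = <String, dynamic>{
  'name': 'Weekly Meeting',
  'organizerId': 'user_123',
  'startTime': '2021-04-01T18:00:00Z',
  'attendees': ['user_456', 'user_789'],
};
```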
d) **Dynamic Links:** There are several reasons we went with Dynamic Links over the other deep linking protocols. Android and iOS interpret links differently and offer multiple mechanisms: iOS can use custom URL schemes or Universal Links, while Android can use App Links or Deep Links. There are many differences between these, the main one being whether a specific host (and hosted association file) is required [5].

**TABLE 7**
<table>
<thead>
<tr>
<th></th>
<th>Requires Specified Host</th>
<th>No Specific Host</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>iOS</strong></td>
<td>Universal Links</td>
<td>Custom URL schemes</td>
</tr>
<tr>
<td><strong>Android</strong></td>
<td>App Links</td>
<td>Deep Links</td>
</tr>
</tbody>
</table>
We want to support as many platforms as possible but having specific platform channels that handle it differently would make the app very complicated and hard to maintain. With Dynamic Links, all of the OS-dependent operations are abstracted away so you get a single link that works with all platforms. Since we already had Firebase in our app, the decision to use this service was even simpler.
e) **Riverpod:** Although Riverpod is still relatively new and not quite ready for production-grade applications, the benefits it offers over the Provider package outweigh this risk for the scope of this project. Being based in Dart instead of Flutter, any Riverpod provider can be used outside of the widget tree. Riverpod simplifies combining providers by providing a reference parameter when creating the provider that can read the state of other providers. This package also provides an observer that can be very helpful for debugging any issues with Riverpod providers and state management. Riverpod also offers a variant package called hooks_riverpod that further simplifies using providers by providing a custom “hook” function that can be used with the Flutter Hooks package.
VI. TEST PLAN
A. Security Test
The first engineering characteristic we listed is security. Security is very important to us because we want users to trust us with their information. If any user information got stolen, it would be very embarrassing and hurt the app’s reputation. Adversaries could also use user data to gain knowledge about people that they may not want to be public. To minimize vulnerabilities, we should try to patch as many bugs as we can. A simple way to find bugs is just by providing a bug reporting system within the app. Once the app is more stable and there are fewer vulnerabilities found, we could add some sort of
bug bounty program. This program would reward people for finding vulnerabilities in our service. There are several ways to go about this but we could start with a crowdsourcing platform like Bugcrowd. Researchers on this website will try to find parts of apps to break and they get a reward for exposing them. If crowdsourcing was not appealing, we could have a security engineer probe the app to see if they can find any revealing information or exploits that could compromise our app. One member of our team currently works in this type of position, so locating the right person would not be a challenge. No software is perfect so if there is ever a time where a vulnerability is found, we hope to fix it and update as soon as possible.
B. Usability Test
If our app is not very user-friendly, then people will not want to use it for their events. One way we can test usability is with a small group of volunteers. These participants will have a few tasks to do and can be observed doing them. We can note what parts of the app are confusing and what mistakes are made. This feedback is great to see how usable Attendio is to the average user. The app should be simple enough to not need a tutorial, but it may be necessary as more features are added.
C. Maintainability Test
To make sure the application is robust and reliable with every update, we can use automated testing. Since it is hard to manually test every function in the app, we can write unit tests that take advantage of our app's modularity. These tests tell us if functions no longer behave as they should. Since it is all automated, we can run the tests after every feature that gets added. While automated testing is not perfect, it is a great way to ensure parts of our app do not get overlooked. As Attendio gets more complicated, we can write more complex integration tests to make sure entire modules work as intended.
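A minimal example of such a unit test, using Dart's `test` package (the helper function and link format are hypothetical):

```dart
import 'package:test/test.dart';

// Hypothetical helper under test: extract an event id from a check-in link.
String? eventIdFromLink(Uri link) => link.queryParameters['eventId'];

void main() {
  test('extracts the event id from a check-in link', () {
    final link = Uri.parse('https://example.com/checkin?eventId=abc123');
    expect(eventIdFromLink(link), equals('abc123'));
  });

  test('returns null when no event id is present', () {
    expect(eventIdFromLink(Uri.parse('https://example.com/')), isNull);
  });
}
```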
D. Adaptability Test
This engineering characteristic is a bit more abstract. We want to be very adaptable and support as many devices as possible. We used a cross-platform language that makes this easier for us to support many devices. While the language gives compatibility with the underlying systems, we still need to make sure devices have an easy way to update. This can be accomplished with CI/CD or continuous integration and continuous deployment. Whenever we have updates to the app, we can set it up so all platforms will receive the latest versions that are ready to be installed. Easy updates are important to make sure users apply security patches and have access to the latest features.
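A CI pipeline of this kind can be sketched as a GitHub Actions workflow; the workflow, job, and action names below are one plausible setup, not our exact configuration:

```yaml
# Hypothetical workflow: run the Flutter test suite on every push and PR.
name: flutter-ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: subosito/flutter-action@v1   # installs the Flutter SDK
      - run: flutter pub get
      - run: flutter analyze
      - run: flutter test
```

A deployment job could be appended to publish builds for each platform once the tests pass, giving users easy access to the latest version.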
E. Aesthetics
The last engineering characteristic is more subjective. While everyone has their own opinion on how an app should look, there are best practices for making sure content is presented clearly and pleasingly. We can get feedback on our app by sending out a survey on MTurk, which lets us crowdsource feedback at a relatively low price. The survey can include several screenshots of Attendio pages and ask various questions about the look. If there is significant concern about colors, buttons, layout, or other UI features, we may need to reconsider how the app looks. We can even include a free-form feedback area to see if any user has ideas that would improve the design.
VII. Project Deliverables
As a starting point, our project has two main deliverables: a phone application and a web application. These products can be seen as our turn-in deliverables, but our true deliverable is a tool that provides users with our service through whatever means necessary. Fortunately, we picked tools that allowed us to reach this objective, and our methods are described in detail within this proposal. These deliverables are applications that possess the functionality described in the background sections of this proposal. Our first deliverable was an Android mobile application. Developing a web application is built into the timeline; however, in the event of setbacks, this portion would have been the first item withdrawn from the project requirements, as our main focus was initially on completing our minimum viable product as a phone application.
However, we were able to successfully build not only a phone application, but also an adaptive web application that allows our users to access our service using any device with a web browser. Lastly, our chosen framework allows us to create our mobile application in both an Android and an iOS environment with minimal code changes. While our team was not able to complete this deliverable due to time and hardware constraints, we plan on continuing development on the project and including an iOS build in future releases.
VIII. Project Management
During the planning phase of the project, our team created a blueprint for the application in terms of user design and user experience by using Figma, a vector graphics editor and prototyping tool. We discussed various design problems, created solutions, and developed a unique look and feel for our application.
We used this initial product design to list out milestones of the project that are significant for the minimum viable project. After organizing our work and dividing it up, we began active development in early January 2021. Our development schedule was organized into a series of sprints with bi-weekly and weekly goals in mind.
We assigned a variety of roles to each team member during the development process. As we only have five members on the team, we each had to take on a variety of roles with different skill sets required. All team members actively participated in the software development lifecycle, but certain team members specialized in areas of the process.
Benjamin focused on setting up a Continuous Integration system to test our builds as we made changes, as well as implemented the Riverpod state management solution to ensure the quality and maintainability of our software. Daniel created the dynamic link system that allows a user to scan our QR codes from any camera app and be automatically redirected to our application. Vicki acted as lead mobile designer for the application, including implementing many of the user interfaces on the mobile application. Spencer was responsible for the web design mockups, as well as implementing a variety of screens on the mobile and web apps. Finally, Tucker served as the project manager for the duration of the project, organizing team meetings and managing tasks on the agile board, as well as development tasks including the user authentication system.
Due to the fact that we are developing a software application, we did not require any external funding for our application. We used free software to build our project and the cost will be based on maintaining our database and information within the application. We have not yet reached a scale where we need to pay for our database usage, and therefore did not need to acquire external funding. In the future, we might need to begin using a paid data plan if our storage requirements increase. In addition, in order to release on the Apple App Store, our team would need to acquire an Apple Developer License, which would cost $100.
Security protection is one aspect that posed the greatest risk in our application. For authentication, we ended up using Google since it is widely used and well vetted. Our databases are stored in the Firebase platform so they are protected by our development credentials and not stored in Attendio.
Overall, the project went smoothly due to a detailed plan and a great development team. We completed all the team and class milestones to achieve project viability. We might continue work on this project to finish up some additional features and polish it around the edges. Attendio was a great learning experience where we practiced agile app development and team collaboration.
IX. BUDGET
A. Google Firebase - Free
Although we don’t expect our app to have enough traffic to warrant fees from Firebase initially, we budgeted $100 for any fees we might encounter should our app’s traffic grow large enough. With our small amount of testing and basic use, we did not come anywhere near the limits of the Firebase free tier.
B. Github - Free
To host our code and enable collaboration between group members, we used Github. By using Github, we could work on different parts of the same project at the same time while keeping the repository private if we need to.
C. Flutter - Free
We based our codebase on the Flutter SDK by Google as this allowed us to develop our app both at a low cost in terms of money and time. The Flutter SDK also has native support for Material design guidelines as they’re both by Google, allowing us to easily create a desirable UI/UX. With Flutter, we will also be able to easily bring the app to the desktop if we decide to go in that direction.
D. Android Studio/Visual Studio Code - Free
Our group used both Android Studio and Visual Studio Code to develop our app. This means each developer could choose which IDE they use as long as they could work on the app without affecting the codebase itself.
X. REFERENCES
Appendix A: Business Model Canvas
This model helps give our team and project a direction in respect to our target audience and define its purpose.
Appendix B: Gantt Chart
This visual shows the timeline of tasks we completed throughout the semester as we built our project.
Appendix C: Screenshots
These screenshots are from various pages within the mobile and web app.
Piotr Uhruski, Marek Grochowski, Robert Schaefer
Institute of Computer Science
Jagiellonian University
Krakow
{grochowski,schaefer,uhruski}@ii.uj.edu.pl
Abstract
This paper presents a platform named Octopus that facilitates the building and execution of mobile agent based applications. It presents the key ideas of how agents embed the computational task and how they cooperate to find the solution. The Octopus is presented with its key mechanisms used to sustain and execute the agents. The cornerstones of the Octopus design are described in detail, giving readers an overview on how to implement a computation problem within the platform. Finally, actual application examples are shown with a short discussion of the Octopus based implementation properties.
Keywords: mobile agents, migration, computational tasks, local scheduling.
1 The Structure of Agent Computational System
Parallel computing systems based on distributed network resources have become among the most powerful tools in high-performance computing. To effectively utilize often heterogeneous network resources, a universal solution allowing for the easy design, implementation, deployment and, finally, control of the application is required. We propose an agent-based computing system with a two-layer architecture that clearly divides the required functionalities of such a system. The upper layer is the application, while the lower is a platform sustaining agents - the Octopus.
Multi Agent Application
This layer provides the means and tools to wrap the computational task into agents. The layer is built from agents, but the agents themselves are further decomposed into two sub-layers - a shell and an embedded task (see [10]). A task is the particular problem with the data required for computations. The task should have the ability to divide itself, but that is a natural property in parallel computing. The outer agent is a task container - it requires its execution environment to provide load information (including RAM and CPU utilization) to compute a local scheduling policy and autonomously decide whether to continue computing the internal task or to migrate in search of better resources. Agents require the execution environment to let them communicate with each other for cooperation.
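The shell/embedded-task decomposition can be sketched as follows; all type and method names here are illustrative assumptions, not the actual Octopus API.

```java
import java.util.List;

// Hypothetical sketch of the two sub-layers of an agent: an embedded task
// and the shell (task container) that decides whether to compute or migrate.
interface Task {
    boolean canSplit();        // tasks should be able to divide themselves
    List<Task> split();
    double compute();          // one unit of work; returns a partial result
}

class Shell {
    private final Task task;
    Shell(Task task) { this.task = task; }

    // Local scheduling policy sketch: compute here while the node has
    // capacity, otherwise signal that migration should be requested.
    boolean shouldMigrate(double localLoad, double threshold) {
        return localLoad > threshold;
    }

    double step() { return task.compute(); }
}
```

The point of the split is that the shell owns all environment interaction (load queries, migration decisions) while the task stays problem-specific.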
Agents Execution Platform
The application agents require an environment to execute them. The specific requirements for this layer originated from CAE computing and mainly included minimum runtime overhead with maximum performance. The platform provides the minimal set of functionalities clearly required by large computational problems, but these have to perform well. Thus we did not aim for a universal solution that would fit any agent-based application. The platform has to scale well in terms of the utilized machines, and we required the ability to define virtual topologies. This feature lets us reflect the physical network characteristics in the virtual topology and thus influence the computation. The presented requirements for the execution platform made us create our own, specialized agent runtime environment instead of using available solutions [1, 2]. We would, however, like to reuse the well-known standards for such environments (such as FIPA [3]).
Introducing these layers gave a clear separation of functionalities between the execution platform and the application layer. First, the execution platform - named Octopus - became problem-independent. It may be configured (e.g. by using the virtual topology parameters) to support a certain application with particular runtime characteristics, but the platform itself provides only basic mechanisms to run the application agents. Secondly, the agent application layer supports ready-made Smart Solid agents (see [9]) that implement the required environment-integration functionalities. Therefore the actual application implementation is done solely in the upper layer, which requires the embedded tasks to be partitionable, meaning the agents actually divide the overall computational task into smaller sub-processes. That follows the basic processing models introduced by Lamport and Charrone [4]. This means that our architecture supports the execution of any kind of parallel processes.
Besides characterizing the overall system architecture, this design influences the actual application design and its runtime properties. How a single task maps to agents depends on the particular problem. In this paper we present both one-to-one and one-agent-to-many-tasks mappings. This allows the application designer to change the executed entity's size and thus influence the entities' grain. On the other hand, thanks to dynamic task partitioning and the dynamic scheduling executed by agents (see [10]), the application's execution is adapted to the changing environment properties. The Smart Solid agents have autonomous rights and supporting functionalities to change grain size (glue or split internal tasks) or to reschedule while computing. That makes the platform once more problem-independent, as it is free to take autonomous decisions outside the problem's scope.
2 Definitions
We start with an analytical model description of the Octopus and the application model as introduced in [10].
Environment
In our model, the execution environment for computing applications is modeled as a quadruple \( (N, B_H, \text{perf}, \text{conn}) \) where:
\( N = \{P_1, \ldots, P_n\} \), where \( P_i \) is a Virtual Computation Node (VCN). Each VCN may contain a number of agents.
\( B_H = \{N_1, \ldots, N_n\} \) is the connection topology, where \( N_i \subset N \) is the immediate neighborhood of \( P_i \) (including \( P_i \) itself).
\( \text{perf} = \{\text{perf}_1, \ldots, \text{perf}_n\} \), \( \text{perf}_i : \mathbb{R}_+ \rightarrow \mathbb{R}_+ \), is a family of functions which describes the relative performance of each VCN with respect to the total memory requirement of all agents allocated on that VCN.
\( \text{conn} : N \times N \rightarrow \mathbb{R}_+ \) is a function which describes the up-to-date connection speed between two VCNs.
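The quadruple above can be sketched as a plain data structure; the class and field names below are illustrative assumptions, not part of the Octopus implementation.

```java
import java.util.*;
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of the (N, B_H, perf, conn) environment model.
class Environment {
    final List<Integer> nodes = new ArrayList<>();                    // N: VCN identifiers
    final Map<Integer, Set<Integer>> neighborhood = new HashMap<>();  // B_H: N_i for each P_i
    final Map<Integer, DoubleUnaryOperator> perf = new HashMap<>();   // perf_i(total memory) -> performance
    final Map<List<Integer>, Double> conn = new HashMap<>();          // conn(P_i, P_j) -> speed

    void addNode(int id, DoubleUnaryOperator perfFn) {
        nodes.add(id);
        neighborhood.computeIfAbsent(id, k -> new HashSet<>()).add(id); // N_i includes P_i itself
        perf.put(id, perfFn);
    }

    void connect(int a, int b, double speed) {                        // symmetric link
        neighborhood.get(a).add(b);
        neighborhood.get(b).add(a);
        conn.put(List.of(a, b), speed);
        conn.put(List.of(b, a), speed);
    }
}
```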
Application
The application agent is denoted by \( A_i \), where \( i \) stands for an unambiguous agent identifier. An agent is independent in terms of its structure and life-cycle. Firstly, the Octopus does not require agents to have any particular internal structure, so an agent needs to carry all required data internally or know how to obtain it via communication. Secondly, the Octopus does not require the agent to follow any execution pattern in its life-cycle. After the agent finishes its execution, the Octopus delivers it to its owner. One exception to this rule is the right of the Octopus to protect its resources - if there are too many agents which do not want to migrate from the host, the Octopus may block their execution. Finally, the application is a set of agents, and a computing application state is a triple \( (A_t, G_t, Sch_t), t \in [0, +\infty) \) where:
\( A_t \) is the set of application agents active at time \( t \), \( A_t = \{A_{\xi_i}\}_{\xi_i \in I_t} \), where \( I_t \) is the set of indices of agents active at time \( t \).
\( G_t \) is the tree representing the agents' partitioning at time \( t \). All agents constitute the set of nodes \( \bigcup_{\xi \in \Theta} A_\xi \), \( \Theta = \bigcup_{j=0}^{t} I_j \), while the edges of \( G_t \) show the partitioning history; \( A_1 \) is the root agent. All information on how to rebuild \( G_t \) is spread among the agents such that each of them knows only its neighbors in the tree.
\( \{Sch_t\}_{t \in [0, +\infty)} \) is the family of functions \( Sch_t : A_t \rightarrow N \) giving the current schedule of application agents among the platform servers. The function is represented by the sets \( \omega_j \) of indices of agents allocated on each \( P_j \in N \). Each \( \omega_j \) is locally stored and managed by \( P_j \).
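A minimal sketch of how the schedule \( Sch_t \), represented by per-node sets \( \omega_j \), could look as a data structure; the class and method names are hypothetical.

```java
import java.util.*;

// Hypothetical sketch of the schedule Sch_t: each node P_j stores the set
// omega_j of indices of the agents currently allocated on it.
class Schedule {
    private final Map<Integer, Set<Integer>> omega = new HashMap<>();  // node -> agent indices

    // Allocating an agent to a node removes it from any previous node,
    // so each agent is scheduled on at most one node at a time.
    void allocate(int agent, int node) {
        omega.values().forEach(s -> s.remove(agent));
        omega.computeIfAbsent(node, k -> new HashSet<>()).add(agent);
    }

    Set<Integer> agentsOn(int node) {
        return omega.getOrDefault(node, Set.of());
    }
}
```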
3 Octopus Key Tasks
The Octopus serves as a low-level execution mechanism; it does not implement any application-related logic. It provides the application with a simple and robust execution model and leaves to the application the decision on how to implement more complex tasks like scheduling or task partitioning. The following sections describe the low-level Octopus tasks in more detail.
Agent Execution
The key element of the Octopus is the ability to execute agents. Each agent gets its own execution space - an execution context - through which the agent examines the environment, communicates with other entities and requests certain actions.
Communication Means
Although agents act as stand-alone units and their execution is independent, they may relate to each other. To support such relations, the Octopus handles communication between agents and their owners. For the Octopus it is enough to provide asynchronous message communication. It is up to the agent’s designer to build a communication schema, which would simulate both synchronous and asynchronous communication schemas if required.
Environment Information
An agent has a set of goals, one of them being to complete the internal task in the shortest possible time. To do that, the agent needs to search for the best available resources in terms of available memory and computational power. This information is gathered and delivered by the VCN running the agent. The VCN delivers not only local information, but also details of distant machines - most relevant for \( P_i \) is the information concerning \( P_j \in N_i \).
Virtual Network Topology
The agents perceive the execution environment as a virtual machine with VCNs interconnected in a graph structure. It is the role of the Octopus to build and sustain this topology.
Agents Migration
To complete their goals, agents require the ability to move across VCNs to find better resources or to locate other agents. Again, however, the migration is requested by the agent itself and is performed by the VCN to assure the agent's integrity and completeness - the Octopus also handles communication between actively migrating agents.
Agent’s Construction Kit
The final Octopus goal is to speed up and ease the development of new applications. For that purpose, the Octopus features an agent skeleton implementation to be used by the developer. In addition, it contains various utility classes that may be used when running the application - such as an application log parser used to analyze logs to get performance numbers.
4 Design Details
The Octopus was built from the ground up using object-oriented techniques and best practices. Its internal structure consists of blocks separated by interfaces which implement the various functionalities. Each such block provides a particular set of services for executed agents by implementing the previously presented tasks. Because each module interacts directly with agents or indirectly supports other modules, we named these blocks policies. A policy is defined as a contract between the Octopus platform and an agent, which specifies the means to achieve certain agent goals. The Octopus platform's functionalities are defined and implemented by the Execution, Information, Communication, Migration and Internal policies. The last one groups all Octopus functionalities which are not exposed directly or indirectly to any agent; it contains all functionalities required to deal with internal issues. All Octopus tasks may be mapped to one of the presented policies. The following sections provide details of each policy, thus giving an overall design overview.
4.1 Execution Policy
The main task of the Octopus is to execute agents. For each agent the Octopus creates a container - a sandbox which guarantees that agents are executed independently and do not influence each other. All agent interaction with the environment is carried through the container’s interfaces. The Octopus defines the agent’s life-cycle, which is then used by the container to control the agent at runtime. The life-cycle guarantees that when the agent gets migrated, it will not execute from the beginning, but from the point where it got migrated. This is defined by the execution stages concept - an agent may slice its execution into as many stages as it wants, and after migration, the Octopus resumes the agent’s execution from the last completed stage. The agent may also state that its execution has finished and needs to be delivered to its owner. This is also handled by the execution context. The execution policy implements the agent’s execution Octopus task.
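A minimal sketch of the execution-stages concept, assuming an agent that records its last completed stage so a migrated copy resumes rather than restarts; the names are illustrative.

```java
import java.util.List;

// Hypothetical sketch of the execution-stages concept from the Execution policy.
class StagedAgent {
    private final List<Runnable> stages;
    private int lastCompleted;            // checkpoint carried along on migration

    StagedAgent(List<Runnable> stages) { this(stages, -1); }

    // A migrated agent is reconstructed with the checkpoint it arrived with.
    StagedAgent(List<Runnable> stages, int lastCompleted) {
        this.stages = stages;
        this.lastCompleted = lastCompleted;
    }

    // Execute the stages after the last completed one; migration can only
    // interrupt the agent between stages, never in the middle of one.
    void run() {
        for (int i = lastCompleted + 1; i < stages.size(); i++) {
            stages.get(i).run();
            lastCompleted = i;
        }
    }

    int lastCompletedStage() { return lastCompleted; }
}
```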
4.2 Information Policy
Agents require a certain amount of information describing the current environment with its state. The Octopus does not define how the agent interprets this data; it is responsible for delivering it. The information is delivered through the agent’s context, but it is gathered by a separate Octopus block. Therefore the Octopus is solely responsible for data gathering, analyzing and delivering. This separates the agent from the physical runtime environment (machine, operating system and network) configuration and gives the Octopus the advantage of being able to execute agents in a heterogeneous environment. The current implementation delivers all data required by the diffusion based scheduling implemented by the Smart Solid agents (see [10]), namely local and neighborhood host load in terms of active agent amount and virtual topology path information with the cost of communication to a given VCN. Additional information sources may be integrated with the Octopus and delivered transparently to the agent through its context.
4.3 Communication Policy
Agents execute independently, but for collaborative work they need a means of communication to pass messages to each other. It is this communication that actually lets agents cooperate. The Octopus does not define how and when messages should be exchanged between agents; rather it is responsible for finding the destination of a properly addressed message and delivering it. As stated when defining the Octopus tasks, the communication is based on asynchronous message queues. Each agent's context has incoming and outgoing message priority queues. When creating a message, the agent is responsible for setting its unique sender and receiver identification tokens and the message data, which can be any type of data serializable into a stream. In addition, the message priority may be set. The agent creates a message and puts it into its outgoing queue. Queues are asynchronous, so this operation is non-blocking and the agent may continue its execution. When receiving a message, the agent may actively check the incoming message queue (non-blocking check and continue processing) or yield its execution until a new message is available (blocking message check). This gives the agent developer the possibility to implement any type of messaging schema required - synchronous or asynchronous. To properly route the messages, the Octopus uses the virtual topology built up when establishing connections between VCNs. This is a very broad topic and is still under active development. The current implementation lets agents communicate with their owner or with agents residing on the same VCN. The Octopus also synchronizes incoming messages and keeps the queue complete despite the agent's ability to migrate. This is achieved by integrating the communication with the migration policy and making message queue transfer a part of the migration procedure.
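The queue mechanics described above can be sketched with `java.util.concurrent` primitives; the class names are illustrative, not the actual Octopus API.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical sketch of the per-agent asynchronous message priority queues.
class Message implements Comparable<Message> {
    final String sender, receiver;  // unique identification tokens
    final int priority;             // lower value = delivered first
    final byte[] payload;           // any data serializable into a stream

    Message(String sender, String receiver, int priority, byte[] payload) {
        this.sender = sender; this.receiver = receiver;
        this.priority = priority; this.payload = payload;
    }
    public int compareTo(Message o) { return Integer.compare(priority, o.priority); }
}

class AgentContext {
    final PriorityBlockingQueue<Message> incoming = new PriorityBlockingQueue<>();
    final PriorityBlockingQueue<Message> outgoing = new PriorityBlockingQueue<>();

    void send(Message m) { outgoing.put(m); }              // non-blocking put
    Message pollIncoming() { return incoming.poll(); }     // non-blocking check
    Message awaitIncoming() throws InterruptedException {  // blocking check
        return incoming.take();
    }
}
```

With both a `poll` and a `take` variant, an agent developer can build either asynchronous or synchronous messaging schemas on top, as the text describes.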
4.4 Migration Policy
This is the key agent ability, which allows an application (being a set of agents) to execute more effectively - agents may implement local scheduling algorithms based on neighborhood analysis, which may lead an agent to search for and execute in a less loaded environment, thus making the whole application more effective. The Octopus platform's role is to provide a transaction-like procedure for migration. It is based on the two-phase commit protocol with the following steps:
1. The agent examines the neighborhood and internally elects a destination host with a set of desired destination host parameters (for example: maximum acceptable machine load). Migration will succeed only when the distant VCN's parameters conform to these numbers.
2. The agent invokes the migration on its execution context passing the desired destination VCN. The control is passed from the agent to its context, which prepares the agent for migration by serializing it. The migration policy starts the two phase commit protocol by querying the distant machine to lock a place for the new agent. When the distant VCN confirms the lock, its load is immediately increased with the new agent - this allows the migration processes to execute simultaneously from different nodes.
3. The destination machine is asked to confirm the agreed load numbers and the agent is transferred to the remote node. The distant machine may be doing other migrations at that time, but since it confirmed the required load parameters, the migrated agent should find an acceptable environment when the migration is completed.
4. All messages from the agent’s incoming queue are transferred to the distant node where the new agent’s context is created with new message queues.
5. The agent’s owner is notified that the agent has moved - this assures the communication policy will be able to locate the migrated agent.
After these steps the migration is complete. If any of them fails, migration is rolled back, and the control is passed back to the agent with the appropriate error code. If migration succeeds, the agent starts execution on a distant node, but thanks to the execution stages concept, it continues its execution rather than starting from the beginning.
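The protocol above can be sketched as follows. The load accounting mirrors steps 1-2 (the destination's load is increased at lock time so concurrent migrations see the reservation) and the failure path rolls the lock back; the actual agent transfer is stubbed out. All names are hypothetical.

```java
// Hypothetical sketch of the two-phase-commit migration step.
class Vcn {
    int load;                 // number of agents currently accounted for
    final int maxLoad;
    Vcn(int maxLoad) { this.maxLoad = maxLoad; }

    // Phase 1: lock a place; load is increased immediately so that
    // simultaneous migrations from other nodes see the reservation.
    boolean tryLock(int maxAcceptableLoad) {
        if (load >= maxLoad || load > maxAcceptableLoad) return false;
        load++;
        return true;
    }
    void rollback() { load--; }   // release the reservation on failure
}

class Migration {
    // Returns true if the agent moved; on any failure the lock is rolled back
    // and control returns to the agent with an error.
    static boolean migrate(Vcn source, Vcn dest, int maxAcceptableLoad, byte[] serializedAgent) {
        if (!dest.tryLock(maxAcceptableLoad)) return false;   // phase 1: lock
        boolean transferred = serializedAgent != null;        // phase 2: transfer agent + queues (stub)
        if (!transferred) { dest.rollback(); return false; }
        source.load--;                                        // agent has left the source node
        return true;
    }
}
```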
4.5 Internal Policy
The Octopus has a set of internal functionalities that are not exposed to the agents but are used by the platform to control agent execution. The policy includes agent serialization and manual agent migration. Serialization may be used by a VCN when it is too loaded and the system's performance is endangered. In such a case, an agent may get elected and serialized to the hard disk. As soon as the system resources are freed, its execution may be resumed. Manual agent migration is the system administrator's tool to move a particular agent to a manually selected machine. This is similar to agent-triggered migration, only the destination is imposed on the agent. Other functionalities are foreseen to be implemented for this policy, with security (agent authentication and authorization) amongst others. This is a broad topic and requires further exploration.
5 Building Octopus-based Applications
From the very beginning, the Octopus development was driven by particular computational needs (see [9, 10]). Therefore it features a set of components that help building up an application which benefits from agent-based processing. This support is referred to as Agent SDK. It features the Octopus platform client component, abstract agent classes and other utilities not directly related to the agent or application.
Platform’s Client
The Octopus platform runs agents, but these first need to be injected into one of the platform's nodes - VCNs. This agent injector is referred to as the requester, and its functionalities cover the connection to the Octopus, agent delivery and communication. First, the requester connects to a given VCN and establishes a communication channel with it. The communication between the requester and the VCN is organized in the same way as within the Octopus - it is based on asynchronous message passing. After the connection is established, the requester may start delivering agents to the VCN it is connected to. When an agent has finished, it is delivered back to the requester by the Octopus. Finally, agents may communicate back to the requester, and the requester may send messages to agents.
Thanks to these features, different application execution schemas are possible depending on the type of problem domain. The original problem is usually assembled by the requester, which then
feeds the data into agents, which start computations. For example, if the application requires processing of large amounts of data but the task may not be partitioned well (see [10] for such an example), it is better to create lightweight agents which, after injection into the Octopus, migrate to find the best execution environment. After it is found, the agents send their requester a message to get the actual computation data, which is then also delivered back in the form of a message. This guarantees fast agent scheduling. On the other hand, some problem domains allow the task to divide extremely well (see [9] and the container concept). Applying the previously described procedure would result in tremendous network communication overhead, so it is better to create the agent initially with all its data encapsulated inside.
6 Technical Details
As mentioned before, the Octopus has been designed and implemented using object-oriented techniques. Technically, the Octopus is implemented on the Sun Java platform, with communication realized by CORBA services. Java is platform-independent, since compiled Java classes (byte code) are not executed directly by the host OS but by the Java Virtual Machine (JVM), which is customized to the operating system. Java also gives a fairly easy and adaptable mechanism for transferring any objects over the network in a binary form; this is achieved by its object serialization support. We have chosen CORBA for its definition of remote services and support for service discovery (Naming Service). Using these patterns made the Octopus code well separated from the actual communication schema and allows us to change CORBA into another similar communication framework (for example, RMI). On the other hand, CORBA is far more complicated than lower-level protocols (such as RMI), which could impose significant overhead on the implemented applications. This was considered in the design phase, and therefore we separated the communication layer in such a way that it would be feasible to substitute CORBA with another implementation. Please see the examples section for a discussion of communication effectiveness.
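The object-serialization mechanism the Octopus relies on for shipping agents in binary form can be illustrated with a plain round trip through Java's built-in object streams; the helper class is just a demonstration.

```java
import java.io.*;

// Minimal sketch of the serialization round trip used to transfer objects
// (such as agents and their message queues) over the network.
class SerDemo {
    static byte[] serialize(Serializable obj) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);                    // object -> byte stream
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    static Object deserialize(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();                  // byte stream -> object
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```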
7 Examples And Tests
The Octopus has been used as a base platform to implement applications for different types of problems. Up to now, we have successfully realized applications in the CAD/CAE domain (the original field of research for our team) and in genetic-based computations. Both require parallel computations upfront, and it would be best if the solution were realizable on a network of PCs, since this kind of environment is the most widely available. To successfully build such applications, we require a virtual environment which does not restrict the application's design and has small overhead on top of the application's computation time and memory requirements. In the application design we additionally included different scheduling and communication schemas. The environment should not restrict that, since different applications may have different requirements resulting from their problem domains - the amount of data needed for computation and the possible granularity level, amongst others. Additionally, non-critical features included the ease of application creation. The following subsections present the example applications with a discussion of Octopus-related results.
7.1 Linear Equations Solver (SBS PCG)
The solver was implemented as a part of the CAD/CAE process described in [8]. The SBS PCG solver divides the initial problem matrix into a set of sub-matrices that may be computed separately (see [6]). The computation algorithm first requires the initial matrices to be inverted, and then a central algorithm follows an iterative approach to find the final solution vector. The Octopus has been used to encapsulate the sub-matrix tasks into agents, which implemented a simple local scheduling strategy - based on the Octopus Information policy features available on each VCN - to distribute themselves in the network. After distribution had been accomplished, the agents started their computation to invert the sub-matrices. Then, the Octopus communication was used to deliver partial results to the agents' owner, which started iterations to get the final result. Each iteration requires communication with the sub-tasks, and the Octopus message-based communication schema was used to implement that. The Octopus's fundamental functionalities were
used for this application, including the agent’s construction kit, migration, communication and information policies. The local scheduling implemented by SBS agents had enough information from the Octopus to successfully distribute agents while the communication was efficient enough not to slow down the application significantly.
7.2 Mesh Generator
Regular mesh generation (see [5]) was a second application in the field of CAD/CAE. The details of the application are presented in [10] with a detailed discussion of the results. The Mesh application followed the same design schema as the SBS PCG one; however, the agent's internal design is more sophisticated, including a diffusion-based local scheduling paradigm and the Smart Solid concept.
The mesh agents were produced by the requester and migrated within the network to find a suitable environment. After local results were complete, the Octopus's communication was used to return them to the central application. The experiment showed that the Octopus puts small overhead on the application's total computation time. The mesh agents were distributed instantly, with no delay coming from the platform itself. Figure 1 shows the number of actively computing agents over time for different runs of the Mesh application. Each function shows the number of active agents over time for a different input data set run - '8 bis' or '16 sekw' are example data set names. Please note that the number of agents increases mostly at the beginning - this is because Mesh agents hold all information required to start computations. They migrate and instantly start computing. There are also no synchronization points imposed on the application. Mesh tasks are independent, and so they were executed simultaneously by the Octopus - we observed small tasks being finished while others were still migrating in the network.
7.3 Hierarchic Genetic Strategy (HGS)
Genetic computations present requirements far different from the CAD/CAE ones. Mainly, the problem's granularity level is much higher, thus raising the communication volume tremendously. In our case, the HGS genetic algorithm (see [7]) was applied to optimize a function value in a given domain. The genetic populations may be evolved simultaneously, so they are a perfect candidate for agent-based processing. However, the number of populations prohibited us from using one agent per population. We introduced a container concept, where a single agent contains multiple populations and evolves them. Such a container was wrapped with the Smart Solid agent implementing diffusion-based local scheduling to get a fully functional, independent computing unit.
The HGS application's dynamic is also different from the SBS/Mesh ones. Here, the application starts with a single agent, which evolves the internal computations until the container fills up. When this happens, the agent's populations are split into two sets. One set is sent back to the owner, and the second is further evaluated. When the owner receives the set, it creates a new agent and puts it into the Octopus.
Figure 2: Dynamic of the HGS application run for a particular function.
This behavior is described in detail in [9], and its impact on the application's dynamic is illustrated by Figure 2. The number of agents increases over time as they are created during computation. This effect is different from the Mesh application's, as observed in Figure 1.
In addition, we found that to speed up the computation, we could introduce a cut-off factor for populations which do not seem to give any promising results. The goal was to achieve this without introducing any centralized communication requirement, which would make the system unusable. The solution was based on a second type of agent which traversed the VCN network and communicated locally with agents on each node, asking them to kill certain populations. This required the Octopus to introduce agent interoperability. In conclusion, the following additional Octopus characteristics have been demonstrated in this experiment:
– The ability to sustain large numbers of agents - up to 400 agents on 50 machines were used in the HGS experiments.
– Effective communication that performed well under these circumstances and showed no significant delays in agent execution.
In addition, the HGS Octopus implementation was compared with a low-level Round Robin scheduling strategy (see [9]). The agent-based implementation running on the Octopus showed moderate performance losses with a small to medium number of agents (up to around 200), whereas with a larger number of agents, the HGS Octopus was faster.
8 Conclusions
We have presented a platform for creating and running multi-agent computing applications. The Octopus is a living project with a set of applications already implemented on it.
It is a stable runtime environment for independent agents. It delivers enough information to let agents integrate with the environment and utilize resources in a very efficient way. It is able to sustain varying numbers of agents - from 1 up to 400 actively computing agents on 50 machines.
The Octopus does not impose significant requirements on applications; it allows for the implementation of different design schemas - varying from large agents armed with all required information, to small, robust ones using extensive communication to achieve their goals. Beyond the presented examples, the Octopus is not limited to executing applications from such domains only. The current applications result from our team's research fields, but our solution may be used in a much broader set of domains, mainly thanks to its low complexity and the small number of limitations imposed on an application's structure.
References
An approach for managing semantic heterogeneity in Systems of Systems Engineering
Simon Foster, Alvaro Miyazawa, Jim Woodcock, Ana Cavalcanti
University of York, UK
firstname.lastname@york.ac.uk
John Fitzgerald
Newcastle University, UK
john.fitzgerald@ncl.ac.uk
Peter Gorm Larsen
Aarhus University, Denmark
pgl@eng.au.dk
Abstract—Semantic heterogeneity is a significant challenge to integration in Systems of Systems Engineering (SoSE) due to the large variety of languages, domains and tools which are used in their construction. In this paper we envision a strategy for managing this heterogeneity by decomposing domain specific languages into their “building block” theories which can be independently analysed, and used as a basis for linking with similar notations. This provides a systematic approach to building a tool-chain which integrates the different theories, methods and tools used in SoSE. Our approach has been piloted on the development of theories enabling machine-supported analysis of SysML models of SoSs. We conclude that the approach has further potential and identify lines of future research, notably in techniques for handling mixed discrete and continuous behaviour, timebands, mobility and model integration in SoSE.
Keywords – systems of systems, modelling, integration, unifying theories, tool-chain, theorem proving.
I. INTRODUCTION
Systems of Systems Engineering (SoSE) is a collection of techniques that support the development and maintenance of a potentially complex aggregate of independently owned and managed systems that are relied upon to provide an emergent service. The nature of the systems that form this aggregate varies in terms of the domain of application, level of independence, ownership, manageability, etc. This variance potentially leads to a plethora of seemingly incompatible models, methods and techniques, and the effective engineering of SoSs depends on their coordinated use.
These issues are observed at different levels of abstraction. For instance at a lower level, we observe different theories in play: integer and rational arithmetic, real and complex differential calculus, sequential and distributed computation, etc. On a different level, we observe different combinations of such theories being used to model systems. For instance, pure software systems are usually modelled by a combination of sequential and parallel computations, whilst the physical parts of cyber-physical systems are modelled in terms of systems of differential equations and more traditional computations.
At an even higher level, we observe variations in domain-specific languages (DSLs) used to represent both the theories and combinations of theories. For instance, programming languages such as C and Java have different syntaxes, and diagrammatic notations such as MATLAB Stateflow [1] and UML State Machines have some different elements and variations in their semantics. These variations at all levels of abstraction need to be managed in order to fully support the formal development of SoSs.
Moreover, SoS engineering in large projects is complicated by the use of different tools for engineering the different constituents. The different DSLs are supported by a collection of tools that variously enable an engineer to describe, refine and analyse a system model at different stages of development. This makes it difficult to co-ordinate the tools to provide evidence that an SoS deployment fulfils its requirements. Although we might seek to enforce the use of a single tool in a large SoS consortium, experience tells us that this is not possible, as different tools have unique contributions, and individual members will have experience in a particular tool-set that cannot be abandoned without significant cost.
It is therefore necessary to face the challenge of semantic heterogeneity. We present a vision for an integrated semantic framework for SoS engineering that we believe will enable semantically heterogeneous notations and tools to be unified and co-ordinated. Our approach is to look at the individual notations involved and perform a semantic decomposition, which involves separating out the individual theoretical ideas in an effort to see how a notation fits with other similar notations. We achieve this using Hoare and He’s Unifying Theories of Programming [2] semantic framework (UTP), which allows different theoretical aspects of a modelling language to be formally isolated, modelled and contrasted. By extension this means that SoS features, such as time, concurrency and mobility, can be considered as independent aspects that can then be composed to give a mathematical meaning to a system. Moreover, our semantic framework is mechanised, meaning that we can reason mechanically about a DSL to prove both soundness properties and model correctness properties.
In the remainder we present our contributions. In Section II we explain how an integrated tool-chain can be used to solve the problem of heterogeneity. In Section III we expand on this by introducing theory engineering, by which the constituent aspects of a DSL can be studied. In Section IV we exemplify our vision by analysing OMG’s system modelling language SysML in terms of its constituent theories. In Section V we give some future directions for research, particularly in the area of cyber-physical systems. Finally in Section VI we conclude.
II. TOOL INTEGRATION
In this section we introduce the idea of tool-chain integration through application of the UTP semantic framework, and discuss the study of different DSLs through theory decomposition. In essence we seek to factor out common aspects of a DSL in an effort to relate it to other, similar DSLs.
Creation of an integrated tool-chain for SoSE requires that we give a unified semantic account of the artifacts and results from the various tools. This is the essence of integration: that the tools can be co-ordinated to produce coherent and dependable analysis results and evidence. Achieving this requires that the underlying DSLs of the different tools can be unified by providing them with unambiguous and compatible mathematical meaning. Different analysis tools are based on different notations, for example, a model simulator may work at the level of a transition relation described using structural operational semantics (SOS) rules, whilst a program verifier may use an axiomatic Hoare calculus. Though distinguishable, these formalisms are related in that they provide a particular abstract view of the modelling world. If we are to co-ordinate the tools, we also need to formally link the different semantic models to likewise ensure their compatibility.
The approach taken by UTP is to define denotational models for the different languages, taking input from standard meta-models. A denotational model allows us to give a language a semantics by assigning to each language construct a mathematical object. For example, the operators of an imperative programming language can be described using relational calculus. This then allows the application of the laws and proof procedures of the relational calculus to program verification. Denotational semantics for modelling languages are often much more complicated, combining a wide variety of theoretical notions. Once a suitably expressive model has been fixed, it can be used as a means to prove correspondence between the semantic models so that the associated tool evidence can be properly composed. This idea is illustrated in Figure 1 for a tool-chain consisting of a model checker and simulator, which are both based on operational semantics (dynamic), and a refinement calculator and program verifier, which are both based on axiomatic semantics (static). Though these four components are independent, their basis in a unified semantics means they can be co-ordinated during system development.
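The relational reading of an imperative language mentioned above can be made concrete in a few lines. The following is our own minimal Python illustration, not from the paper: a program denotes a relation on states (here, a function from a state to the set of possible after-states), and sequential composition is relational composition.

```python
# Toy denotational semantics (illustrative only): a program is a
# relation on states, rendered as state -> list of after-states.

def assign(var, expr):
    """[[var := expr]]: a deterministic state transformer as a relation."""
    def rel(state):
        new = dict(state)
        new[var] = expr(state)
        return [new]  # exactly one after-state
    return rel

def seq(p, q):
    """[[p ; q]] = relational composition of the two denotations."""
    def rel(state):
        return [s2 for s1 in p(state) for s2 in q(s1)]
    return rel

# x := 1 ; y := x + 1
prog = seq(assign("x", lambda s: 1), assign("y", lambda s: s["x"] + 1))
```

Once programs are relations, the laws of the relational calculus (associativity of `;`, assignment commutation, and so on) become ordinary theorems about these objects, which is exactly what enables program verification in this style.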
Within the context of the COMPASS project [3] this approach has been successfully applied to the development of the Symphony tool platform [4].1 The Symphony tool provides syntax and type checking, interpretation/debugging, proof obligation generation, theorem proving, model checking, test automation and a connection to the Artisan Studio SysML tool where static fault analysis additionally is supplied. Symphony is based on an SoS modelling language called CML (The COMPASS Modelling Language), which combines a number of aspects required in SoS modelling such as discrete time, concurrency, processes, state and contracts [5]. A CML model consists of the following principle elements:
- **types**: such as numeric types, lists, sets, records, union types, and possible invariants;
- **functions**: map input types to output types, with possible pre- and post-conditions;
- **channels**: over which constituent systems can communicate messages;
- **processes**: model constituents, and in turn consist of:
- pre- and post-conditions;
- operations: acting on the state variables;
- actions: specify reactive behaviour (operations calls, message passing, timeout...).
The semantics of CML is being formally constructed in its denotational, operational and axiomatic flavours [3], and these various bases have been used to implement independently the simulator, model checker and theorem prover. Moreover, we have mechanised semantic models for CML in the Isabelle/HOL [6] interactive theorem prover. Isabelle/HOL is a proof assistant for Higher-Order Logic, a kind of functional programming language in which one can also state and (dis)prove logical properties. Isabelle brings together a wide variety of automated proof tools [7], such as first-order automated theorem proving in the sledgehammer tool, and counter-example generation in the nitpick tool. It therefore provides an excellent basis for proving theorems about individual models, for example discharging consistency proof obligations for CML. Perhaps more importantly though it also allows us to formalise soundness proofs about the underlying meta-models themselves, which is the subject of the next Section.
### III. Theory Engineering
Core to UTP is the idea of a **theory**: an isolated interesting problem domain that deserves independent study. A UTP theory consists of an **alphabet** describing observations that can be made, a **signature** consisting of constructors for theory objects, and **healthiness conditions** that define the conditions of theory membership. For example, consider a theory of reactive processes where behaviour is represented by event traces. The trace observations can be recorded by a variable \( tr : Event^* \) whose values are lists of events. If we consider the trace before and after an action has executed, we need two such variables: \( tr \) and \( tr' \). Then an obvious healthiness condition is that traces can only get longer, which can be formulated as \( tr \leq tr' \). Additional healthiness conditions can then be used to formulate other aspects of concurrency, like time, as required by a denotational model. The signature of this theory consists of the usual operators for building concurrent processes, such as parallel composition and message passing. From a UTP theory we can derive laws of programming and concurrency which then act as the basis for various semantic models.
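The healthiness condition \( tr \leq tr' \) can be rendered as an executable check. This is a toy Python illustration of ours, not part of the UTP mechanisation: an observation is healthy exactly when the before-trace is a prefix of the after-trace, so an action may only extend the trace.

```python
# Illustrative check of the reactive-process healthiness condition
# tr <= tr': traces can only get longer (prefix order on event lists).

def is_prefix(xs, ys):
    """True iff xs is a prefix of ys."""
    return len(xs) <= len(ys) and ys[:len(xs)] == xs

def healthy(tr, tr_after):
    """An observation (tr, tr') is healthy iff tr is a prefix of tr'."""
    return is_prefix(tr, tr_after)
```

In the UTP a condition like this is imposed as an idempotent function on predicates; the executable version above only checks individual observations, but it conveys the same intuition: any observation violating the prefix order is outside the theory.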
A variety of theoretical aspects have been formalised in UTP, including concurrency, discrete time, object-orientation, pointers and contracts. This is the theory layer of the semantic stack in Figure 1 which gives us the basis for performing semantic decomposition. When considering a particular language, we can link it to other languages by looking for
---
1See http://symphonytool.org/ for more information.
common factors. In Figure 3 we consider the theoretical aspects present in nine languages, in terms of seven theories. Though incomplete, it nevertheless shows that there is both commonality and differences between them. If these are considered in terms of UTP theories, we are given a well-founded way to account for semantic heterogeneity.
Distinct UTP theories can also be formally linked, for example through Galois connections [2]. A Galois connection consists of a pair of functions which together formalise the best approximations of an object of one theory in another. This allows parts of a model in one language to be approximated and reproduced in another language, thus providing points of linkage. In CML, for example, a Galois connection is used to link processes that model time to those that do not (untimed processes), enabling composition of processes that are heterogeneous with respect to time. Therefore conquering semantic heterogeneity reduces to theory engineering: construction and analysis of constituent theories, formation of links between them, and their application to solve practical problems, such as SoSE. The overall approach is illustrated in Figure 2.
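The timed/untimed link mentioned above can be sketched concretely. The following is our own simplified Python illustration of such a Galois-style pair, not the CML construction: abstraction forgets timestamps, and concretisation produces the weakest timed trace containing the same events (here, everything stamped at time 0).

```python
# Illustrative Galois-style pair between timed and untimed traces.
# A timed trace is a list of (event, time) pairs; an untimed trace is
# a plain list of events.

def untime(timed_trace):
    """Abstraction: forget when each event happened."""
    return [ev for (ev, _t) in timed_trace]

def retime(trace):
    """Concretisation: the weakest timed trace with these events."""
    return [(ev, 0) for ev in trace]
```

The round-trip law `untime(retime(t)) == t` holds by construction, which is the shape of guarantee a Galois connection provides when approximating a model from one theory inside another.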
To mechanically support theory engineering we have created our own semantic embedding of UTP in Isabelle called Isabelle/UTP [8]. We have used this to mechanise a number of key theories underlying CML processes, such as imperative programs with total correctness, and concurrency in reactive processes. We have then combined these theories to form a mechanised denotational model for CML, which is in turn used as the basis for the theorem prover component of Symphony. This mechanisation increases confidence in the semantic model’s soundness, in a similar way to how a pocket calculator can be used to verify a complicated calculation. Since we have a formal link between the different tools and underlying semantics we have an unbroken chain from engineering methodology to the underlying mathematics.
Moreover, from a practical standpoint, Isabelle provides a number of helpful features to aid in theory engineering. Isabelle is backed up by a large theory library, both bundled and in the associated Archive of Formal Proofs[2], to which theoreticians regularly contribute their mechanisation work. This gives the basis for importing existing theory mechanised by others into more expressive denotational models. For example, ordinary differential equations have been mechanised [9], and this can provide the basis for building a theory of continuous time for reasoning about hybrid and cyber-physical systems. This along with its powerful reasoning facilities and our implementation of UTP makes it an ideal environment in which to study semantic heterogeneity.
IV. THEORETICAL DECOMPOSITION OF SYSML
SysML is a graphical notation aimed at modelling systems and as such has been adopted as a base notation for the COMPASS project along with CML. SysML provides a number of diagrams that help construct a model in a manner akin to the weaving process found in the aspect-oriented programming paradigm. Four of these are structural diagrams: block definition, internal block, package and parametric diagrams [10]. Block diagrams support the definition of the blocks that form the model as well as their components and relations, internal block diagrams support the description of the internal structure of composite blocks, package diagrams represent the interdependencies between sets of elements of the model, and parametric diagrams allow the statement of constraints over the properties of the model.
SysML also provides four behavioural diagrams: use case, sequence, activity and state machine diagrams. These support the description of the behaviours of the system, often at different levels of abstraction. For instance, use-case diagrams are often used to model high-level interactions with the system, whilst state-machine diagrams model at a lower level of abstraction how the individual components of the system behave.
State-machine diagrams describe the system in terms of its configurations (states), and activity diagrams provide the means of describing workflows and an alternative perspective in the specification of the behaviours of systems. Sequence diagrams support the specification of scenarios, which describe particular ways in which the elements of the system can interact by means of message exchanges. Finally, requirement diagrams provide support for structuring requirements in terms of decomposition and derivation as well as traceability.
Whilst CML has a formal foundation based on the already mentioned UTP, SysML lacks a formal account beyond syntactic and basic consistency properties. This limitation has been tackled by an integration of SysML in the formal setting of
COMPASS based on the informal semantics described in [11] and [12]. We first identified the elements of the notation for which formal support in the form of UTP theories already existed. For instance, state machines describe a subset of reactive processes that involve communication, parallelism and data operations in very specific patterns. Flow ports and parametric diagrams, on the other hand, potentially require the availability of a continuous-time theory as they can specify physical aspects of the system that are often modelled by systems of differential equations. Table 5 summarises some of the differences between CML and SysML that are further discussed in this section.
In the particular case of COMPASS, it has been observed that the subset of SysML that can be formalised within the currently available theories is the subset that can be specified in CML if we add extra abstractions. For instance, basic SysML constructs (such as transitions) often specify behaviours that are not available as primitives, but can be specified using CML. For this reason, the semantics of SysML has been defined in terms of CML, which then provides the link to the more basic theories of discrete time, concurrent state, object orientation, designs and refinement.
As an example, the semantics of the state-machine diagram shown in Figure 4 is a CML process whose behaviour is described by a number of parallel actions: one for the state machine, and one for each state and transition. There are eight parallel actions: \( stm\_Buffer, s\_empty, s\_mid, s\_full, t\_empty\_mid, t\_mid\_empty, t\_mid\_full \) and \( t\_full\_mid \). The CML model of the state machine in Figure 4 is then used to model the behaviour of the block \( Buffer \) that contains that state machine, which in turn is used to specify the model of the overall system.
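The Buffer machine just described is small enough to execute directly. The sketch below is our own Python rendering of its transition structure for illustration, not the CML semantics (which, as the text explains, models the machine, each state and each transition as parallel actions); the event names `put` and `get` are our assumptions.

```python
# Toy executable version of the Buffer state machine with states
# empty/mid/full and the four transitions named in the text.

TRANSITIONS = {
    ("empty", "put"): "mid",    # t_empty_mid
    ("mid",   "get"): "empty",  # t_mid_empty
    ("mid",   "put"): "full",   # t_mid_full
    ("full",  "get"): "mid",    # t_full_mid
}

def run(events, state="empty"):
    """Fold events through the machine; unenabled events are refused."""
    for ev in events:
        nxt = TRANSITIONS.get((state, ev))
        if nxt is None:
            raise ValueError(f"event {ev!r} refused in state {state!r}")
        state = nxt
    return state
```

Refusing an unenabled event, rather than ignoring it, mirrors the process-algebraic reading in which a state machine constrains its environment by the communications it is willing to engage in.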
A particularly interesting point is related to the communication patterns in SysML and CML. Whilst in CML communication is strictly synchronous, in SysML it is predominantly asynchronous. As a consequence, the semantics of SysML must provide an account for asynchronous communication in terms of synchronous communications. This is achieved by the introduction of two CML communications for each SysML communication — one for sending a value and another for receiving a value — as well as a buffer that allows values sent through a communication channel to be queued.
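The translation can be pictured with a small sketch. This is our own illustrative Python model, not the CML encoding itself: each asynchronous SysML message becomes two synchronous interactions with an intermediate buffer, one rendezvous to enqueue the value and a later one to dequeue it.

```python
# Illustrative buffer process mediating asynchronous communication:
# sender and receiver each synchronise with the buffer, never with
# each other, so the send can complete long before the receive.

from collections import deque

class ChannelBuffer:
    """Queues values so sender and receiver need not synchronise."""

    def __init__(self):
        self._queue = deque()

    def send(self, value):
        """First synchronous interaction: rendezvous with the buffer."""
        self._queue.append(value)

    def receive(self):
        """Second synchronous interaction, possibly much later."""
        if not self._queue:
            raise RuntimeError("receive on empty channel")
        return self._queue.popleft()
```

The FIFO discipline preserves message order per channel, which is the property one would want the CML encoding to guarantee for each SysML communication it translates.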
Another aspect in which SysML and CML differ in spite of similar terminologies is the use of operations. In CML, a class operation only modifies data, while in SysML, a block operation may also contain reactive behaviour (e.g., sending an event). The consequence of this mismatch is that SysML operations are modelled as CML actions of a process (that models a block). However, the actions of a process in CML are encapsulated and therefore cannot be called by other processes. This differs from the semantics of operations in SysML, and for this reason the actions that model the SysML operations must be made accessible by the only means a CML process has for interaction with other process: communication.
Now, if we consider a scenario where the design of a SoS involves multiple notations, provided these notations have a common foundation and are compatible, we should be able to analyse the collective behaviour of the SoS. This is, however, rarely the case. As an example, we consider a model where the most abstract specification of the SoS is described in CML, and its design is specified in such a way that some of the constituent systems are modelled in SysML and others are modelled in MATLAB's Simulink/Stateflow\(^3\). Since [13] and [14] provide an account for a subset of discrete time Simulink/Stateflow in Circus [15], which is a state-rich process algebra that shares a similar semantic foundation with CML, it would be desirable to have these models integrated and analysed. For this to be possible, the different models need to be made compatible, that is, an operation call in SysML must be translated into an interaction that a Stateflow diagram or a CML process can understand. If such compatibility can be achieved, the models can be integrated (provided they are all based on discrete time) and their interactions analysed. In particular, it should be possible to compare the abstract specification of the SoS in CML and the actual design modelled in the different formalisms by means of a theory of refinement.
V. Future Directions
In this Section we sketch out some future directions for research to address semantic heterogeneity in SoS Engineering, including: mixed discrete and continuous behaviour, timebands, mobility and model integration.
Continuous Time. One of the most challenging areas for semantic heterogeneity in SoSE is the link between continuous time models of environmental and controlled phenomena, and the discrete time models of digital systems that interact with them. There are many possible SoSs in which one would wish to verify the presence or absence of an emergent behaviour that requires both cyber and physical models. For example, the integrator of a smart grid SoS may need to verify that feature interaction between existing independent power distribution systems will not lead to overloading of physical storage media such as batteries, or “brown outs” in the network. This requires
\[^3\]Simulink/Stateflow is a graphical notation that supports the specification of cyber-physical systems in terms of both discrete and continuous time constructs.
modelling of both the (discrete) computing systems providing control, as well as the physics of electrical storage and distribution. The integrated computing elements of cyber-physical systems are likely to be complex, given the need to handle faults originating in the independent constituent systems.
In order to illustrate the research questions posed by the need to handle discrete and continuous models, and the extent to which our theory decomposition and UTP-based approach can help, we consider an admittedly very simple control example inspired by a Simulink model [1]. In the scenario that we consider, the owner of a warehouse storing a temperature-sensitive product wishes to install a cooling system that ensures the warehouse never exceeds a particular temperature, but is also cost effective. We model this by the hybrid system shown in Figure 6. The physical temperature of the warehouse is modelled by a continuous time model, with two parameters: the ambient temperature and airflow from the fan. The fan controller is a discrete component, which is connected to a temperature model via the port temperature. The controller starts in an off state, but when the ambient temperature reported exceeds $t_{\text{max}}$, the fan is activated to ensure the temperature does not exceed $t_{\text{max}} + \epsilon$. This in turn contributes to an increase in airflow that lowers the temperature according to the continuous-time model. Once the temperature reaches $t_{\text{min}}$, the fan is turned off. From the design perspective we would like to understand two main variables of this system: (1) the minimal $\epsilon$ we can have and (2) the lowest sampling rate needed to respond sufficiently quickly. Both of these questions are important to the modelling and implementation of a correct discrete controller.
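The scenario can be explored numerically with a minimal sketch. The following is our own Python rendering, with every parameter value (ambient temperature, cooling coefficient, fan power, thresholds) invented for illustration: the continuous plant is Euler-integrated, and the discrete controller samples it once per step and switches the fan with hysteresis.

```python
# Toy co-simulation of the warehouse cooling scenario: a Newton-cooling
# plant driven by a sampled hysteresis controller. All parameters are
# illustrative assumptions, not taken from the paper.

def simulate(t0=22.0, ambient=30.0, k=0.1, fan_power=2.0,
             t_min=20.0, t_max=25.0, dt=0.1, steps=2000):
    temp, fan, trace = t0, False, []
    for _ in range(steps):
        # Discrete controller: sample the temperature port, switch the
        # fan with hysteresis between t_min and t_max.
        if temp >= t_max:
            fan = True
        elif temp <= t_min:
            fan = False
        # Continuous plant: Newton cooling plus forced airflow.
        dT = k * (ambient - temp) - (fan_power if fan else 0.0)
        temp += dT * dt
        trace.append(temp)
    return trace
```

Shrinking `dt` (the sampling period) shrinks the overshoot above `t_max`, which is exactly the trade-off between $\epsilon$ and the sampling rate posed as design question above; a formal treatment would replace this simulation with proofs over the linked discrete and continuous theories.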
**Co-modelling.** Modelling such systems requires that we consider both the discrete and continuous models. This task falls in the domain of co-modelling, where both aspects are considered in the engineering of the model [16]. Co-modelling includes techniques such as co-simulation, where a discrete and a continuous model are simulated in parallel, and hardware in the loop simulation (HiL), where the continuous model is swapped for a real hardware component. These techniques, though informative, only provide one part of the engineering toolbox. As in formal methods we would like access to other tools, such as model checking and theorem proving, which together can provide greater assurance of correctness.
By formally constructing these models and applying such tools we may be able to verify that for a particular $\epsilon$ and sampling rate, the temperature bounds are respected. Whilst there are specialised tools that target hybrid systems, the majority of existing research into analysis tools focuses on the discrete domain. If we are to apply methods from the discrete domain to hybrid systems, we need to understand the theories behind both domains and formalise how they are linked. One step towards this is the development of continuous-time models within the context of the UTP, and linking them to existing models of discrete-time such as Circus Time [17] and CML.
There are a number of interesting research directions in this area. In particular the development of timebands frameworks [18] could be applied to specify and analyse hybrid systems through considering time at different levels of granularity. A timeband is an abstraction of the real-time continuum to a particular time unit, which defines the minimal interval at which events can be distinguished. Within a particular timeband events are instantaneous, whilst in a finer band an event can be associated to an activity which may have duration. For example, the minute timeband can be used to distinguish the occurrence of events separated by minutes, whilst the second timeband can further distinguish events occurring in the same minute. Furthermore, an event in the minute timeband can be associated with an activity in the second timeband, which is itself decomposed into individual events. This can be applied to compare different levels of abstraction of the time domain, and can therefore be used to link discretisations of a continuous model.
For instance we could start from a continuous time model that corresponds to the constraint in Figure 6. Next we would specify and verify a set of timebands that model a controller guaranteeing the constraint using a high sampling rate. In subsequent steps we can refine these timebands to use lower and lower sampling rates without violating the required temperature bounds. This would ensure that the final discrete controller implementation guarantees the initial property.
Formalisation in the UTP can be given by two theories for discrete and continuous time. One possibility for a discrete time UTP theory, based on timebands, is to model the time continuum as a function $\text{cont} : \mathbb{N} \to \text{Event}$ and a variable $\text{quant} : \mathbb{R}$ to represent the time unit. A particular value of $\text{quant}$ then corresponds to a particular timeband. A theory of continuous time in contrast can have $\text{rcont} : \mathbb{R} \to \text{Event}$ and behaviours can be modelled via a system of ordinary differential equations. We can then formalise Galois connections between different granularities of time in a discrete time theory within the continuous domain. This in turn allows us to transfer results proved in the continuous domain to a discretisation. Moreover, the two theories could provide models for related calculi, such as the duration calculus, which allow the formulation of temporal properties over a continuum.
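The link between granularities can be sketched executably. This is our own illustrative Python version, not the UTP formalisation: an event log over a fine quantum (say, seconds) is coarsened into a band with a larger quantum (minutes), and refinement maps a coarse slot back to the earliest fine instant it covers.

```python
# Illustrative Galois-style maps between two time granularities, in the
# spirit of the timebands discussion. Quantum values are assumptions.

def coarsen(events, fine_q=1.0, coarse_q=60.0):
    """Map (time, event) pairs from the fine band to coarse-band slots."""
    ratio = coarse_q / fine_q
    return [(int(t // ratio), ev) for (t, ev) in events]

def refine(slot, fine_q=1.0, coarse_q=60.0):
    """Earliest fine-band instant inside a coarse-band slot."""
    return slot * (coarse_q / fine_q)
```

Coarsening loses information (events in the same minute become indistinguishable) while refinement picks a canonical representative; formalising this pair as a Galois connection is what would let results proved at one granularity be transferred to another, as the text proposes.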
**Mobility.** Aside from hybrid and cyber-physical models, a number of other aspects are important to SoSE. When considering evolution and reconfiguration of an SoS, we need to consider the modelling of mobility. Mobility, broadly speaking,
allows the representation of systems whose topology and architecture can change dynamically at runtime. For instance, in an emergency response system the communication system needs to reconfigure to take advantage of the best medium available. This may involve switching from radio, to cellular network, to satellite communication, depending on the circumstances. There are two main approaches to mobility, process mobility where individual processes can change location (e.g. Ambient Calculus [19]), and channel mobility where the processes are fixed, but the communicating topology can change (e.g. π-calculus [20]). Both models can be incorporated into the UTP, possibly involving suitable higher-order models [21]. When used in conjunction with timebands, we may also be able to specify situations in which the loss of service due to hardware swapping has no significant impact on the availability of services. This kind of combination again can be supported by UTP theory composition.
**Integration.** We can then go one step further and consider the question of integration of existing models. If we already have access to a pre-existing temperature model in a diagrammatic notation such as Modelica [22] and wish to specify the controller in Stateflow, instead of translating the Modelica model into a compatible Simulink model we may wish to integrate them directly. Two possible solutions to this problem exist. Firstly, we can formalise the primitive notions in the languages as UTP theories, and construct semantic models for Modelica and Stateflow linked by their common factors. Secondly, we can specify the semantics independently and then provide formal adaptors that mediate the interaction between the two models. These adaptors can be modelled in the UTP by Galois connections. In this case we would like to access the airflow port interface in the Modelica temperature model, and the temperature port in the Stateflow controller model, both of which must be approximated in the adjacent model.
VI. Conclusion
The need to deal with diversity is one of the distinguishing characteristics of SoS Engineering [23]. In our work, we focus on the need to address the semantic diversity of models that make up a SoS description. Our systematic approach exploits the UTP to provide “building block” theories that can be composed via Galois connections to provide reasoning systems able to compose results from previously separate models.
This is still at the level of a vision. However, first steps have been realised in CML and the benefits are to be seen in the integrated tool-chain that is now emerging from this work. We have demonstrated how the approach plays out in the integration of SysML with CML. The practical costs of providing a semantic integration are yet to be evaluated, but it is important to emphasise that they are “one-off” costs in the sense that, once a UTP-based integration has been achieved, it serves for any use of the constituent model types.
Our experience in applying theory engineering in COMPASS suggests that there is potential in this approach for providing a sound basis for reasoning about global emergent SoS behaviours from heterogeneous component models, and so we have identified promising next steps in research. The approach works at the foundations of constituent theories, but has a direct bearing on the feasibility of sound automated tool support that is able more fully to realise the value of model-based SoS Engineering.
Acknowledgment
This work is supported by EU FP7 Integrated Project “Comprehensive Modelling for Advanced Systems of Systems” (COMPASS, Grant Agreement 287829). For more information see http://www.compass-research.eu.
References
Proceedings of the Sixth OCL Workshop
OCL for (Meta-)Models
in Multiple Application Domains
(OCLApps 2006)
Improving the OCL Semantics Definition by Applying Dynamic Meta Modeling and Design Patterns
Juan Martín Chiaradía 1 Claudia Pons 1,2
1 LIFIA – Facultad de Informática, Universidad Nacional de La Plata
2 CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas)
La Plata, Buenos Aires, Argentina
{jmchiara,cpons}@sol.info.unlp.edu.ar
Abstract. OCL is a standard specification language, which will probably be supported by most software modeling tools in the near future. Hence, it is important for OCL to have a solid formal foundation for both its syntax and its semantics. Currently, OCL is formalized by metamodels expressed in MOF, complemented by well-formedness rules written in OCL itself. This recursive definition not only raises formal problems, but also hinders understanding of the language. Moreover, the OCL semantics metamodel exhibits quality weaknesses because certain object-oriented design rules (patterns) were not followed in its construction. The aim of the proposal presented in this article is to improve the definition of the OCL semantics metamodel by applying GoF patterns and the dynamic metamodeling technique. This proposal avoids circularity in the OCL definition and increases its extensibility, legibility and accuracy.
Keywords: OCL; formal semantics; dynamic meta modeling; design patterns.
1 Introduction
OCL (Object Constraint Language) is a formal specification language, accepted as a standard by the OMG (Object Management Group). OCL is a three-valued Kleene-Logic with equality that allows for specifying constraints on graphs of object instances whose structure is described by UML class diagrams, thus extending the expressive capacity of such notation. OCL is intended to be a practical formalism, addressing software developers who do not have a strong mathematical background. For that reason, OCL deliberately avoids mathematical notation; instead of symbols it uses a programming language oriented syntax and attempts to hide concepts such as logical quantifiers.
A fundamental requirement for any formalism is that it should have a rigorous definition of both its syntax and its semantics. The recently adopted OCL 2.0 specification provides a formal definition of the OCL semantics (see the official OCL semantics, Appendix A in [1]) following the denotational approach. This semantics is based on set theory with the notion of an object model, which is basically a formalization of UML class diagrams [4]. OCL expressions are interpreted by functions over environments, in the classical way [3]. Another approach to specifying the OCL semantics found in the literature consists of defining an embedding of OCL into other logics [5].
These two approaches have succeeded in describing the evaluation of OCL constraints in a formal, non-ambiguous manner, providing established technologies for abstract reasoning, automatic verification, execution, or simulation of models; however, they are not especially suited for explaining the semantics to people with a modest mathematical background.
Since the purpose of the semantics is to provide a common understanding of the formalism among its users, those mathematically rigorous definitions, which are not readable for a wide range of OCL users, are of little help. In recent years the academic community has accepted that the semantics should be given in formalisms OCL users are familiar with, for example metamodelling. Adhering to this trend, the OCL 2.0 specification provides a semantics definition based on MOF metamodels (see chapter 10 of [1]) complemented by well-formedness rules written in OCL itself. However, such a circular definition not only gives rise to formal problems [6], but also hinders understanding of the language. Additionally, taking into account the dynamic nature of semantics evaluation, it seems reasonable to think that dynamic metamodeling techniques, rather than static meta-classes, should be used to define the OCL semantics.
Working towards the solution to this problem, we propose to create a clearer and simpler alternative definition for the OCL semantics by giving a dynamic meta-model which is specified using a simple form of UML collaboration diagrams and applying well established design patterns [7].
The paper is organized as follows. In Section 2 we present a summary of the current OCL semantics [1] to provide an adequate context for reading this proposal. In Section 3, we propose a new definition for the OCL semantics, based on the Dynamic Meta Modelling technique (DMM) [8, 9]. In Section 4 we apply the Visitor pattern [7] to the semantics metamodels. Finally, in Section 5, we present conclusions and future work.
2 OCL Specification Overview
An OCL expression is defined in [1] as “an expression that can be evaluated in a given environment”. Additionally, the specification in [1] states that “evaluation of the expression yields a value”. Taking this into account, the ‘meaning’ (semantics) of an OCL expression can be defined as the value yielded by its evaluation in a given environment.
Figure 1 shows an overview of the UML based specification of the OCL syntax and semantics presented in [1].
Figure 1: Overview of packages in the UML-based semantics
Figure 2 shows the overview of the AbstractSyntax package, which defines the abstract syntax of OCL as a hierarchy of meta classes. On the other hand, Evaluations package defines the semantics of these expressions using also a hierarchy of meta classes where each one represents an evaluation of a particular kind of expression (see figure 3). The idea behind this representation is that each evaluation yields a result in a given environment, therefore, the semantics evaluation of an expression in a specific environment is given by associating each evaluation instance to an expression model (see figure 4).
Figure 2: AbstractSyntax package overview
Figure 3: Evaluations package overview
Figure 4: Semantics Evaluation of OCL expressions.
The Evaluations package replicates the hierarchy of the abstract syntax. We believe that such duplication should be avoided, since it hinders the legibility of the metamodel and the efficient development of automatic tools based on this semantics. We expand on this issue in the next section.
3 Semantics evaluation via Dynamic Meta Modeling
In this section we extend the Abstract Syntax structure by defining a new operation which will give semantic meaning to syntax expressions by associating them with their corresponding values.
The standard maths semantics of OCL ([1] Appendix A) states:
A context for evaluation is given by an environment \( \tau = (\sigma, \beta) \) consisting of a system state \( \sigma \) and a variable assignment \( \beta : \text{Var} \rightarrow I(t) \). A system state \( \sigma \) provides access to the set of currently existing objects, their attribute values, and association links between objects. A variable assignment \( \beta \) maps variable names to values.
Let \( \text{Env} \) be the set of environments \( \tau = (\sigma, \beta) \). The semantics of an OclExp is a function \( I[[e]] : \text{Env} \rightarrow I(t) \) which binds each syntactic expression \( e \) with a value in \( I(t) \).
Using this maths semantics as a foundation, we define an operation \( \text{evalOn()} \) which is expected to represent \( I[[e]] : \text{Env} \rightarrow I(t) \). \( \text{evalOn()} \) takes an EvalEnvironment as input parameter and returns a Value (i.e., \textbf{context} OclExpression \textbf{def}: evalOn(env:EvalEnvironment):Value). This operation acts like a bridge between the AbstractSyntax and Values packages, replacing the whole Evaluations package (see figure 5).
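The bridge can be rendered in code. The following Python sketch is illustrative only (class and method names such as `Environment` and `eval_on` are our stand-ins for `EvalEnvironment` and `evalOn()`, not the metamodel's actual signatures): an environment packages the state \( \sigma \) and the variable assignment \( \beta \), and every expression exposes one evaluation operation.

```python
class Environment:
    """An evaluation context tau = (sigma, beta): a system state plus a
    variable assignment mapping variable names to values."""

    def __init__(self, state=None, bindings=None):
        self.state = state if state is not None else {}   # sigma
        self.bindings = dict(bindings or {})              # beta : Var -> value

    def bind(self, name, value):
        """Return the extended assignment beta{v/value}; the receiver is
        unchanged, so evaluation stays free of side effects."""
        extended = dict(self.bindings)
        extended[name] = value
        return Environment(self.state, extended)

    def lookup(self, name):
        return self.bindings[name]


class OclExpression:
    """Abstract syntax node; eval_on plays the role of I[[e]] : Env -> I(t)."""

    def eval_on(self, env):
        raise NotImplementedError("each concrete expression supplies its semantics")
```

Non-destructive `bind` mirrors the notation \( \beta\{v/x\} \): every extension is a fresh assignment, not a mutation.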

In addition, we believe that the best way to understand the semantics evaluation is by showing the evaluation process itself. Using only class diagrams to reflect the semantics evaluation makes it hard to reveal this process, because of the static nature inherent to these diagrams. Consequently, to completely understand the whole process it is necessary to pay attention to the constraints established on these diagrams. In [1], these constraints are written in OCL, with two negative outcomes:
- The expressiveness and simplicity gained by using UML in the semantics metamodel, instead of the mathematical one, are lost, because one must be aware of the constraints to fully understand the semantics.
- The constraints are written in OCL, so that the semantics of OCL is defined in terms of OCL itself! Readers who do not understand OCL will not understand these constraints either (see for example the IterateExp semantics in [1]).
Consequently, with the aim of producing a simple, precise and clear explanation, in this section we use sequence diagrams to visualize the distinct steps throughout the semantics evaluation of expressions. Each meta-class belonging to the Domain package will be replaced with a sequence diagram which states the concrete semantics and evaluation process of the corresponding syntactic construction. This approach is known as Dynamic Meta Modelling (DMM) [8] [9], and has been used in the semantics specification of UML elements (such as State Machines and Collaborations), but its use in OCL specification has not been explored before.
Taking advantage of the classification proposed in [10], we categorize OCL expressions into: Atomic Expressions, Navigation Expressions, OCL Predefined Operations and Iterator Expressions, adding a “new” category to this list that we call OCL Language Expressions. In the following sections we show one or two particular expressions for each category.
3.1 Semantics of Atomic Expressions
This category consists of expressions such as LiteralExp and VariableExp that do not have any subexpressions.
Because of the simplicity of these two kinds of expressions, we only show our DMM semantics for IntegerLiteralExp (\(I[[\text{Integer}]] = \mathbb{Z}\) in figure 6) and VariableExp (\(I[[v]](\tau) = \beta(v)\) in figure 7). A deeper discussion of these semantics can be found in [1].
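In code, the two atomic cases are one-liners. This is an illustrative Python sketch (ours, not the paper's; the environment is modelled as a plain dict standing for \( \beta \)):

```python
class IntegerLiteralExp:
    """I[[n]](tau) = n: a literal denotes itself, whatever the environment."""

    def __init__(self, value):
        self.value = value

    def eval_on(self, env):
        return self.value


class VariableExp:
    """I[[v]](tau) = beta(v): a variable denotes its value in the assignment."""

    def __init__(self, name):
        self.name = name

    def eval_on(self, env):
        return env[self.name]   # env is the variable assignment beta
```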

3.2 Semantics of OCL Language Expressions
This category consists of expressions such as *LetExp* and *IfExp* that are predefined constructions of the language. With the aim of highlighting the benefits of this new “translation” of the maths semantics, we point out the differences between the standard UML-based semantics and the semantics of a *LetExp* proposed in this paper. The standard UML-based evaluation of a *LetExp* proposed in [1] is shown in figure 8. The diagram shows how the evaluation encapsulates the result value and the evaluation environment, although neither the evaluation method nor the structural constraints are specified in this diagram.
A simple analysis of the last diagram does not give us enough information about the semantics of a LetExp; it only tells us about the static structure of the elements involved in this evaluation. In order to fully understand the previous diagram, we must study its well-formedness rules, expressed in OCL itself.
The maths semantics of a LetExp, as expressed in Appendix A of [1], is shown in figure 9.
\[ I[[\text{let } v = e_1 \text{ in } e_2]](\tau) = I[[e_2]](\sigma, \beta\{v/I[[e_1]](\tau)\}) \]
**Figure 9:** Standard Math based semantic of a LetExp.
We can now translate this algorithm, under applicative-order reduction, into a sequence diagram (see figure 10). As a first step, we evaluate the init expression (\(I[[e_1]](\tau)\), signal 2) to obtain a new evaluation environment which extends the previous one with that value (\(\beta\{v/I[[e_1]](\tau)\}\), signals 3 and 4). Then we evaluate the in expression in the new environment; the value it returns is the result of the whole LetExp evaluation (\(I[[e_2]](\sigma, \beta\{v/I[[e_1]](\tau)\})\), signal 5).
**Figure 10:** Sequence diagram of a LetExp evaluation.
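The five signals amount to a short program. The following illustrative Python sketch (our names, not the metamodel's; \( \beta \) is a plain dict) mirrors the diagram: evaluate the init expression, extend the environment, then evaluate the in expression there.

```python
class IntegerLiteralExp:
    def __init__(self, value): self.value = value
    def eval_on(self, env): return self.value


class VariableExp:
    def __init__(self, name): self.name = name
    def eval_on(self, env): return env[self.name]


class LetExp:
    """let v = e1 in e2:
    I[[let v = e1 in e2]](tau) = I[[e2]](sigma, beta{v / I[[e1]](tau)})."""

    def __init__(self, var, init, body):
        self.var, self.init, self.body = var, init, body

    def eval_on(self, env):
        value = self.init.eval_on(env)       # signal 2: evaluate e1 in tau
        extended = dict(env)                 # signals 3-4: beta{v / value}
        extended[self.var] = value
        return self.body.eval_on(extended)   # signal 5: evaluate e2 in tau'
```

For instance, `let v = 3 in v` evaluates to 3 even if an outer binding for `v` exists, since the let-binding shadows it.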
3.3 Semantics of OCL Predefined Operations
As defined in [10], expressions from this category are instances of the metaclass `OperationCallExp` where the called operation is a predefined one, such as `+` and `=`.
Figure 11 shows the AS of these expressions. The semantics is given with a different scenario for each predefined operation. In particular, we show the scenario corresponding to the maths semantics of the equality operation in figures 12 and 13.
\[
I[[=]](v_1, v_2) = \begin{cases}
\text{true} & \text{if } v_1 = v_2, \text{ and } v_1 \neq \perp \text{ and } v_2 \neq \perp, \\
\perp & \text{if } v_1 = \perp \text{ or } v_2 = \perp, \\
\text{false} & \text{otherwise.}
\end{cases}
\]
Figure 11. AS of `OperationCallExp`
Figure 12. Maths semantic of equality operation
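The case analysis of figure 12 transcribes directly to code. In this illustrative Python sketch (ours), `UNDEFINED` stands in for OclUndefined (\( \perp \)):

```python
UNDEFINED = object()  # stand-in for OclUndefined (bottom)

def ocl_equals(v1, v2):
    """Strict three-valued equality: an undefined operand makes the
    comparison itself undefined; otherwise compare normally."""
    if v1 is UNDEFINED or v2 is UNDEFINED:
        return UNDEFINED           # bottom if either operand is bottom
    return v1 == v2                # true iff v1 = v2, false otherwise
```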
3.4 Semantics of Navigation Expressions
OCL expressions of this category are instances of PropertyCallExp and AssociationEndCallExp. Such expressions are evaluated by ‘navigating’ from the object to which the source expression evaluates, to the element in the object diagram that is referenced by the attribute or association end. We focus our example on the PropertyCallExp, named AttributeCallExp in the maths semantics stated in [1]; this semantics can be seen in figure 14.
\[
I_{\text{ATT}}(a : t_c \rightarrow t)(c) = \begin{cases}
\sigma_{\text{ATT}}(a)(c) & \text{if } c \in \sigma_{\text{CLASS}}(c), \\
\bot & \text{otherwise}
\end{cases}
\]
Figure 13. evalOn() over equality operation
Figure 14. Maths semantics of AttributeCallExp
Based on this semantics, and using the AS of PropertyCallExp (figure 15), we construct evalOn() as shown in figure 16. The getCurrentValueOf() operation is a predefined operation over ObjectValue which returns either the value attached to the attribute name or OclUndefined if no such value is found.
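The navigation step can be sketched as follows (illustrative Python, our names; `get_current_value_of` mirrors the paper's getCurrentValueOf(), and `UNDEFINED` stands in for OclUndefined):

```python
UNDEFINED = object()  # stand-in for OclUndefined (bottom)

class ObjectValue:
    """A value wrapping an object's attribute slots, i.e. the part of
    sigma_ATT concerning this object."""

    def __init__(self, slots):
        self.slots = dict(slots)

    def get_current_value_of(self, attribute):
        """The slot value if present, otherwise OclUndefined: the
        'bottom otherwise' case of the maths semantics in figure 14."""
        return self.slots.get(attribute, UNDEFINED)


class PropertyCallExp:
    """source.attribute: navigate from the source object to its slot."""

    def __init__(self, source, attribute):
        self.source, self.attribute = source, attribute

    def eval_on(self, env):
        obj = self.source.eval_on(env)                    # evaluate the source first
        return obj.get_current_value_of(self.attribute)   # then read the attribute
```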


### 3.5 Semantics of Iterator Expressions
Iterator expressions are the predefined operations over any Collection in OCL: select(), reject(), forAll(), iterate(), exists(), collect() or isUnique(). Since all these expressions can be expressed as macros based on `iterate()`, it suffices to refer for their semantics to the semantics of `iterate`.
We highlight the differences between the standard UML-based semantics ([1]) and our specific semantics of an `IterateExp`. The semantics evaluation of an `IterateExp` as expressed in [1] is shown in figure 17. Once again we face the problem that such a static diagram does not convey enough information about the semantics evaluation process, and we have to appeal to the well-formedness rules established on this diagram [1]; without these constraints we would be unable to completely understand the semantic process of an `IterateExp`.

**Figure 17**: Standard UML based semantic evaluation of an `IterateExp`.
Even worse, these constraints try to explain how `IterateExp` works but lack correctness, because `IterateExp` is defined in terms of a `ForAllExp`, which is itself defined in terms of `IterateExp`, as follows:
The environment of any sub evaluation is the same environment as the one from its previous sub evaluation, taking into account the bindings of the iterator variables, plus the result variable which is bound to the result value of the last sub evaluation.
```ocl
context IterateExpEval inv:
let SS : Integer = source.value->size()
in
if iterators->size() = 1
then
  Sequence{2..SS}->forAll(i : Integer |
    bodyEvals->at(i).environment = bodyEvals->at(i-1).environment
      ->replace(NameValueBinding(iterators->at(1).varName,
                                 source.value->asSequence()->at(i)))
      ->replace(NameValueBinding(result.varName,
                                 bodyEvals->at(i-1).resultValue)))
else -- iterators->size() = 2
  Sequence{2..SS*SS}->forAll(i : Integer |
    bodyEvals->at(i).environment = bodyEvals->at(i-1).environment
      ->replace(NameValueBinding(iterators->at(1).varName,
                                 source.value->asSequence()->at(i.div(SS) + 1)))
      ->replace(NameValueBinding(iterators->at(2).varName,
                                 source.value->asSequence()->at(i.mod(SS))))
      ->replace(NameValueBinding(result.varName,
                                 bodyEvals->at(i-1).resultValue)))
endif
```
For a reader without prior knowledge of OCL, the previous constraint is almost impossible to understand; even with proper knowledge of the language, reading and comprehending such constraints is a hard task.
A summary of the maths semantics is shown in figure 18 (see Appendix A of [1] for the full version), while figures 19 and 20 display the semantics expressed via sequence diagrams.
\[
\mathcal{I}[e_1 \rightarrow \text{iterate}(v_1;\, v_2 = \text{initExp} \mid \text{bodyExp})](\tau) = \mathcal{I}[e_1 \rightarrow \text{iterate}'(v_1 \mid \text{bodyExp})](\tau')
\]
Where \( \tau' = (\sigma, \beta') \) and \( \tau'' = (\sigma, \beta'') \) with
\[
\beta' := \beta\{ v_2 / \mathcal{I}[\text{initExp}] (\tau) \}
\]
\[
\beta'' := \beta'\{ v_2 / \mathcal{I}[\text{bodyExp}] (\sigma, \beta' \{ v_1 / x_1 \}) \}
\]
and if \( e_1 \in \text{Expr}_{\text{Sequence}(t_1)} \), \( \text{iterate}' \) is defined as:
\[
\mathcal{I}[e_1 \rightarrow \text{iterate}' (v_1 \mid \text{bodyExp})] (\tau') = \begin{cases}
\mathcal{I}[v_2] (\tau') & \text{if } \mathcal{I}[e_1] (\tau') = \langle\rangle \\
\mathcal{I}[\text{mkSequence}_{t_1}(x_2, \ldots, x_n) \rightarrow \text{iterate}' (v_1 \mid \text{bodyExp})] (\tau'') & \text{if } \mathcal{I}[e_1] (\tau') = \langle x_1, \ldots, x_n \rangle
\end{cases}
\]
Figure 18: Maths semantics of IterateExp
The IterateExp evaluation is defined as follows:
The first sub-evaluation starts with an environment in which the result variable is bound to the init expression of the variable declaration in which it is defined (\( \beta' := \beta\{ v_2 / \mathcal{I}[\text{initExp}] (\tau) \}\), signals 2, 3 and 5 in figure 19); we then evaluate the body with the iterator variables bound to the different combinations of the source (figure 17). The iterator binding (\( \beta'\{ v_1/x_1 \}\) in \( \beta'' := \beta'\{ v_2 / \mathcal{I}[\text{bodyExp}] (\sigma, \beta' \{ v_1 / x_1 \}) \}\)) is realised by CombinationGenerator (signals 7 and 8 in figure 20), under a ‘depth-first search’ strategy. This strategy determines the number of sub-evaluations of the body (\( \mathcal{I}[\text{bodyExp}] (\sigma, \beta' \{ v_1 / x_1 \})\), signal 9 in figure 20). As a last step, these sub-evaluations update the result variable (signals 10 and 11 in figure 20), which is returned as the result value of the whole evaluation process (\( \mathcal{I}[v_2] (\tau')\), signal 12 in figure 19).
Figure 19: \textit{IterateExp} semantics as sequence diagram.
Figure 20: Body Evaluation of an \textit{IterateExp}.
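Operationally, the recursion of figure 18 is a left fold over the source collection (for a single iterator variable). The following illustrative Python sketch (our names; \( \beta \) as a plain dict) threads the result variable exactly as the signals above describe:

```python
def eval_iterate(source_values, iter_var, result_var, init_value, body_eval, env):
    """e1->iterate(v1; v2 = initExp | bodyExp):
    bind the result variable to the init value, then for each source element
    bind the iterator variable, re-evaluate the body, and update the result."""
    env = dict(env)
    env[result_var] = init_value          # beta' = beta{v2 / I[[initExp]](tau)}
    for x in source_values:               # successive elements of the source
        env[iter_var] = x                 # beta'{v1 / x}
        env[result_var] = body_eval(env)  # beta'' = beta'{v2 / I[[bodyExp]](...)}
    return env[result_var]                # I[[v2]](tau') once the source is empty


# Example: Sequence{1, 2, 3}->iterate(x; acc = 0 | acc + x)
total = eval_iterate([1, 2, 3], "x", "acc", 0,
                     lambda env: env["acc"] + env["x"], {})
```

select(), forAll() and the other iterator expressions are then the usual macros over this fold.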
4 Applying the Visitor Pattern
The *OclExpression* structure is not likely to change, and several operations might be defined over it (e.g. refactoring operations, semantics evaluation, code generation operations, etc.). Consequently, we consider it more appropriate to avoid polluting the static structure with these operations and to apply the Visitor pattern [7] instead, in order to keep the structure simple and clear (see figure 21).
We now define a meta class named *OclEvaluator* which replaces *evalOn()* while maintaining its functionality, with one separate *visit()* operation for each non-abstract syntactic class of the AS package.
The evaluation environment is now known directly by the *OclEvaluator* and must be modified in a controlled way to avoid side effects and maintain the query property (figure 22).
With this approach, the semantics evaluation of an *OclExpression* is entirely carried out by the *OclEvaluator*, which associates the AbstractSyntax package with the Values package (see figure 23).
As an example, in figure 24, we show how the operation `evalOn()` over a `LetExp` is modified to become adapted to the visitor pattern.
**Figure 23:** OCL meta model using OclEvaluator
**Figure 24:** LetExp semantics using OclEvaluator
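The re-design can be sketched in Python as follows (illustrative only; the visit-method names are ours, and the environment handling shows one possible "controlled" discipline: extend locally, restore afterwards, so the query property is preserved):

```python
class IntegerLiteralExp:
    def __init__(self, value): self.value = value
    def accept(self, visitor): return visitor.visit_integer_literal(self)


class VariableExp:
    def __init__(self, name): self.name = name
    def accept(self, visitor): return visitor.visit_variable(self)


class LetExp:
    def __init__(self, var, init, body):
        self.var, self.init, self.body = var, init, body
    def accept(self, visitor): return visitor.visit_let(self)


class OclEvaluator:
    """Holds the evaluation environment itself, keeping the AbstractSyntax
    classes free of evaluation code."""

    def __init__(self, env=None):
        self.env = dict(env or {})

    def visit_integer_literal(self, e):
        return e.value

    def visit_variable(self, e):
        return self.env[e.name]

    def visit_let(self, e):
        value = e.init.accept(self)
        saved = self.env
        self.env = dict(saved)      # controlled, local extension of beta
        self.env[e.var] = value
        try:
            return e.body.accept(self)
        finally:
            self.env = saved        # restore: no lasting side effect
```

A new operation over the syntax, say a constraint transformer for model transformations, is then one more visitor class; the AbstractSyntax hierarchy stays untouched.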
5 Conclusion and Future Works
OCL is an object property specification language, which is rigorous but simple and easy to use. Therefore, it becomes a very interesting option for the development of code verification and derivation tools.
OCL addresses people with modest mathematical background. Thus the OCL semantics should be given in a simple formalism OCL users are familiar with, for example metamodeling.
In this article, we elaborate an alternative definition for the OCL semantics. The proposal re-uses the OCL syntax metamodel and defines the relation between syntax and semantics through UML collaboration diagrams, adhering to the Dynamic Metamodeling (DMM) approach. In this way, circularity in the OCL definition is avoided and intuitive communication is increased. Furthermore, the OCL maths semantics was used as a foundation and guidance for the semantics definition. Although a mathematical semantics can be tedious and hard to understand, demanding users with a stronger academic background, we showed that it can be translated into sequence diagrams, offering a more readable and simpler semantics metamodel.
On the other hand, the adequate performance of the tools supporting OCL strongly depends on the quality of the language definition: a well-defined syntax and semantics benefits such tools. Also, it is almost straightforward to translate this semantics into a programming language such as Java, because of the proximity between sequence diagrams and programming languages.
Finally, we re-designed the OCL semantics metamodel by applying the ‘Visitor’ design pattern, which eases the creation of new functionality over the OCL syntax structure and its integration into CASE tools. For example, concerning model transformations, it is possible to define OCL constraint transformations by adding a new “visitor” for the OCL syntax hierarchy. In this sense, we are working on the redefinition of the ePlatero evaluator following the proposal presented in this article, in order to analyze the potential advantages with regard to different indicators, such as reliability, efficiency and modifiability.
References
Comparative Evaluation of Packet Classification Algorithms for Implementation on Resource Constrained Systems
Availability: This version is available at: 11583/1494576
Publisher: IEEE
DOI: 10.1109/CONTEL.2005.185835
Terms of use: openAccess
This article is made available under terms and conditions as specified in the corresponding bibliographic description in the repository.
Gianluca Varenni*, Federico Stirano***, Elisa Alessio**, Mario Baldi*, Loris Degioanni*, Fulvio Risso*
* Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy
**Telecom Italia Labs - System On Chip, Torino, Italy
***Istituto Superiore Mario Boella, Torino, Italy
{gianluca.varenni,mario.baldi,loris.degioanni,fulvio.risso}@polito.it; stirano@ismb.it; elisa.alessio@tilab.com
Abstract — This paper provides a comparative evaluation of a number of known classification algorithms that have been considered for both software and hardware implementation. Differently from other sources, the comparison has been carried out on implementations based on the same principles and design choices. Performance measurements are obtained by feeding the implemented classifiers with various traffic traces in the same test scenario. The comparison also takes into account implementation feasibility of the considered algorithms in resource-constrained systems (e.g. embedded processors on special purpose network platforms). In particular, the comparison focuses on achieving a good compromise between performance, memory usage, flexibility and code portability to different target platforms.
I. INTRODUCTION
A vast literature on classification algorithms and their performance exists, but existing evaluations do not allow a meaningful comparison based on real-life data, which makes our work necessary. A comparison based on the existing literature could be carried out only according to analytical worst-case bounds. Even though figures on the performance of classification algorithms in real-life scenarios can be found, they are part of studies on a single algorithm: the measurement scenarios differ and the implementations are not uniform, so the results are not comparable.
This work studies known classification algorithms with respect to their suitability for being (i) deployed for common networking applications (i.e., not optimized for a specific one), and (ii) implemented in embedded systems, i.e., systems with strict requirements, limited resource availability, and no specific hardware support, such as content addressable memories.
A (packet) classifier is a collection of rules — usually called ruleset — that is used to partition network traffic into different groups, sometimes called flows or buckets. Every rule specifies a subset of the network traffic, for example “IP traffic”, or “traffic sent from host 1.2.3.4”, thus somehow characterizing packets grouped into that flow. When a packet satisfies a rule, the packet is said to match the given rule. A classification algorithm determines whether a packet matches at least one rule of a classifier.
Packet classifiers are widely used in IP networking, where rules usually involve one or more packet header fields (e.g. IP source address, TCP destination port). Each rule $R$ is composed of $d$ components, so that each component $R[i]$ applies to a specific header field. When more than one field is considered, the classifier is said to be multifield. As an example, Table 1 shows a small multifield ruleset that includes value/mask rules on the source and destination IP addresses.
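The value/mask matching semantics just described can be sketched in ANSI C; the struct and function names below are illustrative, not taken from the library evaluated later in this paper:

```c
#include <stdint.h>
#include <stdbool.h>

/* A value/mask component matches when the masked header field
   equals the (pre-masked) rule value, cf. Table 1. */
typedef struct {
    uint32_t value;
    uint32_t mask;
} component_t;

#define NUM_FIELDS 2   /* source and destination IPv4 address */

typedef struct {
    component_t field[NUM_FIELDS];   /* the components R[i] of rule R */
} rule_t;

/* A packet matches rule R iff every component R[i] matches
   the corresponding header field. */
static bool rule_matches(const rule_t *r, const uint32_t hdr[NUM_FIELDS])
{
    for (int i = 0; i < NUM_FIELDS; i++)
        if ((hdr[i] & r->field[i].mask) != r->field[i].value)
            return false;
    return true;
}
```

With this representation, rule 1 of Table 1 (130.192.1.0/255.255.255.0 → 130.192.2.0/255.255.255.0) is simply two value/mask pairs, and a classification algorithm is any procedure that decides whether `rule_matches` holds for at least one rule of the ruleset.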
Packet classifiers are widely used for various network applications, many of which related to quality of service (QoS) provision, and consequently in several types of network devices that might be implemented as or composed of embedded systems. Examples of QoS related applications of packet classifiers are:
- Traffic conditioning and shaping appliances; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to be able to apply admission, marking and shaping policies to them. Traffic conditioning appliances or functionality are fundamental in the deployment of both the IntServ [1] and DiffServ [2][3] approaches.
- IntServ routers; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to store packets in different queues on which scheduling algorithms suitable to provide the required QoS are applied.
- DiffServ routers; they use single field classifiers with a limited ruleset concerning the value of the DS (Differentiated Services) field [3] to separate packets belonging to different traffic classes in order to handle them according to the corresponding per-hop behavior (PHB).
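As a concrete illustration of the single-field case, DS-field classification reduces to extracting the 6-bit DSCP (the low 2 bits of the octet carry ECN) and mapping it to a queue implementing the corresponding PHB. The sketch below is illustrative; the queue assignments are a hypothetical configuration, not taken from this paper:

```c
#include <stdint.h>

/* The DSCP is the top 6 bits of the DS field (the former IPv4 ToS octet). */
static inline uint8_t dscp_of(uint8_t ds_field)
{
    return (uint8_t)(ds_field >> 2);
}

/* Map a DSCP code point to a queue index implementing its PHB.
   Hypothetical mapping for illustration only. */
static int queue_for_dscp(uint8_t dscp)
{
    switch (dscp) {
    case 46:                       return 0;  /* EF: expedited forwarding */
    case 10: case 12: case 14:     return 1;  /* AF1x */
    default:                       return 3;  /* best effort */
    }
}
```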
This work aims at identifying classification algorithms that can be effectively implemented on embedded systems and deployed in any of the above listed applications. Execution in embedded systems imposes strict limits on the characteristics of the algorithms, such as simple (static) memory management, limited code size, limited CPU usage requirements, limited data storage necessities, and adaptability to various hardware platforms and architectures.
### Table 1. Sample Multifield Ruleset
<table>
<thead>
<tr>
<th>Rule</th>
<th>Source Value</th>
<th>Source Mask</th>
<th>Destination Value</th>
<th>Destination Mask</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>130.192.1.0</td>
<td>255.255.255.0</td>
<td>130.192.2.0</td>
<td>255.255.255.0</td>
</tr>
<tr>
<td>2</td>
<td>130.192.2.0</td>
<td>255.255.255.0</td>
<td>130.192.1.0</td>
<td>255.255.255.0</td>
</tr>
<tr>
<td>3</td>
<td>130.192.0.0</td>
<td>255.255.0.0</td>
<td>130.192.3.0</td>
<td>255.255.255.0</td>
</tr>
</tbody>
</table>
The remainder of this paper is organized as follows. Section II surveys the algorithms proposed in the literature (Section II.B) as well as the metrics commonly used to evaluate them (Section II.A). Section III presents the implementation objectives and the guidelines followed to develop software for embedded systems; based on these, selection criteria (Section III.A) are formulated and used to identify a limited set of algorithms on which to perform a more detailed and targeted comparative evaluation. Section IV provides the results of the comparative evaluation conducted with real-life traffic traces, and conclusive remarks are given in Section V.
II. THEORETICAL ANALYSIS OF CLASSIFICATION ALGORITHMS
Among the others [5], the comparative survey of classification algorithms by Gupta and McKeown [4] provides a detailed comparison of the most important known algorithms for multifield classification. Even though this work represents a complete and interesting tutorial on classification algorithms, it does not present any performance comparison based on real life network traffic. Our work leverages off some of the criteria and results presented by Gupta and McKeown to select a reduced set of classification algorithms that best fit to be implemented in embedded systems. Another contribution of our work lies in the detailed and homogeneous evaluation of such selected algorithms that have been implemented with common criteria and evaluated in a common test bed using real traffic captures.
A. Evaluation metrics and parameters
The metrics adopted are the ones commonly used by various authors [6][7][8][9][11][12] in literature, including Gupta and McKeown in [4]: search time, memory consumption, and update time.
Search time (T), i.e. the amount of time needed to classify a packet, is the most obvious metric; in order to devise a measurement (at least partially) independent of the particular test bed, the search time is measured in terms of CPU clock cycles.
Memory consumption (M) is the amount of memory needed to store the ruleset in some specific data structure in memory, computed either at instantiation or run time. Memory consumption is an excellent indicator of the compression capability of the algorithm measured as the ratio between the ruleset size (i.e. number of rules and number of fields) and its footprint in memory.
The update time (U) is the amount of time necessary to insert, delete, or modify a rule in the running ruleset.
An interesting metric is represented by the number of memory accesses performed by the algorithm, but it is not widely used because getting this data is far from being trivial.
The three metrics previously described generally depend on the following parameters:
- The number of rules \( N \) in the ruleset
- The number of fields \( d \) globally used within the \( R[i] \) components of each rule
- The length of each field, in bits, called \( W_i \). In order to simplify the evaluation of the algorithms, we will use a new fictitious parameter \( W \), defined as \( W = \max(W_i) \)
Section A will provide some insight in the implications of such simplification on the comparative evaluation presented later.
B. Theoretical complexity of some well-known algorithms
In order to have a first general comparison of the classification algorithms and select which to adopt for a more thorough analysis, the theoretical worst-case bounds for the metrics identified in Section A were taken into consideration. Table 2 shows the formulas expressing the bound for each of the metrics. Such formulas were either taken directly from the literature, when available, or inferred from a paper describing the corresponding algorithm.
### Table 2. Theoretical Worst-Case Bounds
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Search time (T)</th>
<th>Memory usage (M)</th>
<th>Update time (U)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linear search</td>
<td>\( N \)</td>
<td>\( N \)</td>
<td>N/A</td>
</tr>
<tr>
<td>TRIES</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Hierarchical tries [4]</td>
<td>\( W^d \)</td>
<td>\( NdW \)</td>
<td>\( d^2W \)</td>
</tr>
<tr>
<td>Heap-on-Trie [6]</td>
<td>\( W^d \)</td>
<td>\( NW^d \)</td>
<td>\( W^d \log N \)</td>
</tr>
<tr>
<td>Binary search-on-Trie [6]</td>
<td>\( W^d \log N \)</td>
<td>\( NW^d \)</td>
<td>\( W^d \log N \)</td>
</tr>
<tr>
<td>GEOMETRIC TECHNIQUES</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Cross producting [7]</td>
<td>\( dW \)</td>
<td>\( N^d \)</td>
<td>N/A</td>
</tr>
<tr>
<td>HEURISTICS</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Hierarchical Cuttings [9]</td>
<td>\( d \)</td>
<td>\( N^d \)</td>
<td>N/A</td>
</tr>
<tr>
<td>Tuple Space Search [8]</td>
<td>\( N \)</td>
<td>\( N \)</td>
<td>\( N \)</td>
</tr>
<tr>
<td>Recursive Flow Classification [12]</td>
<td>\( d \)</td>
<td>\( N^d \)</td>
<td>N/A</td>
</tr>
</tbody>
</table>
Hardware based [14] and ad-hoc algorithms [10] were not included in this evaluation since either the selected metrics cannot be applied to them, or a comparison based on them is meaningless due to the particular nature of such algorithms. Instead, the linear algorithm was included because it is widely used by software based firewalls (e.g. Linux netfilter/iptables [13]) and it is an excellent baseline against which other algorithms can be compared, especially in the implementation and testing part of this work.
The bound on the update time is not shown for some of the algorithms since they do not explicitly support dynamic updates to the running ruleset. This stems from the fact that these algorithms preprocess the ruleset into a specific custom data structure that does not support insertion or removal of rules. Instead, in order to cope with ruleset changes the whole ruleset must be re-processed thus yielding a new data structure. Such an approach is usually inefficient, since the preprocessing time is typically quite high.
C. Practical issues with the theoretical complexity
The worst cases in Table 2 show quite clearly that the linear search algorithm outperforms the other algorithms in terms of memory consumption and update time. Its search time is comparable to that of the other algorithms when the number of rules is not large; for example, when classifying UDP flows or TCP connections (\(d=5\) and \(W=32\)) the break-even point is one or two hundred rules. In fact, the search time of the other algorithms depends on the total number of bits \(dW\) of the various fields in each rule, because the classification algorithm processes the classification fields bit by bit; in particular, this is the approach used by all the algorithms based on tries. Consequently, the linear algorithm might be particularly interesting in cases, such as IPv6 addresses, in which the total number of bits \(dW\) is high.
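For reference, a linear classifier of the kind discussed above can be sketched as follows (first-matching-rule semantics, as commonly used by software firewalls; struct and function names are illustrative, not from our library):

```c
#include <stdint.h>
#include <stddef.h>

/* Value/mask rule on d = 2 fields, stored as parallel arrays. */
#define D 2
typedef struct { uint32_t value[D], mask[D]; } rule_t;

/* Returns the index of the first rule the packet matches, or -1.
   Cost is O(N * d): every rule may be inspected once per packet. */
static int classify_linear(const rule_t *rules, size_t n,
                           const uint32_t hdr[D])
{
    for (size_t r = 0; r < n; r++) {
        int ok = 1;
        for (int i = 0; i < D; i++) {
            if ((hdr[i] & rules[r].mask[i]) != rules[r].value[i]) {
                ok = 0;
                break;
            }
        }
        if (ok)
            return (int)r;
    }
    return -1;
}
```

The scan visits rules in priority order, so insertion and deletion are trivial list operations, which is exactly why the linear algorithm is the most update-friendly of those considered.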
As a matter of fact, the theoretical analysis previously conducted is limited by several factors:
- The performance of many classification algorithms when used with real traffic might be very different from the theoretical results shown in Table 2; this is particularly true for heuristics, which are engineered to achieve good performance in the average case rather than in the worst case.
- The theoretical complexities shown in Table 2 have been devised assuming that all fields used for the classification have the same length, equal to the length of the largest one; this simplification can lead to unrealistic theoretical results (e.g. in the case of IPv6 session identifiers, we consider the length of a TCP/UDP port to be 128 bits, which is completely misleading). A solution to this problem could be to reformulate each metric using the individual field lengths \(W_i\), but this is out of the scope of this paper.
III. IMPLEMENTATION
An objective of this work is to identify and evaluate the packet classification algorithms that are most suitable for implementation on resource constrained systems. When writing software for an embedded system, specific constraints have to be taken into account in order to ensure good performance and flexibility in terms of code portability to different target platforms; hence, several aspects have been considered while implementing the above mentioned algorithms.
First of all, the main goal of our work was to write code portable to different target platforms, independent of the processor and the operating system used. To accomplish this objective, we developed a software library written in pure ANSI C, avoiding any use of OS/compiler support functions that might not be available on special purpose processors. The crucial point in generating portable code is to separate the coding of functional modules from the code related to the specific target environment. This can be achieved by defining some sort of API, which avoids the use of platform dependent functions directly inside the code. A second consideration is that the code should use static memory allocation, since a dynamic allocation infrastructure is not guaranteed to be present on all target platforms.
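The static-allocation constraint can be met with a fixed pool reserved at compile time and handed out by a bump allocator, so no malloc()/free() support is required on the target. The sketch below illustrates the idea; the pool size and names are assumptions for illustration, not taken from the actual library:

```c
#include <stddef.h>

#define POOL_BYTES 4096

/* All memory is reserved at compile time in a static pool. */
static unsigned char pool[POOL_BYTES];
static size_t pool_used = 0;

/* Bump allocator over the static pool; returns NULL when exhausted.
   Rounding the start offset up to sizeof(long) keeps allocations
   suitably aligned in a portable way. */
static void *pool_alloc(size_t nbytes)
{
    size_t align = sizeof(long);
    size_t start = (pool_used + align - 1) / align * align;
    if (start + nbytes > POOL_BYTES)
        return NULL;
    pool_used = start + nbytes;
    return &pool[start];
}
```

Because the pool never shrinks, such an allocator fits the build-once/classify-many usage pattern of the preprocessing phase described below.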
Another requirement is that the code should avoid the use of explicit pointers in the raw data structures containing the ruleset; in fact, sometimes the code creating and initializing the data structure and the code that classifies packets using this structure run either on different processors (e.g. network processors using multiple processing units) or within different address spaces (e.g. code running partially at kernel level and partially at user level on a general purpose PC). A commonly used solution to the problem is to make use of indirect addressing, using only displacement pointers in the data structure, and the base pointer outside it.
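The displacement-pointer technique can be sketched as follows: links inside the raw data structure are byte offsets from the start of the blob, and only the consumer holds the base pointer, so the same blob is valid in any address space. The node layout and names below are illustrative assumptions:

```c
#include <stdint.h>

/* Pointer-free node layout: links are byte offsets from the start
   of the blob, so the structure can be built by one processor (or
   address space) and traversed by another. Offset 0 is reserved as
   the end-of-list sentinel. */
typedef struct {
    uint32_t value;
    uint32_t next_off;   /* offset of next node; 0 = end of list */
} node_t;

/* Resolve an offset against the consumer-side base pointer. */
static const node_t *node_at(const uint8_t *base, uint32_t off)
{
    return (const node_t *)(base + off);
}

/* Walk a list stored in the blob, following offsets only. */
static uint32_t sum_list(const uint8_t *base, uint32_t head_off)
{
    uint32_t total = 0;
    uint32_t off = head_off;
    while (off != 0) {
        const node_t *n = node_at(base, off);
        total += n->value;
        off = n->next_off;
    }
    return total;
}
```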
In a network embedded system we can distinguish among data-plane functions (related to packet processing functionalities, with high performance requirements) that usually run on specific processor engines and control-plane functions (for data structure initialization and configuration, usually with high memory requirements) that may run on a general purpose processor. Thus, one general issue is to modularize the code as deeply as possible, trying to separate the main algorithm functionalities, which may have high performance requirements, from the control and configuration functions that may run on a different processor.
A. Selecting the algorithms to be implemented
Given the previous considerations and taking into account the practical issues highlighted in Section II, we decided which algorithms to implement to meet our objectives.
1. We excluded Cross-Producing and Set-Pruning Tries, because their memory consumption grows as \(N^2\), which is extremely critical even with rather low values of \(N\) and \(d\) (e.g. with \(N=100\) rules and \(d=4\) fields memory consumption is about \(10^8\)). While RFC and HiCuts have the same worst case memory consumption, they are heuristic algorithms, therefore this value is not enough to get rid of them.
2. We excluded Heap-on-Tries and Binary search trees-on-Tries, because their memory consumption and search time are proportional to \(W^d\), which is too large (e.g. this value is larger than \(10^8\) when the maximum field size \(W\) is 128 bits and the number of fields \(d\) is 5); moreover, the paper presenting these algorithms does not give any hint of a working implementation. Although the Hierarchical Tries algorithm has the same search time as the two previous ones, it has not been excluded because of its excellent characteristics with respect to memory consumption.
3. We excluded HiCuts, because this algorithm is patent pending.
4. Tuple Space Search was excluded essentially because we decided that the comparative study would include a single heuristic algorithm and, from the information gathered in the literature, the implementation details of RFC seemed clearer.
In summary, we decided to implement the Linear algorithm, to be used as a baseline for the comparison, the Hierarchical Tries algorithm (the only remaining non-heuristic algorithm after the screening described above), and the Recursive Flow Classification algorithm.
IV. PERFORMANCE EVALUATION
Although our implementation is targeted at both general and special purpose platforms, so far it has been validated through extensive tests only on a standard personal computer. We did not consider tests on special purpose platforms in the context of this work, since it specifically aims at giving a homogeneous comparison between the implementations of the various algorithms by measuring their performance in real-life working conditions. Moreover, the obtained experimental results are compared against the theoretical worst-case results. However, tests on special purpose platforms will be carried out as future work in an effort to evaluate the performance disparities across different platforms.
A. Testbed
The tests were conducted using a network trace taken from our university link to the Italian trans-university backbone. This trace has the following characteristics:
- duration: 6 hours
- total packets: 24 million
- total bytes: 13 GBytes
- average traffic: 5 Mbps, 1100 pps.
The implemented algorithms have been compiled with the Microsoft Visual C++ 6.0 SP 5 compiler. We used an Intel Pentium IV 2GHz workstation with 1GB RAM, running Microsoft Windows XP. The measurements were taken with the x86 assembler instruction RDTSC (Read TimeStamp Counter), which gives the number of CPU clock ticks from the machine bootstrap.
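A measurement harness in the style described, reading the timestamp counter immediately before and after the code under test, can be sketched as below. This is an illustrative reconstruction, not the paper's actual harness; the RDTSC intrinsic is x86-specific, so a `clock_gettime` fallback is included to keep the sketch runnable on other architectures:

```c
#include <stdint.h>

#if defined(__x86_64__) || defined(__i386__)
#include <x86intrin.h>
/* Number of CPU clock ticks since machine bootstrap (RDTSC). */
static uint64_t cycles_now(void) { return __rdtsc(); }
#else
#include <time.h>
/* Fallback for non-x86 targets: monotonic nanoseconds. */
static uint64_t cycles_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}
#endif

/* Time one region of code; in a real measurement the call to the
   classifier on one packet would replace the empty region. */
static uint64_t measure_overhead(void)
{
    uint64_t t0 = cycles_now();
    /* classify(packet) would go here */
    uint64_t t1 = cycles_now();
    return t1 - t0;
}
```

Averaging such per-packet measurements over a long trace, as done here, smooths out cache and pipeline effects on individual readings.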
We used the ruleset running on the router connected to the same link on which we captured the network trace (the packets were captured immediately before the router classifier); this ruleset is formed of 349 rules, each rule working on these fields:
- source / destination IPv4 address
- Layer 4 protocol (TCP/UDP/ICMP/any)
- source / destination TCP/UDP port.
In order to evaluate the algorithms with rulesets of different sizes, we extrapolated some fictitious rulesets from the original one. These are the new rulesets we defined:
- 2 rulesets formed of 50 rules (rules 1-50 and 51-100 of the original ruleset)
- 2 rulesets formed of 100 rules (rules 1-100 and 101-200 of the original ruleset)
- 1 ruleset formed of 200 rules (rules 1-200 of the original ruleset).
B. Search time test results
This test aims at measuring the average packet classification time for the various rulesets; the results are shown in Table 3.
The results of this test show that the mean search time grows linearly with the number of rules in the case of the linear algorithm; in the case of the Hierarchical Tries algorithm, the search time also seems to grow linearly, but with a much smaller slope than the linear algorithm. The RFC algorithm, instead, shows a mean search time that is independent of the number of rules in the ruleset.
By comparing the results in Table 3 and the worst cases in Table 2, we can note that:
- the linear algorithm performs worse in our tests, relative to the other two algorithms, than the theoretical results would suggest;
- the Hierarchical Tries algorithm seems to be loosely dependent on the number of rules N, while its worst case is independent of this parameter. This behavior could be due to the fact that the number of recursive visits of the tries grows with the number of rules N.
C. Memory consumption test results
We measured the amount of memory needed to store the raw data structure containing the ruleset for each algorithm. The results of this test are shown in Table 4.
D. Preprocessing time test results
The last test attempts to measure the amount of time needed to process the various rulesets and create the internal data structures used by each classification algorithm. The results of this test are shown in Table 5.
### Table 3. Mean Search Time per Packet (CPU clock cycles)
<table>
<thead>
<tr>
<th>RULESETS</th>
<th>Number of rules</th>
<th>Linear</th>
<th>HiTrie</th>
<th>RFC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ruleset 1-50</td>
<td>50</td>
<td>2603</td>
<td>981</td>
<td>419</td>
</tr>
<tr>
<td>Ruleset 51-100</td>
<td>50</td>
<td>2170</td>
<td>560</td>
<td>422</td>
</tr>
<tr>
<td>Ruleset 1-100</td>
<td>100</td>
<td>4572</td>
<td>1014</td>
<td>416</td>
</tr>
<tr>
<td>Ruleset 101-200</td>
<td>100</td>
<td>4408</td>
<td>1141</td>
<td>420</td>
</tr>
<tr>
<td>Ruleset 1-200</td>
<td>200</td>
<td>8949</td>
<td>1276</td>
<td>428</td>
</tr>
<tr>
<td>Ruleset 1-349</td>
<td>349</td>
<td>17552</td>
<td>2032</td>
<td>437</td>
</tr>
</tbody>
</table>
### Table 4. Memory Consumption
<table>
<thead>
<tr>
<th>RULESET</th>
<th>Number of rules</th>
<th>Linear</th>
<th>HiTrie</th>
<th>RFC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ruleset 1-50</td>
<td>50</td>
<td>2192</td>
<td>32708</td>
<td>1838596</td>
</tr>
<tr>
<td>Ruleset 51-100</td>
<td>50</td>
<td>2192</td>
<td>34028</td>
<td>1836964</td>
</tr>
<tr>
<td>Ruleset 1-100</td>
<td>100</td>
<td>4192</td>
<td>64588</td>
<td>1841668</td>
</tr>
<tr>
<td>Ruleset 101-200</td>
<td>100</td>
<td>4192</td>
<td>59428</td>
<td>1847796</td>
</tr>
<tr>
<td>Ruleset 1-200</td>
<td>200</td>
<td>8192</td>
<td>115068</td>
<td>1850148</td>
</tr>
<tr>
<td>Ruleset 1-349</td>
<td>349</td>
<td>14112</td>
<td>155048</td>
<td>6074748</td>
</tr>
</tbody>
</table>
The outcome of this test shows that the trend is roughly linear in the number of rules for the linear and Hierarchical Tries algorithms; moreover, the latter is about 100 times slower than the former, but the overall time to process the original ruleset containing 349 rules remains acceptable (less than 10 ms on the test platform). The RFC algorithm shows instead a rather interesting behavior: the trend is roughly linear in the number of rules up to 200 rules, with a cost about three orders of magnitude higher than the Hierarchical Tries algorithm; when we compute the data structure with the entire ruleset of 349 rules, the preprocessing time explodes to about 20 minutes. This explosion is generally due to two main factors:
1. It is a heuristic algorithm, so each metric normally depends on the particular ruleset used for the test.
2. Some experiments on this algorithm have shown that this behavior is largely due to rules containing a large number of “any” values in their components.
V. CONCLUSIONS
A continuously growing number of network appliances deploy packet classifiers to implement quality of service, security, and traffic engineering functionalities. As a consequence, in recent years several authors have proposed novel algorithms to achieve better results in terms of classification time and memory consumption. Many works provided case studies of such algorithms applied to a large number of real-life rulesets and network traffic traces. However, a fair comparison with common criteria and test cases had not yet been provided. Our main contribution in this work is filling this gap by providing a homogeneous evaluation of three classification algorithms that have been implemented following the same criteria.
Our tests have shown that the Recursive Flow Classification algorithm outperforms, as expected, the other two algorithms in terms of search time. In fact, its heuristic is able to effectively exploit the characteristics of the real-life rulesets considered. However, it is known that this algorithm does not support dynamic updates, and our tests have shown that its preprocessing time is unpredictable.
The Hierarchical Tries algorithm shows acceptable performance in terms of classification time, being less than one order of magnitude worse than RFC. On the other hand, it features low memory consumption, outperforming RFC by more than one order of magnitude. In practice, we have shown that the Hierarchical Tries algorithm is preferable over RFC when memory consumption and preprocessing time are more critical than classification time alone.
Finally, our tests confirm that the linear algorithm, despite the worst classification time with large rulesets, is the one that assures the lowest memory consumption, the fastest preprocessing phase, and the most flexible support for dynamic updates.
VI. REFERENCES
Performance Technology for Parallel Component Software
Sameer Shende
sameer@cs.uoregon.edu
Department of Computer and Information Science
NeuroInformatics Center
University of Oregon
Outline
- What is Component Software? [www.cca-forum.org]
- Performance Engineered Component Software
- CCA Performance Observation Component
- CCAFFEINE (Classic C++)
- SIDL
- Applications:
- Optimizer Component
- Combustion Component
- Concluding remarks
Why Components?
The task of the software development team is to engineer the illusion of simplicity [Booch].
The Good the Bad and the Ugly
- An example of what can lead to a crisis in software:
- At least 41 different Fast Fourier Transform (FFT) libraries:
- Many (if not all) have different interfaces
- different procedure names and different input and output parameters
- SUBROUTINE FOUR1(DATA, NN, ISIGN)
- Replaces DATA by its discrete Fourier transform (if ISIGN is input as 1) or replaces DATA by NN times its inverse discrete Fourier transform (if ISIGN is input as -1). DATA is a complex array of length NN or, equivalently, a real array of length 2*NN. NN MUST be an integer power of 2 (this is not checked for!).
What Are Components [Szyperski]
- A component is a binary unit of independent deployment
- well separated from other components
- fences make good neighbors
- can be deployed independently
- A component is a unit of third-party composition
- is composable (even by physicists)
- comes with clear specifications of what it requires and provides
- interacts with its environment through well-defined interfaces
- A component has no persistent state
- temporary state set only through well-defined interfaces
- throw away that dependence on global data (common blocks)
- Similar to Java packages and Fortran 90 modules (with a little help)
Component Technology
- What is a component?
- Implementation provides functionality but hides details
- No direct access is possible
- Interface provides access to component functionality
- Access “ports” are well-defined and generated by tools
- Matching connector links component interfaces
- Constructed by framework and hidden from users
Component Technology Features
- Interoperability across multiple languages
- Language independent interfaces (C/C++, Fortran, Java,…)
- Automatically generated bindings to working code
- Interoperability across multiple platforms
- Computer systems hardware independence
- Operating systems independence
- Transparent execution model
- Serial, parallel, and distributed system
- Incremental evolution of application software
- Components promote software reuse
- Components are “plug-and-play”
Language Interoperability
Simulation Framework (C)
Numerical Routines (f77)
Solver Library (C++)
Callback Handlers (Python)
Scripting Driver (Python)
Visualization System (Java)
Mixing Languages is Hard!
- Existing point-to-point binding tools: cfortran.h, SWIG, JNI, Siloon, Chasm
- Each covers one language pair and is platform dependent
Babel makes all supported languages peers
Once a library has been “Babelized” it is equally accessible from all supported languages.
This is not an LCD Solution!
Babel’s Mechanism for Mixing Languages
- Code Generator
- Runtime Library
SIDL interface description → Babel Compiler
- XML
- C
- C++
- F77
- Python
- Java
Babel Runtime
Application
version greetings 1.0;
package greetings {
interface Hello {
void setName( in string name );
string sayIt ( );
}
class English implements-all Hello { }
}
Library Developer Does This...
- `babel --server=C++ greetings.sidl`
- Add implementation details
- Compile & Link into Library/DLL
namespace greetings {
  class English_impl {
  private:
    // DO-NOT-DELETE splicer.begin(greetings.English._impl)
    string d_name;
    // DO-NOT-DELETE splicer.end(greetings.English._impl)
  public:
    string sayIt() throw ();
  };

  string
  English_impl::sayIt()
  throw ()
  {
    // DO-NOT-DELETE splicer.begin(greetings.English.sayIt)
    string msg("Hello ");
    return msg + d_name + "!";
    // DO-NOT-DELETE splicer.end(greetings.English.sayIt)
  }
}
- `babel --client=F77 greetings.sidl`
- Compile & Link generated Code & Runtime
- Place DLL in suitable location
Common Component Architecture Specification
Scientific IDL
Proxy generator
Component 1
Component 2
CCA Services
Any CCA compliant framework
Repository
CCA ports
Framework-specific part of CCA ports
Abstract configuration API
Repository API
Builder
CCA Concepts: Ports
- Designing for interoperability and reuse requires “standard” interfaces
- *Ports* define how components interact
- Through well-defined interfaces (ports)
- In OO languages, a port is a class or interface
- In Fortran, a port is a set of subroutines or a module
- Components may *provide* ports
- Implement the class or subroutines of the port
- Components may *use* ports
- Call methods or subroutines in the port
- Links denote a caller/callee relationship
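The provides/uses relationship can be sketched in a few lines of C++. The names below (`Port`, `FunctionPort`, `Framework`) are illustrative only, not the real gov.cca API:

```cpp
#include <map>
#include <string>

// A port is an abstract interface.
class Port {
public:
    virtual ~Port() {}
};

// A "standard" interface that components agree on, instead of on each other.
class FunctionPort : public Port {
public:
    virtual double evaluate(double x) = 0;
};

// This component *provides* FunctionPort by implementing it.
class LinearFunction : public FunctionPort {
public:
    double evaluate(double x) override { return 2.0 * x + 1.0; }
};

// A toy framework holds provided ports; a *using* component looks one up by
// name and calls through the interface, never the concrete class.
class Framework {
public:
    void addProvidesPort(const std::string& name, Port* p) { ports_[name] = p; }
    Port* getPort(const std::string& name) {
        auto it = ports_.find(name);
        return it == ports_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, Port*> ports_;
};
```

The caller/callee link is exactly the lookup-then-call sequence: the user component asks the framework for the port and invokes it through the shared interface.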
CCA Concepts: Frameworks
- Provides the means to “hold” components and compose them into applications
- Allow exchange of ports among components without exposing implementation details
- Provide a small set of standard services to components
- Builder services allow programs to compose CCA apps
- Frameworks may make themselves appear as components in order to connect to components in other frameworks
- Specific frameworks support specific computing models
Numerically integrate a continuous function
Use two different techniques
Lines show port connections
Dashed lines are alternate port connections
\[
I = \int_a^b f(x)\,dx \approx \frac{b-a}{N} \sum_{n=1}^{N} f(x_n), \quad x_n \text{ uniformly distributed over } [a,b]
\]
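The Monte Carlo estimator above takes only a few lines. A C++ sketch; the function and parameter names are illustrative, not the MonteCarloIntegrator component itself:

```cpp
#include <random>

// Sample x_n uniformly on [a, b], average f(x_n), scale by the interval
// length (b - a).
template <typename F>
double monteCarloIntegrate(F f, double a, double b, int n, unsigned seed = 12345) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(a, b);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(dist(gen));
    return (b - a) * sum / n;
}
```

The midpoint-rule alternative mentioned above would replace the random samples with evenly spaced interval midpoints; both fit behind the same integrator port.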
CCA Framework Prototypes
- CCAFFEINE
- SPMD/SCMD parallel, direct connection
- CCAT / XCAT
- Distributed network
- Grid Web services
- SCIRun
- Parallel, multithreaded, direct connect
- Decaf
- Language interoperability via Babel
- Legion (under development)
Performance-Engineered Component Software
- Intra- and Inter-component performance engineering
- Four general parts:
- Performance observation
- integrated measurement and analysis
- Performance query and monitoring
- runtime access to performance information
- Performance control
- mechanisms to alter performance observation
- Performance knowledge
- characterization and modeling
- Consistent with component architecture / implementation
Main Idea: Extend Component Design
- Extend the programming and execution environment to be performance observable and performance aware.
Performance Observation and Component
- Performance measurement integration in component form
- Functional extension of original component design
- Include new component methods and ports for other components to access measured performance data
- Allow original component to access performance data
- Encapsulate as a tightly-coupled and co-resident performance observation component (POC)
- POC "provides" ports allow use of optimized interfaces to access "internal" performance observations
Performance Knowledge
- Describe and store “known” component performance
- Benchmark characterizations in performance database
- Empirical or analytical performance models
- Saved information about component performance
- Use for performance-guided selection and deployment
- Use for runtime adaptation
- Representation must be in common forms with standard means for accessing the performance information
- Compatible with component architecture
Component Performance Repository
- Performance knowledge storage
- Implement in component architecture framework
- Similar to CCA component repository
- Access by component infrastructure
- View performance knowledge as component (PKC)
- PKC ports give access to performance knowledge
- To other components and back to the original component
- Static/dynamic component control and composition
- Component composition performance knowledge
Performance Engineering Support in CCA
- Define a standard observation component interface for:
- Performance measurement
- Performance data query
- Performance control (enable/disable)
- Implement performance interfaces for use in CCA
- TAU performance system
- CCA component frameworks (CCAFFEINE, SIDL/Babel)
- Demonstrations
- Optimizing component
- picks from a set of equivalent CCA port implementations
- Flame reaction-diffusion application
CCA Performance Observation Component
- Design measurement port and measurement interfaces
- Timer
- start/stop
- set name/type/group
- Control
- enable/disable groups
- Query
- get timer names
- metrics, counters, dump to disk
- Event
- user-defined events
namespace performance {
namespace ccaports {
class Measurement: public virtual classic::gov::cca::Port {
public:
virtual ~Measurement (){}
/* Create a Timer interface */
virtual performance::Timer* createTimer(void) = 0;
virtual performance::Timer* createTimer(string name) = 0;
virtual performance::Timer* createTimer(string name, string type) = 0;
virtual performance::Timer* createTimer(string name, string type, string group) = 0;
/* Create a Query interface */
virtual performance::Query* createQuery(void) = 0;
/* Create a user-defined Event interface */
virtual performance::Event* createEvent(void) = 0;
virtual performance::Event* createEvent(string name) = 0;
/* Create a Control interface for selectively enabling and disabling
 * the instrumentation based on groups */
virtual performance::Control* createControl(void) = 0;
};
}
}
namespace performance {
class Timer {
public:
virtual ~Timer() {}
/* Implement methods in a derived class to provide functionality */
/* Start and stop the Timer */
virtual void start(void) = 0;
virtual void stop(void) = 0;
/* Set name and type for Timer */
virtual void setName(string name) = 0;
virtual string getName(void) = 0;
virtual void setType(string name) = 0;
virtual string getType(void) = 0;
/* Set the group name and group type associated with the Timer */
virtual void setGroupName(string name) = 0;
virtual string getGroupName(void) = 0;
virtual void setGroupId(unsigned long group) = 0;
virtual unsigned long getGroupId(void) = 0;
};
}
Use of Observation Component in CCA Example
```cpp
#include "ports/Measurement_CCA.h"
...
double MonteCarloIntegrator::integrate(double lowBound, double upBound, int count) {
classic::gov::cca::Port * port;
double sum = 0.0;
// Get Measurement port
port = frameworkServices->getPort("MeasurementPort");
if (port)
measurement_m = dynamic_cast<performance::ccaports::Measurement * >(port);
if (measurement_m == 0){
cerr << "Connected to something other than a Measurement port";
return -1;
}
static performance::Timer* t = measurement_m->createTimer(
string("IntegrateTimer"));
t->start();
for (int i = 0; i < count; i++) {
double x = random_m->getRandomNumber ();
sum = sum + function_m->evaluate (x);
}
t->stop();
// Monte Carlo estimate: interval length times the mean of the sampled values
return (upBound - lowBound) * sum / count;
}
```
Using TAU Component in CCAFFEINE
```plaintext
repository get TauTimer
repository get Driver
repository get MidpointIntegrator
repository get MonteCarloIntegrator
repository get RandomGenerator
repository get LinearFunction
repository get NonlinearFunction
repository get PiFunction
create LinearFunction lin_func
create NonlinearFunction nonlin_func
create PiFunction pi_func
create MonteCarloIntegrator mc_integrator
create RandomGenerator rand
create TauTimer tau
connect mc_integrator RandomGeneratorPort rand RandomGeneratorPort
connect mc_integrator FunctionPort nonlin_func FunctionPort
connect mc_integrator TimerPort tau TimerPort
create Driver driver
connect driver IntegratorPort mc_integrator IntegratorPort
go driver Go
quit
```
version performance 1.0;
package performance
{
interface Timer
{
/* Start/stop the Timer */
void start();
void stop();
/* Set/get the Timer name */
void setName(in string name);
string getName();
/* Set/get Timer type information (e.g., signature of the routine) */
void setType(in string name);
string getType();
/* Set/get the group name associated with the Timer */
void setGroupName(in string name);
string getGroupName();
/* Set/get the group id associated with the Timer */
void setGroupId(in long group);
long getGroupId();
}
...
interface Control
{
/* Enable/disable group id */
void enableGroupId(in long id);
void disableGroupId(in long id);
/* Enable/disable group name */
void enableGroupName(in string name);
void disableGroupName(in string name);
/* Enable/disable all groups */
void enableAllGroups();
void disableAllGroups();
}
/* Implementation of performance component Control interface*/
class TauControl implements-all Control
{
}
/* Implementation of performance component Measurement interface*/
class TauMeasurement implements-all Measurement, gov.cca.Component
{
}
SIDL Interface : Query
/* Query interface to obtain timing information */
interface Query
{
/* Get the list of Timer and Counter names */
array<string> getTimerNames();
array<string> getCounterNames();
void getTimerData(in array<string> timerList,
out array<double, 2> counterExclusive,
out array<double, 2> counterInclusive,
out array<int> numCalls,
out array<int> numChildCalls,
out array<string> counterNames,
out int numCounters);
/* Writes instantaneous profile to disk in a dump file. */
void dumpProfileData();
/* Writes the instantaneous profile to disk in a dump file whose name
* contains the current timestamp. */
void dumpProfileDataIncremental();
/* Writes the list of timer names to a dump file on the disk */
void dumpTimerNames();
/* Writes the profile of the given set of timers to the disk. */
void dumpTimerData(in array<string> timerList);
/* Writes the profile of the given set of timers to the disk. The dump
* file name contains the current timestamp when the data was dumped. */
void dumpTimerDataIncremental(in array<string> timerList);
}
/* User defined event profiles for application specific events */
interface Event
{
/* Set the name of the event */
void setName(in string name);
/* Trigger the event */
void trigger(in double data);
}
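A minimal stand-in for the Event idea, not the actual TAU class, might fold each triggered datum into running statistics:

```cpp
#include <string>

// Illustrative user-defined event: each trigger updates a count and a
// running total, from which a mean profile value can be reported.
class CountingEvent {
public:
    void setName(const std::string& name) { name_ = name; }
    const std::string& getName() const { return name_; }
    void trigger(double data) { ++count_; total_ += data; }
    long count() const { return count_; }
    double mean() const { return count_ == 0 ? 0.0 : total_ / count_; }
private:
    std::string name_;
    long count_ = 0;
    double total_ = 0.0;
};
```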
Measurement Port Implementation
- Use of Measurement port (i.e., instrumentation)
- independent of choice of measurement tool
- independent of choice of measurement type
- TAU performance observability component
- Implements the Measurement port
- Implements Timer, Control, Query, Event
- Port can be registered with the CCAFFEINE framework
- Components instrument to generic Measurement port
- Runtime selection of TAU component during execution
- TauMeasurement_CCA port implementation uses a specific TAU library for choice of measurement type
What’s Going On Here?
- Two instrumentation paths using the TAU API: directly from the application component, or through the performance component
- Two query and control paths using the TAU API to reach runtime TAU performance data
- Alternative implementations of the performance component can sit behind the same ports, built on the TAU API or on another measurement API
Components are “plug-and-play”
- One can choose from a set of equivalent port implementations based on performance measurements
- An outside agent can monitor and select an optimal working set of components
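The selection idea can be sketched as follows. A real optimizing component would measure through the Measurement port rather than timing inline, and every name here is illustrative:

```cpp
#include <chrono>
#include <vector>

// Equivalent implementations share one interface.
struct Impl {
    virtual ~Impl() {}
    virtual void run() = 0;
};

struct CheapImpl : Impl {
    void run() override {}
};

struct CostlyImpl : Impl {
    void run() override {
        volatile double s = 0.0;                 // busy work standing in for a slow algorithm
        for (long i = 0; i < 20000000L; ++i) s = s + static_cast<double>(i);
    }
};

// Probe each candidate once and keep the fastest.
Impl* pickFastest(const std::vector<Impl*>& candidates) {
    Impl* best = nullptr;
    double bestSeconds = 0.0;
    for (Impl* c : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        c->run();
        double dt = std::chrono::duration<double>(
                        std::chrono::steady_clock::now() - t0).count();
        if (best == nullptr || dt < bestSeconds) {
            best = c;
            bestSeconds = dt;
        }
    }
    return best;
}
```

An outside monitoring agent would repeat such probes over time and re-bind the port when the optimal working set changes.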
Component Optimizing Performance Results
Sandia National Laboratory
- DOE SciDAC project (http://cfrfs.ca.sandia.gov)
- Jaideep Ray
Component-based simulation and analysis
- Sandia’s CCAFFEINE framework
- Toolkit components for assembling flame simulation
- integrator, spatial discretizations, chemical/transport models
- structured adaptive mesh, load-balancers, error-estimators
- in-core, off-machine, data transfers for post-processing
- Components are C++ and wrapped F77 and C code
Kernel for 3D, adaptive mesh low Mach flame simulation
Flame Reaction-Diffusion Demonstration
CCAFFEINE
Meeting CCA Performance Engineering Goals?
- Language interoperability?
- SIDL and Babel give access to all supported languages
- TAU supports multi-language instrumentation
- Component interface instrumentation automated with PDT
- Platform interoperability?
- Implement observability component across platforms
- TAU runs wherever CCA runs
- Execution model transparent?
- TAU measurement support for multiple execution models
- Reuse with any CCA-compliant framework?
- Demonstrated with SIDL/Babel, CCAFFEINE, SCIRun
Component software is a natural model for developing applications for the Grid
- ICENI (Imperial College), CCAT / XCAT (U. Indiana)
Our work leverages abstraction power of CCA as well as the infrastructure of CCA frameworks
- Similarly leverage Grid infrastructure and services
- Mostly riding on the back of CCA framework development
Application-level performance view coupled with Grid resource assessment and monitoring
- More responsive to performance dynamics
- Beginning work with NWS forecaster in applications
Meeting CCA Performance Engineering Goals?
- Component performance knowledge?
- Representation and performance repository work to do
- Utilize effectively for deployment and steering
- Build repository with TAU performance database
- Performance of component compositions?
- Component-to-component performance
- Per connection instrumentation and measurement
- Utilize performance mapping support
- Ensemble-wide performance monitoring
- connect performance “producers” to “consumers”
- component-style implementation
Concluding Remarks
- Parallel component systems pose challenging performance analysis problems that require robust methodologies and tools.
- New performance problems will arise
- Instrumentation and measurement
- Data analysis and presentation
- Diagnosis and tuning
- Performance modeling
- Performance engineered components
- Performance knowledge, observation, query and control
Available from:
http://www.cs.uoregon.edu/research/paracomp/tau/tauprofile/dist/taucomponent.tar.gz
Support Acknowledgement
- TAU and PDT support:
  - Department of Energy (DOE)
    - DOE 2000 ACTS contract
    - DOE MICS contract
    - DOE ASCI Level 3 (LANL, LLNL)
    - U. of Utah DOE ASCI Level 1 subcontract
  - DARPA
  - NSF National Young Investigator (NYI) award
Object-Model Transfer in the General Video Game Domain
Alexander Braylan, Risto Miikkulainen
Department of Computer Science, The University of Texas at Austin
Abstract
A transfer learning approach is presented to address the challenge of training video game agents with limited data. The approach decomposes games into objects, learns object models, and transfers models from known games to unfamiliar games to guide learning. Experiments show that the approach improves prediction accuracy over a comparable control, leading to more efficient exploration. Training of game agents is thus accelerated by transferring object models from previously learned games.
Introduction
Reinforcement learning methods have achieved high levels of performance across a broad spectrum of games but often require large amounts of training data (Hausknecht et al. 2014; Mnih et al. 2015). Learning forward models of an agent’s environment can reduce the amount of required training and improve overall performance and flexibility. A forward model is a function that predicts the future state of an environment from its current state. However, when the data used to train a model is sparse, noisy, or high-dimensional, the model is at risk of suffering from generalization error in predictions made outside of the data seen during training. For example, the first few frames of a new game an agent observes may not be sufficient to inform the agent about how the game will behave later on.
One field of research that may help address the problem of generalization error is transfer learning (Taylor and Stone 2009; Pan and Yang 2010), the reuse of knowledge and skills learned in source tasks to accelerate and improve performance in a different target task. Applied to video games, the idea is that an agent with ample experience playing various source games can learn a better model of a new target game by transferring and combining knowledge from the source games. This paper considers the role of this combined transferred knowledge in forming an inductive bias — an assumption that constrains the space of possible models, and a way to guard against generalization error (Mitchell 1980).
The first challenge in transfer learning is mapping between variables in the source and target environments. A second challenge is integrating and applying the transferred knowledge. This paper responds to both questions in the context of general video game playing. The key step taken toward the first challenge is to decompose game environments into collections of objects. In an object-oriented formulation of a game environment, objects belong to object classes exhibiting similar behaviors across a wide variety of games (Diuk, Cohen, and Littman 2008). The variables of an object class are interpreted in the same way regardless of the game, simplifying the question of variable mapping. The approach of this paper toward the second challenge is to construct transfer ensembles out of models transferred from source games and scratch models newly trained for the target game. Each transfer ensemble uses a weighted average of predictions from its constituent models to predict the behavior of a target object. The weights are calculated based on how well each constituent model describes the data observed for the target object class. An important final step is retraining the source models to better fit target data.
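The weighted-average prediction described above can be sketched as follows; the `Constant` models, scalar state, and fixed weights are placeholders for the paper's learned object models and fit-based weights:

```cpp
#include <cstddef>
#include <vector>

// A model maps a (here, scalar) previous state to a predicted next state.
struct Model {
    virtual ~Model() {}
    virtual double predict(double state) = 0;
};

// Trivial stand-in for a transferred or scratch model.
struct Constant : Model {
    double c;
    explicit Constant(double v) : c(v) {}
    double predict(double) override { return c; }
};

// Transfer-ensemble prediction: weighted average of constituent predictions,
// where each weight reflects how well that constituent fits the target data.
double ensemblePredict(const std::vector<Model*>& models,
                       const std::vector<double>& weights,
                       double state) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < models.size(); ++i) {
        num += weights[i] * models[i]->predict(state);
        den += weights[i];
    }
    return den > 0.0 ? num / den : 0.0;
}
```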
Experimental results show that agents using transfer ensembles as models of object classes generalize better than using scratch models. After observing small quantities of in-sample training data, transfer ensembles achieve greater accuracy than scratch models when predicting the behavior of objects in subsequent out-of-sample test data. Agents that use learned models to inform their actions in an exploration task are shown to perform better when using the transfer learning approach than when learning from scratch.
Overall, the conclusion is that decomposing environments into objects and transferring object models across games is a promising approach for learning to play video games from small amounts of experience.
Background
This paper draws from research in general video game playing, model-based reinforcement learning, and transfer learning. Each is a broad field of research, so this section will review the topics most relevant to this work.
General Video Game AI
General Video Game AI (GVG-AI) is an open-source project that facilitates artificial intelligence research in general video game playing (Schaul 2013; Perez-Liebana et al. 2016). The GVG-AI project provides a framework for agents to interact with games and includes 60 games hand-coded in
Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
the Video Game Description Language (Ebner et al. 2013). The games are similar to games from the Atari 2600 console and other popular video games, including games inspired by Space Invaders, Frogger, Zelda, Lemmings, Seaquest, Sokoban, Dig Dug, Pacman, Star Fox, Plants vs. Zombies, among many others. Borrowing from several genres of video games presents agents with a wide diversity of challenges.
Additionally, there are a few advantages to using GVG-AI over an Atari emulator. GVG-AI objects can exhibit stochastic behavior. For Atari, stochasticity can so far only be added artificially to the initial game state or to the actions input by the player (Hausknecht and Stone 2015). Furthermore, each game in GVG-AI includes several levels with different initial conditions. These features allow for straightforward out-of-sample testing, crucial for measuring generalization error. Therefore, the experiments in this paper use the GVG-AI framework and games.
Model-Based Reinforcement Learning
Reinforcement learning problems challenge agents to take actions in response to observations of an environment in order to accumulate rewards over time (Sutton and Barto 1998). In the most common case, the environment is formally a Markov decision process (MDP), which consists of a set of states, actions, and a probabilistic transition function. This function governs the distribution of subsequent states given every current state and action. Model-based reinforcement learning methods rely on an estimate of the transition function. In contrast to model-free methods, they have rich representations of the environmental dynamics. Such representations yield various benefits: data efficiency, better planning and exploration, and robustness against changes in the reward function (Atkeson and Santamaria 1997; Asmuth and Littman 2011). Most approaches to model learning for high-dimensional environments use factored state representations, learning approximate transition functions on a manageable number of features of the state space.
Factored-State Model Learning for Video Games
Because video games are high-dimensional environments, the only approaches that learn models of video games use factored state representations. One approach to learning models of Atari games by Bellemare et al. (Bellemare, Veness, and Bowling 2013) predicts patches of pixels from neighboring patches using a compression algorithm, taking advantage of the tendency of game objects to depend only on nearby objects. Alternatively, a deep learning approach by Oh et al. (Oh et al. 2015) uses convolutional neural networks on pixel inputs from the entire game screen to predict future pixel values.
While some research exists on learning factored models to make the most out of few training samples (Degris, Sigaud, and Wuillemin 2006; Hester and Stone 2013; Jong and Stone 2007), both papers on model learning for video games focus on the scalability and power of the models rather than on sample efficiency. The neural networks used by Oh et al. trained on 500,000 frames per game, while the models in Bellemare et al. trained on 10 million frames. This paper investigates training on as few as 10 frames.
Object-Oriented Markov Decision Process
An Object-Oriented Markov Decision Process (OO-MDP) is a factorization that exploits the object-oriented nature of many reinforcement learning problems by re-framing the environment as a collection of objects (Diuk, Cohen, and Littman 2008). Compared to the high dimensionality of the full game state, the object-oriented environment is represented only by the relatively few attributes of each object. These attributes include the object’s internal state variables and variables representing relations with other objects. For example, geographic relationships are encoded by first-order propositions \( \text{on}(o_1, o_2), \text{touch}^N(o_1, o_2), \text{touch}^C(o_1, o_2), \text{etc.} \) Each object belongs to an object class; all instances of the same object class are assumed to follow the same transition function, thus only one model is needed for each object class. The assumption that many of these object classes are similar over multiple games is one motivation for choosing the object-oriented factorization for transfer learning.
Transfer Learning
The transfer of models between tasks is related to the theory behind choosing a good inductive bias. When a learner can sample from multiple related tasks, an inductive bias that works well for several of those tasks can be expected to work well in other related tasks (Baxter 2000). For example, learning multiple related tasks at the same time with some shared learning parameters can be better than learning each task individually (Caruana 1997). Similarly, source knowledge can inform the selection of inductive bias in target tasks. Some such approaches involve the use of an ensemble model, a weighted combination of source models where the weights depend on how well each source model predicts the target data (Dai et al. 2007; Gao et al. 2008; Eaton and DesJardins 2009). This is the type of approach taken in this paper.
Approach
This section first presents a method for learning a forward model of the transition function of each object class from scratch in GVG-AI games. It then presents a transfer learning method for reusing scratch models to learn models more quickly for new objects in target games.
Learning Object Models from Scratch
A forward model \( F_j \) of an object class \( j \) is a function that generates a prediction \( \hat{S}_t^i = F_j(S_{t-1}^i) \) for the state \( S_t^i \) of object instance \( i \) (belonging to object class \( j \)), given its previous state \( S_{t-1}^i \). The state \( S_{t-1}^i \) includes the object instance’s internal variables as well as global state variables such as the player action \( A_{t-1} \). Learning a model involves using observed data to alter the parameters of the model so as to improve its prediction accuracy. The three major decisions for specifying a model learner are on the model variables, the model’s functional form, and the learning algorithm, described in detail in the rest of this subsection.
Model Variables: Object Class Attributes
Figure 1: Simplified depiction of scratch and transfer approaches. The top row shows sequential game transitions containing object instance a1 of the dark creature class and b1 and b2 of the light creature class. The middle row shows data from each class being used to train their respective scratch models, m1 and m2. In the bottom row, observations of object instance c1 of the bumblebee class are used to train a transfer model m3 by adopting model m2 from the most similar class (the light creature) and retraining it. The scratch models serve both as stand-alone models to predict transitions and as building blocks for transfer models.

In addition to a visual display, GVG-AI reports a list of all objects in the game at each frame. For each of these objects, it discloses the occupied tile (its x and y position) as well as a token representing the object’s class, used for grouping different instances of the same class within a game.
The above position and object class information are sufficient to extract a set of attributes that capture most of the observable object behaviors in GVG-AI games. The most common behaviors encountered include deterministic, stochastic, and player-controlled movement; death; and spawning of other objects on the same tile. Spawning is a novel extension of the OO-MDP formulation to capture the effects of a new object instance appearing in a game.
The predicted next state of an object instance \( g \) consists of the following output attributes:
- Directional movement (North/South/East/West) at time \( t \), \( M_t = \{m^N_t, m^S_t, m^E_t, m^W_t\} \);
- Whether the object is alive and on screen, \( e_t \); and
- New spawns of other objects on the tile of this object, \( N_t = \{n^C_t(i) : \text{spawn}_t(i), \text{on}_t(g,i)\} \).
To clarify how the spawn attributes are managed, the proposition \( \text{spawn}_t(i) \) denotes whether an object \( i \) is a spawn – a newly observed object instance in a game at time \( t \). Every spawn observation is recorded as \( n^C_t(i) = 1 \) for every other object on the same tile as the spawn, with \( C(i) \) denoting the object class of object \( i \). For example, when a new bomb object appears on the same tile as an alien object, that alien object takes a value of 1 for the attribute \( n^\text{bomb}_t \).
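The spawn bookkeeping just described can be sketched as follows (the data layout, a mapping from object id to class and tile, is a hypothetical illustration, not the paper’s actual data structure):

```python
def spawn_attributes(objects, new_ids):
    """Record spawn attributes n^C for each pre-existing object.

    `objects` maps object id -> (object_class, tile); `new_ids` is the set
    of ids first observed this frame (the spawns).  Returns a mapping
    object_id -> {class: 1} marking which classes spawned on its tile.
    """
    attrs = {oid: {} for oid in objects if oid not in new_ids}
    for sid in new_ids:
        s_class, s_tile = objects[sid]
        for oid, (_, tile) in objects.items():
            if oid != sid and oid not in new_ids and tile == s_tile:
                attrs[oid][s_class] = 1   # e.g. the alien gets n^bomb = 1
    return attrs
```

With a bomb appearing on an alien’s tile, the alien’s record gains `{"bomb": 1}` while objects on other tiles record nothing.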
In addition to the above output attributes, the following input attributes account for factors upon which the predicted behaviors are conditioned:
- Directional movement at time \( t-1 \), \( M_{t-1} = \{m^{N}_{t-1}, m^{S}_{t-1}, m^{E}_{t-1}, m^{W}_{t-1}\} \);
- Other objects touching the object, \( H_{t-1} = \{h_{t-1,D,C(i)} : \text{touch}_{t-1}(g,i)\} \), \( D \in \{N, S, E, W, ON\} \);
- Whether the object was facing in the direction of its last movement, \( f_{t-1} \); and
- Action input by the player, \( A_{t-1} = \{a^{\text{NIL}}_{t-1}, a^{\text{UP}}_{t-1}, a^{\text{DOWN}}_{t-1}, a^{\text{LEFT}}_{t-1}, a^{\text{RIGHT}}_{t-1}, a^{\text{SHOOT}}_{t-1}\} \).
For example, whenever an object instance is adjacent or overlapping another object instance of a different class, its \( h \) attribute corresponding to the other object’s class and relative position takes a value of 1.
In addition to object class models, termination models can be learned that predict whether the game is won or lost at each frame from some global game variables. Termination models are not deeply explored in this paper but are helpful for experiments involving action selection. The two termination models, \( P(\text{WIN}|X) \) and \( P(\text{LOSE}|X) \), are conditioned on the following inputs:
- Existence of at least one live object instance of each class in the game, \( X_t = \{x^j_t : \text{exists}_{t-1}(j)\} \).

Each \( x^j_t \) represents whether any instance of the game’s object class \( j \) exists at all at the given time. This input is used because termination often coincides with the total disappearance of one of the game’s object classes.
**Functional Form and Learning Algorithm** In a factored state model, the prediction \( \hat{s}_t \) of the next state of an object is decomposed into predictions for each output variable \( s^k_t \in \hat{s}_t \). All of the specified object variables take values of either 0 or 1. The values of variables not observed by an object are 0 by default. The factored-state model produces a prediction between 0 and 1 for each output variable of an object instance. Each prediction represents the probability of the output variable taking a value of 1 given the observed input values. A logistic regression model is trained for each output variable of each object class using observations of all instances of the object class in a game, depicted in the first two thirds of Figure 1.
A logistic regression output is a sigmoidal function of its weighted inputs, taking the form \( \hat{s}^k_t = \frac{1}{1 + e^{-\sum_m x_m \beta_m}} \). The weight vector \( \beta \) consists of coefficients for the input variables and an intercept term, which are trained through gradient descent. The gradient descent algorithm iteratively decreases a cost computed from the values of the observed \( s_t \) and predicted \( \hat{s}_t \). Weights are gradually changed in the direction of the negative partial derivative of this cost with respect to the weights so as to reduce the cost. The cross entropy error \( E_t = -\sum_k \left( s^k_t \ln \hat{s}^k_t + (1 - s^k_t) \ln (1 - \hat{s}^k_t) \right) \) is used as the cost function to ensure convergence near a global minimum.
During gradient descent training, data points are presented in random order at each iteration to avoid biasing the learned model.
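This learner can be sketched in a few lines (names and data layout are hypothetical; the paper does not prescribe an implementation). One output unit is trained by stochastic gradient descent on the cross-entropy cost, shuffling the data each epoch:

```python
import math
import random

def train_logistic(data, n_inputs, lr=0.1, epochs=200):
    """Train one logistic-regression output unit by gradient descent.

    `data` is a list of (inputs, target) pairs with 0/1 inputs and targets.
    Returns the learned weights; the last entry is the intercept.
    """
    w = [0.0] * (n_inputs + 1)            # input coefficients + intercept
    for _ in range(epochs):
        random.shuffle(data)              # random order avoids ordering bias
        for x, s in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            pred = 1.0 / (1.0 + math.exp(-z))        # sigmoid output
            # For cross entropy, d(cost)/d(weight_m) = (pred - s) * x_m,
            # so stepping against the gradient reduces the cost.
            for m in range(n_inputs):
                w[m] -= lr * (pred - s) * x[m]
            w[-1] -= lr * (pred - s)                 # intercept input is 1
    return w
```

For instance, trained on transitions where an output attribute simply copies an input attribute, the unit converges to near-certain predictions.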
**Object Model Transfer**
The transfer learning approach in this paper relies on a simple and intuitive assumption: some object classes encountered in a target should behave similarly to other object classes previously encountered in sources. Therefore, when reasoning about an unknown target object, knowledge of previously seen similar source objects can help constrain and shape the distribution of predictions for the target object’s behavior. The measure of similarity depends on what is known about the target object, what is known about the source objects, and the ability to establish relationships between attributes of the different objects. This assumption forms an inductive bias which should help trained models generalize better to unseen target data.

The bottom third of Figure 1 is a sketch of how this approach uses source models to train transfer models. The following example serves to illustrate more tangibly how object class models can be transferred.
**An Illustrative Example of Walls and Avatars** In the game Chase, the player-controlled avatar must chase goats by moving freely in four directions except when blocked by wall objects. These movement rules for the avatar are common in several other games, such as the game Escape. In Escape, the avatar is again moved in four directions by the player and is blocked by walls. The Escape avatar can also push away box objects and disappear through hole objects. A transfer-learning agent who has played many games of Chase but has only seen a few frames of Escape should be able to use specific knowledge from Chase to make more accurate predictions about Escape than a total novice.
Upon encountering a wall for the first time in Escape, a novice agent with no Chase experience would have low certainty on the outcome of an attempt to move the avatar into the wall. In contrast, a transfer learning agent could notice some similar behavior between the Chase and Escape avatars, such as how the player inputs move both of them in similar directions, and reason that the interaction with the wall is also likely to be the same in both games.

The transfer learning method presented in this paper produces models that make predictions as described above when the source is Chase and the target is Escape. However, transferring object class models is not always so simple for all sources and targets. The following subsections explain additional challenges encountered and how they are addressed.
**Source Knowledge Selection** One objective for a transfer learning algorithm is an ability to choose automatically what knowledge to transfer from a potentially large pool of source models. Transferred knowledge may harm rather than improve performance in the target task, an outcome called negative transfer (Taylor and Stone 2009). In order to reduce negative transfer, the transfer learning algorithm may select its sources according to their expected contribution to performance.
This paper uses a measure of one-frame forward prediction accuracy to evaluate learned models, both for guiding source selection and for overall evaluation. Prediction accuracy of an object class model $F$ on object transition data $S = \{S_1, S_2, \ldots, S_T\}$ is calculated as
\[
\text{accuracy}(F, S) = \frac{1}{T} \sum_{i=2}^{T} \text{equal}(S_i, F(S_{i-1})),
\]
where $\text{equal}(S, \hat{S}) = 1$ if for all output attributes $k$, $s_k = \text{round}(\hat{s}_k)$, and 0 otherwise. This measure can also serve to evaluate the goodness of fit of a source model $F_{\text{SRC}}$ to target data $S_{\text{TRG}}$, referred to in this paper simply as the fitness of SRC to TRG, $\text{fitness}(F_{\text{SRC}}, S_{\text{TRG}})$. These accuracy measures range from 0 to 1, with higher values denoting better models.
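The accuracy measure can be sketched as follows (the state and model representations here, dicts of output attributes and a callable model, are assumptions for illustration):

```python
def accuracy(model, states):
    """One-frame prediction accuracy of `model` on a state sequence.

    `model` maps a state to a predicted next state, a dict of output
    attribute -> probability.  A transition counts as correct only when
    every rounded predicted attribute matches the observed one.
    """
    correct = 0
    for prev, cur in zip(states, states[1:]):
        pred = model(prev)
        if all(cur[k] == round(pred[k]) for k in cur):
            correct += 1
    return correct / len(states)   # 1/T normalizer, as in the formula above
```

The same function evaluates fitness when `model` is a source model and `states` are target transitions.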
The fitness measure serves to estimate which source models are likely to transfer well to target object classes. A source selection algorithm might additionally use other measures, such as the visual similarity of the icon used to represent the object or the frequency at which the object class model successfully transfers to other object classes. Such additional measures could further improve performance on source selection but are left to future research.
**Target Model as Ensemble of Source Models** The algorithm for transferring object class models is described in detail in this subsection. The algorithm starts with several trained object class models from source games. Then it observes some frames in a new target game. Some of the source models should predict later observations of the target game objects more accurately than new models trained from scratch on the observations made so far. Specifically, the assumption is that source models with high fitness to the target data should be more useful than those with low fitness. Therefore, for each object class in the target game, the algorithm builds a transfer ensemble out of both the pool of source models and the new scratch target model. The basic ensemble used is a forward model that makes predictions based on the weighted sum of its constituent forward models’ predictions. Each of its constituents is assigned a weight as follows:

1. The scratch target model gets a nominal weight of 1.
2. Each source model $j$ gets a nominal weight equal to $b_j - a/2$, where $b_j$ is the source model’s fitness and $a$ is the scratch accuracy. Subtracting a portion of the scratch accuracy increases the relative strength of weights given to fitter source models.
3. Source models with non-positive weights are dropped.
4. The final weights are normalized by the sum.
5. The source models are retrained by adjusting their internal coefficients through the same gradient descent method used for their initial training, minimizing prediction error on the new target data while leaving intact the parts of the source models uninformed by target data.
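Assuming the nominal source weight b_j - a/2 described above, steps 1 through 4 can be sketched as follows (step 5, retraining, is omitted, and the function name is illustrative):

```python
def ensemble_weights(source_fitnesses, scratch_accuracy):
    """Compute normalized ensemble weights from source fitness scores.

    The scratch model keeps a nominal weight of 1; each source j gets
    b_j - a/2; non-positive weights are dropped; finally all weights are
    normalized by their sum.
    """
    weights = {"scratch": 1.0}
    for name, b in source_fitnesses.items():
        w = b - scratch_accuracy / 2.0
        if w > 0:                        # drop non-positive source weights
            weights[name] = w
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

An ensemble prediction is then the weighted sum of the constituent models’ predictions under these weights.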
The transfer ensemble is expected to predict target objects in out-of-sample data better than the scratch target model alone because of the inductive bias: the ensemble is biased toward models that work in other games. Retraining improves the accuracy of the transfer ensembles and reduces the number of cases and the severity of negative transfer.
Experiments and Results
Out of the 60 GVG-AI games, 30 were used for exploratory testing and tuning of the system, while the other 30 were withheld for experiments. The reason for this division was to prevent bias from corrupting the results of the experiments.
The first experiments test the generalization ability of transfer ensembles compared to models learned from scratch. The hypothesis is that, after observing a small amount of in-sample training data from target games, transfer ensembles achieve higher accuracy on out-of-sample target data than scratch models. Initially, source models are learned from scratch using 500 frames of each source game. Then, for each of the 30 target games, scratch and transfer models are trained on 10 frames of target data. Each transfer model is an ensemble composed of object models from one of three disjoint sets of six randomly selected source games. The target game is never in the set of source games used for transfer. The ensemble is built according to the method described in the section above, using target training data both to calculate each source model’s fitness score and to retrain the models. After the 10 frames of training, 100 out-of-sample testing frames are produced from a different level of the target game, and accuracy is measured for each object class model produced by the scratch and transfer methods. All player actions are selected randomly.
The main measure of success is the outperformance in forward prediction accuracy of transfer models over scratch models in the testing frames for each object class in each target game. To reject the null hypothesis of the differences being due to chance, a $t$-statistic is used to compute a one-sided $p$-value. A total of 500 object class models from 30 games are tested.
Table 1 shows that the average increase in accuracy is statistically significant, and it can be concluded that the transfer ensemble approach for learning models of object classes is sound. Figure 2 displays how the transfer ensemble models compare in out-of-sample accuracy against models trained from scratch. Each dot represents one object class model; scratch and transfer perform equally when a dot falls on the line, transfer outperforms when a dot is in the upper-left, and scratch outperforms when a dot is in the lower-right.

Table 1: Mean accuracy $\mu$ of scratch (s) and transfer (t) models for each experimental setup with training frames $T$ and source set $S$. Also reported are the mean accuracy $\mu^a$ for models of the avatar – usually the most important object – and $t$-statistics for differences in object class model accuracy between transfer and scratch. Improvement in accuracy from using transfer is statistically significant.
<table>
<thead>
<tr>
<th>$T$</th>
<th>$S$</th>
<th>$\mu_s$</th>
<th>$\mu_t$</th>
<th>$\mu^a_s$</th>
<th>$\mu^a_t$</th>
<th>$t$</th>
<th>$p$</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>1</td>
<td>0.90</td>
<td>0.92</td>
<td>0.74</td>
<td>0.80</td>
<td>3.53</td>
<td><1%</td>
</tr>
<tr>
<td>10</td>
<td>2</td>
<td>0.92</td>
<td>0.94</td>
<td>0.75</td>
<td>0.82</td>
<td>3.46</td>
<td><1%</td>
</tr>
<tr>
<td>10</td>
<td>3</td>
<td>0.91</td>
<td>0.92</td>
<td>0.78</td>
<td>0.85</td>
<td>3.70</td>
<td><1%</td>
</tr>
</tbody>
</table>
These graphs show that improvement is consistent across many games with rare occurrences of negative transfer.
As shown in Table 1, the average difference between scratch and transfer performance is only about two percent. However, this average difference understates the significance of the improvement. Many object classes are easy enough to model that scratch achieves perfect accuracy. For example, wall objects never move and are modeled with perfect accuracy by scratch in all the tested target games. More important is the improvement in object classes that are hard to model, such as the avatar, which behaves with varying complexity depending on the game. Transfer achieves higher accuracy of about seven percent for the avatar models. Furthermore, Table 1 shows that the improvement is consistent using all three sets of source games. Overall, these results strongly support the hypothesis that the transfer method of this paper leads to improved out-of-sample accuracy for many object classes, with very little negative transfer.
The final experiments test how well scratch and transfer models perform relative to each other and relative to a random action-taking agent on the task of exploring the environment. Agents perform this task on three levels of the game Labyrinth, in which the agent must guide the avatar to reach a destination through maze-like levels containing a few spike tiles fatal to the avatar.
First, agents are given either 10, 50, or 100 frames of training before being evaluated on a fresh 500 frames. If the avatar dies or reaches the goal at any time, the game is restarted with the avatar in its original position. In all setups, the transfer agent is built using an ensemble of six random source games other than Labyrinth, with 500 frames of training for each source, and retrained on the 10/50/100 frames of target data. Termination models are trained in addition to object class models for both scratch and transfer in source and target games in order to help predict death.
After training, agents go through a testing phase of another 500 frames. During this phase, agents use their forward models to choose one-step actions most likely to take them to novel or least-recently visited states. Their decision-making works as follows. The agent remembers each unique game state it visits and the time frame at which it was last visited. For each action the agent predicts the next game state by using its object class models to predict the next state of each game object. Model outputs are treated as probabilities of setting the corresponding object variables to one.
Treating forward predictions as probabilistic samples in this way helps agents avoid getting stuck. The value of each action considered at each frame by an agent is calculated as $1 - \frac{t_{\hat{S}}}{t_A}$, where $t_A$ is the current time frame of the game and $t_{\hat{S}}$ is the last visited time frame of the predicted next state ($t_{\hat{S}} = 0$ if the state has never been visited). If the agent predicts death, the value is -1. At each frame the agent chooses whichever action has the maximum value. At the end of the testing phase, the total number of unique states visited is counted and used as the metric of evaluation. After the testing phase, an additional 500-frame phase is run to measure prediction accuracy as in the previous experiments. During this phase, all agents take random actions rather than informed ones, in order to ensure a fair comparison. The purpose is to determine the relationship between model accuracy and actual performance on an important task requiring action selection. Results are averaged over five experiments on each of the three levels of Labyrinth and each of the three training setups.
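The action-selection rule can be sketched as follows (state hashing and the prediction step are abstracted away; names and data layout are illustrative):

```python
def action_value(predicted_state, t_now, last_visit, predicts_death=False):
    """Value of one action under the exploration policy.

    `last_visit` maps a hashable state to the frame it was last seen;
    unseen states default to 0, so novel states get the maximum value 1.
    Predicted death overrides everything with value -1.
    """
    if predicts_death:
        return -1.0
    t_s = last_visit.get(predicted_state, 0)
    return 1.0 - t_s / t_now

def choose_action(predictions, t_now, last_visit):
    """Pick the action whose predicted next state is most novel.

    `predictions` maps each action to its predicted next state.
    """
    return max(predictions,
               key=lambda a: action_value(predictions[a], t_now, last_visit))
```

For example, an action predicted to reach a never-visited state scores 1.0 and beats an action returning to a recently seen state.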
Table 2: Average improvement in accuracy (Acc) and exploration (Exp) over random actions, by level and training size (N), for the scratch (S) and transfer (T) approaches. Transfer outperforms scratch in exploration even when they are tied for accuracy, as in the results of the 100-frame training scenarios. The conclusion is that transfer leads to better accuracy and exploration performance.
<table>
<thead>
<tr>
<th>Map</th>
<th>N</th>
<th>Acc$_S$</th>
<th>Acc$_T$</th>
<th>Exp$_S-R$</th>
<th>Exp$_T-R$</th>
</tr>
</thead>
<tbody>
<tr>
<td>L0</td>
<td>10</td>
<td>0.71</td>
<td>0.86</td>
<td>-3.2</td>
<td>45.8</td>
</tr>
<tr>
<td>L1</td>
<td>10</td>
<td>0.77</td>
<td>0.84</td>
<td>4.4</td>
<td>24.4</td>
</tr>
<tr>
<td>L2</td>
<td>10</td>
<td>0.69</td>
<td>0.92</td>
<td>2.4</td>
<td>12.6</td>
</tr>
<tr>
<td>L0</td>
<td>50</td>
<td>0.84</td>
<td>0.93</td>
<td>12.4</td>
<td>27.8</td>
</tr>
<tr>
<td>L1</td>
<td>50</td>
<td>0.89</td>
<td>0.91</td>
<td>-3.6</td>
<td>32.8</td>
</tr>
<tr>
<td>L2</td>
<td>50</td>
<td>0.88</td>
<td>0.94</td>
<td>3.4</td>
<td>13.4</td>
</tr>
<tr>
<td>L0</td>
<td>100</td>
<td>0.95</td>
<td>0.86</td>
<td>41.8</td>
<td>39.2</td>
</tr>
<tr>
<td>L1</td>
<td>100</td>
<td>0.91</td>
<td>0.9</td>
<td>12.2</td>
<td>26.6</td>
</tr>
<tr>
<td>L2</td>
<td>100</td>
<td>0.93</td>
<td>0.95</td>
<td>10.6</td>
<td>25.6</td>
</tr>
</tbody>
</table>
Figure 3: Trajectory maps of avatar during test phase, five runs overlaid for Level 0, agents trained on 10 frames. The maps show more space explored by transfer agents.
Table 2 shows how scratch and transfer agents perform in exploring three levels of Labyrinth given 10, 50, and 100 initial frames of training. Figure 3 shows an example of the avatars’ trajectories. The agents trained from scratch on only ten frames of the game are not highly accurate in out-of-sample experience and struggle to perform better than random exploration. In contrast, the transfer agents are more accurate, supporting the results of the previous experiments, and are also able to explore much more efficiently.
As the number of training frames increases to 100, scratch models catch up in accuracy to transfer models. Interestingly, the transfer agents still explore more efficiently on average than the scratch agents, despite not being any more accurate. One possible explanation for the outperformance unexplained by accuracy is that the transfer agent may be particularly more accurate in the more important predictions. For example, if the avatar dies from contact with a spike tile, the agent must restart from the original position. The transfer agent may more accurately predict this deadly interaction than a scratch agent when no spike traps appear during training, because the transfer agent is composed of some source models that predict death from contact with foreign objects. Being able to avoid death allows the transfer agent to keep exploring while the scratch agent has to start over. This advantage from using transfer is underestimated by the average accuracy measure, which is diluted by other predicted behaviors.
Discussion and Future Work
The methods explored in this paper - object-oriented factorization, transfer ensembles, and model retraining - help improve the sample efficiency of agents learning GVG-AI games. In these experiments, transfer-learning agents were more accurate than scratch agents when predicting future states. They were also more efficient at exploration, which is a widely useful ability for learning games. Future work will investigate the ultimate task of maximizing score, which is outside the scope of this paper because it requires the integration of planning and value approximation methods.
GVG-AI games contain diverse challenges that test how well the learning approach generalizes across games. Crucially, its games also have stochastic behaviors and multiple levels, which test how well agents generalize across experiences. However, there are some challenges that are not covered by the GVG-AI domain, and an important path for future work is to improve the robustness of this approach by adapting it to other domains. For example, using transfer to reduce generalization error could be useful in domains with noisy or high-dimensional observation spaces.
Conclusion
This paper demonstrated a model-based transfer learning approach for training video game agents from very little data. The approach constructs an ensemble out of source object models and uses the limited target data both to choose the ensemble weights and to retrain the final model. Although both scratch and transfer models achieve global minima in prediction errors during training, experiments showed consistently higher out-of-sample performance for transfer models across diverse GVG-AI games. Transfer agents showed particular improvement in modeling important objects such as avatars, which was useful for more quickly exploring unfamiliar game maps. Artificial agents can use this approach to accelerate early-stage learning and quickly adapt to novel situations.
References
Outline
- XML parsers
- XPath
- XQuery
- XML publishing
- Background (reading)
- http://www.w3.org/TR/xmlquery-use-cases/ (several XQuery examples)
- http://www.galaxquery.org/ (nice command-line tool)
XML Parsers - Overview
• What do we do if we need to “read” an XML document?
– This is typically called parsing
• Navigate through XML trees
• Construct XML trees
• Output XML
– SOAP libraries use this technique to read and write XML messages
Two XML Parsers
• Two main APIs for XML parsers
– DOM (Document Object Model)
• Tree structure
– SAX (Simple Api for Xml)
• Read individual tags
• Built for programmers who do not want to write their own parsers
• Available for multiple languages
DOM
- Platform- and language-neutral interface
- Allows one to build documents, navigate their structure, and add, modify, or delete elements and content.
DOM Example
![XML Document and Document object tree]
Figure 1: Hierarchical structure of a document object
DOM Cont.
- **Tree** for XML works fine since XML is hierarchically organised
- Every XML document can be represented as a tree
Some Insights in API
```java
org.w3c.dom.Document
NodeList getElementsByTagName(java.lang.String tagname)
Returns a NodeList of all the Elements with a given tag name in the order in which they are encountered in a preorder traversal of the Document tree.
Element getDocumentElement()
This is a convenience attribute that allows direct access to the child node that is the root element of the document.
```
http://java.sun.com/webservices/jaxp/dist/1.1/docs/api/org/w3c/dom/package-summary.html
Some Insights in API
```java
org.w3c.dom.Node

NodeList getChildNodes()
    A NodeList that contains all children of this node.

Node getFirstChild()
    The first child of this node.

Node getLastChild()
    The last child of this node.

Node getParentNode()
    The parent of this node.
```
```java
import java.io.IOException;                      // Exception handling
import org.w3c.dom.*;                            // DOM interface
import org.apache.xerces.parsers.DOMParser;      // Parser (to DOM)

class Hello {
    public static void main(String[] args) {
        String filename = args[0];
        System.out.print("The document element of " + filename + " is ... ");
        try {
            DOMParser dp = new DOMParser();
            dp.parse(filename);
            Document doc = dp.getDocument();
            Element docElm = doc.getDocumentElement();
            System.out.println(docElm.getNodeName() + ".");
        } catch (Exception e) {
            System.out.println("\nError: " + e.getMessage());
        }
    }
}
```
http://www.troubleshooters.com/tpromag/200103/codexercises.htm
```xml
<?xml version="1.0"?>
<workers>
  <contractor>
    <info lname="albertson" fname="albert" ssno="123456789"/>
    <job>C++ programmer</job>
    <hiredate>1/1/1999</hiredate>
  </contractor>
  <contractor>
    <info lname="bartholemew" fname="bart" ssno="223456789"/>
    <job>Technology Director</job>
    <hiredate>1/1/2000</hiredate>
    <firedate>1/11/2000</firedate>
  </contractor>
  <partner>
    <info lname="carlson" fname="carl" ssno="323456789"/>
    <job>labor law</job>
    <hiredate>10/1/1979</hiredate>
  </partner>
  <contractor>
    <info lname="denby" fname="dennis" ssno="423456789"/>
    <job>cobol programmer</job>
    <hiredate>1/1/1959</hiredate>
  </contractor>
  <employee>
    <info lname="edwards" fname="eddie" ssno="523456789"/>
    <job>project manager</job>
    <hiredate>4/4/1996</hiredate>
  </employee>
  <partner>
    <info lname="fredericks" fname="fred" ssno="623456789"/>
    <job>intellectual property law</job>
    <hiredate>10/1/1991</hiredate>
  </partner>
</workers>
```
```java
class ContractorLastNamePrinter {
    ContractorLastNamePrinter(Document doc) {
        System.out.println();
        try {
            //*** GET DOCUMENT ELEMENT BY NAME ***
            NodeList nodelist = doc.getElementsByTagName("workers");
            Element elm = (Element) nodelist.item(0);
            //*** GET ALL contractors BELOW workers ***
            NodeList contractors = elm.getElementsByTagName("contractor");
            for (int i = 0; i < contractors.getLength(); i++) {
                Element contractor = (Element) contractors.item(i);
                //*** NO NEED TO ITERATE info ELEMENTS, ***
                //*** WE KNOW THERE'S ONLY ONE ***
                Element info =
                    (Element) contractor.getElementsByTagName("info").item(0);
                System.out.println("Contractor last name is " + info.getAttribute("lname"));
            }
        } catch (Exception e) {
            System.out.println("ContractorLastNamePrinter() error: " + e.getMessage());
        }
    }
}
```
SAX
• Access to XML information as a sequence of events
– Document is scanned from start to end
• Faster than DOM
• You can create your own object model
• You are responsible for interpreting all the objects read by the parser
SAX Events
• the start of the document is encountered
• the end of the document is encountered
• the start tag of an element is encountered
• the end tag of an element is encountered
• character data is encountered
• a processing instruction is encountered
<purchase-order>
<date>2005-10-31</date>
<number>12345</number>
<purchased-by>
<name>My name</name>
<address>My address</address>
</purchased-by>
<order-items>
<item>
<code>687</code>
<type>CD</type>
<label>Some music</label>
</item>
<item>
<code>129851</code>
<type>DVD</type>
<label>Some video</label>
</item>
</order-items>
</purchase-order>
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;
private static final class SaxHandler extends DefaultHandler {
// invoked when document-parsing is started.
public void startDocument() throws SAXException {
System.out.println("Document processing started");
}
// notifies about finish of parsing:
public void endDocument() throws SAXException {
System.out.println("Document processing finished");
}
// invoked on the start tag of element 'qName':
public void startElement(String uri, String localName,
String qName, Attributes attrs) throws SAXException {
if (qName.equals("purchase-order")) {
} else if (qName.equals("date")) {
/* if (...) */
} else {
throw new IllegalArgumentException("Element "+
qName + " is not allowed here");
}
}
// we leave element 'qName' without any actions:
public void endElement(String uri, String localName, String qName)
throws SAXException {
// do nothing;
}
}
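Wiring a handler into a parser takes only a few lines. The sketch below (the class name `ItemCounter` is illustrative) counts the `item` elements of a purchase order, showing how `startElement` fires once per start tag in document order:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

class ItemCounter extends DefaultHandler {
    int items = 0;

    // SAX calls this once per start tag, in document order.
    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        if (qName.equals("item")) items++;
    }

    static int count(String xml) throws Exception {
        ItemCounter handler = new ItemCounter();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), handler);
        return handler.items;
    }

    public static void main(String[] args) throws Exception {
        String po = "<purchase-order><order-items>"
            + "<item><code>687</code></item>"
            + "<item><code>129851</code></item>"
            + "</order-items></purchase-order>";
        System.out.println("items = " + count(po));  // items = 2
    }
}
```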
Outline
- XML parsers
- XPath
- XQuery
- XML publishing
Querying XML Data
- XPath = simple navigation through XML tree
- XQuery = the SQL of XML
- XSLT = recursive traversal
- eXtensible Stylesheet Language Transformation
- will not discuss
- XQuery and XSLT build on XPath
Sample Data for Queries
```xml
<bib>
<book>
<publisher> Addison-Wesley </publisher>
<author> Serge Abiteboul </author>
<author> Victor Vianu </author>
<title> Foundations of Databases </title>
<year> 1995 </year>
</book>
<book price="55">
<publisher> Freeman </publisher>
<author> Jeffrey D. Ullman </author>
<title> Principles of Database and Knowledge Base Systems </title>
<year> 1998 </year>
</book>
</bib>
```
Data Model for XPath
XPath views the document as a tree of nodes: the root node of the tree sits above the root element `bib` (the root element is a child of the root node), and elements, attributes, and text are all nodes. The original slide showed this tree as a diagram.
XPath: Simple Expressions
/bib/book/year
Result:
<year> 1995 </year>
<year> 1998 </year>
/bib/paper/year
Result: empty (there were no papers)
XPath: Restricted Kleene Closure
//author
Result:
<author> Serge Abiteboul </author>
<author> <first-name> Rick </first-name>
         <last-name> Hull </last-name> </author>
/bib//first-name
Result: <first-name> Rick </first-name>
XPath: Text Nodes
/bib/book/author/text()
Result:
Serge Abiteboul
Jeffrey D. Ullman
Rick Hull does not appear because his name is wrapped in first-name/last-name elements, so his author element has no direct text child
Functions in XPath:
- `text()` = matches the text value
- `node()` = matches any node (= * or @* or `text()`)
- `name()` = returns the name of the current tag
XPath: Wildcard
//author/*
Result:
<first-name> Rick </first-name>
<last-name> Hull </last-name>
* Matches any element
XPath: Attribute Nodes
/bib/book/@price
Result: “55”
@price means that price has to be an attribute
XPath: Predicates
/bib/book/author[first-name]
Result: <author> <first-name> Rick </first-name>
<last-name> Hull </last-name>
</author>
Predicate corresponds to an IF/THEN statement. If it is true, the Element will be selected!
General: parent[child someTestHere]
**XPath: More Predicates**
/bib/book/author[firstname][address[.//zip][city]]/lastname
Result: <lastname> … </lastname>
<lastname> … </lastname>
**XPath: More Predicates**
/bib/book[@price < "60"]
/bib/book[author/@age < "25"]
/bib/book[author/text()]
XPath: Summary
bib matches a bib element
* matches any element
/ matches the root element
/bib matches a bib element under root
bib/paper matches a paper in bib
bib//paper matches a paper in bib, at any depth
//paper matches a paper at any depth
paper|book matches a paper or a book
@price matches a price attribute
bib/book/@price matches price attribute in book, in bib
bib/book[@price<"55"]/author/lastname matches…
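The same path expressions can be evaluated from Java with the standard `javax.xml.xpath` API. A minimal sketch (the class name and the inlined data are illustrative):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

class XPathDemo {
    // Evaluate an XPath expression against an XML string, as a node set.
    static NodeList eval(String xml, String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xp = XPathFactory.newInstance().newXPath();
        return (NodeList) xp.evaluate(path, doc, XPathConstants.NODESET);
    }

    public static void main(String[] args) throws Exception {
        String bib = "<bib><book><year>1995</year></book>"
                   + "<book price=\"55\"><year>1998</year></book></bib>";
        NodeList years = eval(bib, "/bib/book/year");
        for (int i = 0; i < years.getLength(); i++)
            System.out.println(years.item(i).getTextContent());  // 1995 then 1998
    }
}
```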
Outline
• XML parsers
• XPath
• XQuery
• XML publishing
XQuery Motivation
• XQuery is a strongly typed query language
• Builds on XPath
• XPath expressivity insufficient
– no join queries (as in SQL)
– no changes to the XML structure possible
– no quantifiers (as in SQL)
– no aggregation and functions
FLWR (“Flower”) Expressions
• XQuery uses XPath to express more complex queries
```xml
for ...
let ...
where ...
return ...
```
Sample Data for Queries
```xml
<bib>
<book>
<publisher> Addison-Wesley </publisher>
<author> Serge Abiteboul </author>
<author> <first-name> Rick </first-name>
         <last-name> Hull </last-name> </author>
<author> Victor Vianu </author>
<title> Foundations of Databases </title>
<year> 1995 </year>
</book>
<book price="55">
<publisher> Freeman </publisher>
<author> Jeffrey D. Ullman </author>
<title> Principles of Database and Knowledge Base Systems </title>
<year> 1998 </year>
</book>
</bib>
```
Basic FLWR
Find all book titles published after 1995:
```xml
<bib> {
for $x in doc("bib.xml")/bib/book
where $x/year/text() > 1995
return $x/title
} </bib>
```
Result:
```xml
<bib><title> Principles of Database and Knowledge Base Systems </title></bib>
```
FLWR vs. XPath expressions
Equivalently
```xml
for $x in doc("bib.xml")/bib/book[year/text() > 1995]/title
return $x
```
And even shorter:
```xml
doc("bib.xml")/bib/book[year/text() > 1995]/title
```
Result Structuring
- Find all book titles and the year when they were published:
```xml
for $x in doc("bib.xml")/bib/book
return <answer>
<title>{ $x/title/text() } </title>
<year>{ $x/year/text() } </year>
</answer>
```
Braces { } denote evaluation of enclosed expression
Result Structuring
- Notice the use of “{“ and “}”
- What is the result without them?
```xml
for $x in doc("bib.xml")/bib/book
return <answer>
<title> $x/title/text() </title>
<year> $x/year/text() </year>
</answer>
```
XQuery Joins and Nesting
For each author of a book by Addison-Wesley, list all books she published:
```xml
for $b in doc("bib.xml")/bib,
$a in $b/book[publisher/text()="Addison-Wesley"]/author
return <result>
{ $a,
for $t in $b/book[author/text()=$a/text()]/title
return $t
}
</result>
```
In the `return` clause comma concatenates XML fragments
XQuery Nesting
Result:
```xml
<result>
<author>Jones</author>
<title> abc </title>
<title> def </title>
</result>
<result>
<author>Smith</author>
<title> ghi </title>
</result>
```
Aggregates
Find all books with more than 3 authors:
```xml
for $x in doc("bib.xml")/bib/book
where count($x/author)>3
return $x
```
- **count** = a function that counts
- **avg** = computes the average
- **sum** = computes the sum
- **distinct-values** = eliminates duplicates
Aggregates
Same thing:
```
for $x in doc("bib.xml")/bib/book[count(author)>3]
return $x
```
Aggregates
Print all authors who published more than 3 books – be aware of duplicates!
```
for $b in doc("bib.xml")/bib,
$a in distinct-values($b/book/author/text())
where count($b/book[author/text()=$a])>3
return <author> { $a } </author>
```
Aggregates
Find books whose price is larger than average:
```xml
for $b in doc("bib.xml")/bib
let $a:=avg($b/book/price/text())
for $x in $b/book
where $x/price/text() > $a
return $x
```
Result Structure
“Flatten” the authors, i.e. return a list of (author, title) pairs
```xml
for $b in doc("bib.xml")/bib/book,
$x in $b/title/text(),
$y in $b/author/text()
return <answer>
<title> { $x } </title>
<author> { $y } </author>
</answer>
```
Result:
```xml
<answer>
<title> abc </title>
<author> efg </author>
</answer>
<answer>
<title> abc </title>
<author> hkj </author>
</answer>
```
Result Structure
For each author, return all titles of her/his books
```
for $b in doc("bib.xml")/bib, $x in $b/book/author/text()
return
<answer>
<author> { $x } </author>
{ for $y in $b/book[author/text()=$x]/title
return $y }
</answer>
```
Result:
```
<answer>
<author> efg </author>
<title> abc </title>
<title> klm </title>
. . . .
</answer>
```
What about duplicate authors?
Result Structure
Eliminate duplicates:
```
for $b in doc("bib.xml")/bib, $x in distinct-values($b/book/author/text())
return
<answer>
<author> { $x } </author>
{ for $y in $b/book[author/text()=$x]/title
return $y }
</answer>
```
**SQL and XQuery Side-by-side**
**Product(pid, name, maker)**
**Company(cid, name, city)**
Find all products made in Seattle
**SQL**
```
SELECT x.name
FROM Product x, Company y
WHERE x.maker=y.cid
and y.city="Seattle"
```
**XQuery**
```
for $r in doc("db.xml")/db,
$x in $r/Product/row,
$y in $r/Company/row
where
$x/maker/text()=$y/cid/text()
and $y/city/text() = "Seattle"
return $x/name
```
---
**XML Example**
```xml
<db>
<product>
<row>
<pid> ??? </pid>
<name> ??? </name>
<maker> ??? </maker>
</row>
<row> .... </row>
</product>
....
</db>
```
XQuery Variables
- **for $x in expr** -- binds $x to each value in the list expr
- **let $x := expr** -- binds $x to the entire list expr
- Useful for common sub-expressions and for aggregations
XQuery: LET
Find all publishers that published more than 100 books:
```xml
<big_publishers>
{ for $p in distinct-values(//publisher/text())
let $b := /db/book[publisher/text() = $p]
where count($b) > 100
return <publisher> { $p } </publisher>
}
</big_publishers>
```
$b is a collection of elements, not a single element
count = an aggregate function that returns the number of elements
FOR vs. LET
FOR
• Binds *node variables* → iteration
LET
• Binds *collection variables* → one value
---
FOR
```xml
for $x in /bib/book
return <result> { $x } </result>
```
LET
```xml
let $x := /bib/book
return <result> { $x } </result>
```
---
Returns:
```
<result> <book>...</book> </result>
<result> <book>...</book> </result>
<result> <book>...</book> </result>
...
```
Returns:
```
<result> <book>...</book>
         <book>...</book>
         ...
</result>
```
Collections in XQuery
- Ordered and unordered collections
- `/bib/book/author/text()` = an ordered collection: result is in document order
- `distinct-values(/bib/book/author/text())` = an unordered collection: the output order is implementation dependent
- `let $a := /bib/book` → $a is a collection
- `$b/author` → a collection (several authors...)
```xquery
return <result> { $b/author } </result>
```
SQL and XQuery Side-by-side
Product(pid, name, maker, price) Find all product names, prices, sort by price
```sql
SELECT x.name, x.price
FROM Product x
ORDER BY x.price
```
```xquery
for $x in doc("db.xml")/db/Product/row
order by $x/price/text()
return <answer>
{ $x/name, $x/price }
</answer>
```
XQuery’s Answer
<answer>
<name> abc </name>
<price> 7 </price>
</answer>
<answer>
<name> def </name>
<price> 23 </price>
</answer>
Notice: this is NOT a well-formed document!
(WHY ???)
Producing a Well-Formed Answer
<myQuery>
{ for $x in doc("db.xml")/db/Product/row
order by $x/price/text()
return <answer>
{ $x/name, $x/price }
</answer>
}
</myQuery>
XQuery’s Answer
```xml
<myQuery>
<answer>
<name> abc </name>
<price> 7 </price>
</answer>
<answer>
<name> def </name>
<price> 23 </price>
</answer>
.
.
</myQuery>
```
Now it is well-formed!
SQL and XQuery Side-by-side
For each company with revenues < 1M count the products over $100
```sql
SELECT y.name, count(*)
FROM Product x, Company y
WHERE x.price > 100 and x.maker=y.cid and y.revenue < 1000000
GROUP BY y.cid, y.name
```
```xml
for $r in doc("db.xml")/db,
$y in $r/Company/row[revenue/text()<1000000]
return
<proudCompany>
<companyName> { $y/name/text() } </companyName>
<numberOfExpensiveProducts>
{ count($r/Product/row[maker/text()=$y/cid/text()][price/text()>100]) }
</numberOfExpensiveProducts>
</proudCompany>
```
SQL and XQuery Side-by-side
Find companies with at least 30 products, and their average price
```sql
SELECT y.name, avg(x.price)
FROM Product x, Company y
WHERE x.maker=y.cid
GROUP BY y.cid, y.name
HAVING count(*) > 30
```
```xml
for $r in doc("db.xml")/db, $y in $r/Company/row
let $p := $r/Product/row[maker/text()=$y/cid/text()]
where count($p) > 30
return
  <theCompany>
    <companyName> { $y/name/text() } </companyName>
    <avgPrice> { avg($p/price/text()) } </avgPrice>
  </theCompany>
```
XQuery
Summary:
• FOR-LET-WHERE-RETURN = FLWR
Practical Example: Galax
```
$ more iis0.xq
<bib> {
for $x in doc("bib.xml")/bib/book/author[first-name]
return <result> {$x} </result>
} </bib>
$ galax-run iis0.xq
<bib><result><author><first-name>Rick</first-name>
<last-name>Hull</last-name>
</author></result></bib>
http://www.galaxquery.org
```
Outline
- XML parsers
- XPath
- XQuery
- XML publishing
XML from/to Relational Data
- XML publishing:
- relational data $\rightarrow$ XML
- XML storage:
- XML $\rightarrow$ relational data
Client/server DB Apps
Relational Database $\rightarrow$ Network $\rightarrow$ Application
Tuple streams $\rightarrow$ SQL
XML Publishing
- Relational schema:
- Student(sid, name, address)
- Course(cid, title, room)
- Enroll(sid, cid, grade)
First thing to do: design the DTD:
```xml
<!ELEMENT xmlview (course*)>
<!ELEMENT course (title, room, student*)>
<!ELEMENT student (name, address, grade)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT address (#PCDATA)>
<!ELEMENT grade (#PCDATA)>
<!ELEMENT title (#PCDATA)>
```
Group by courses: redundant representation of students
Other representations possible too
Now we write an XQuery to export relational data → XML
Note: result is in the right DTD
```xml
<xmlview>
{ for $x in /db/Course/row
return
<course>
<title> { $x/title/text() } </title>
...
</course>
}
</xmlview>
```
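The export can also be sketched in plain Java without a database, by holding the `Course` rows in memory (the class name `XmlPublisher` and the row layout are assumptions for illustration; escaping of XML special characters is omitted):

```java
import java.util.List;

class XmlPublisher {
    // Render one <course> element per row (title, room) inside <xmlview>.
    // A sketch of the export query above, with rows held in memory
    // instead of coming from a relational database.
    static String publish(List<String[]> courseRows) {
        StringBuilder sb = new StringBuilder("<xmlview>");
        for (String[] row : courseRows) {
            sb.append("<course><title>").append(row[0])
              .append("</title><room>").append(row[1])
              .append("</room></course>");
        }
        return sb.append("</xmlview>").toString();
    }

    public static void main(String[] args) {
        System.out.println(publish(List.of(
            new String[]{"Operating Systems", "EL101"},
            new String[]{"Databases", "EL205"})));
    }
}
```

A real exporter would also nest the enrolled students per course and escape `<`, `>`, and `&` in the data.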
**XML Publishing**
Query: find Mary’s grade in Operating Systems
**XQuery**
```xml
for $x in /xmlview/course[title/text()="Operating Systems"],
    $y in $x/student[name/text()="Mary"]
return <answer> { $y/grade/text() } </answer>
```
**SQL**
```sql
SELECT Enroll.grade
FROM Student, Enroll, Course
WHERE Student.name='Mary' and Course.title='OS'
and Student.sid = Enroll.sid and Enroll.cid = Course.cid
```
Can be done automatically
XML Publishing
How do we choose the output structure?
- Determined by agreement with partners/users
- Or dictated by committees
- XML dialects (called *applications*) = DTDs
- XML Data is often nested, irregular, etc
- No normal forms for XML
Conclusion
- **XML parsers** are required for detailed usage of XML encoded data
- **XPath** provides a simple query language
- **XQuery** is an enhanced version of XPath (SQL like)
Runtime Optimizations for a Java DSM Implementation
R. Veldema† R.F.H. Hofman‡ R.A.F. Bhoedjang† H.E. Bal‡
Department of Computer Science Vrije Universiteit
Amsterdam, The Netherlands
{rveldema,rutger,ral}@cs.vu.nl
†Department of Computer Science Cornell University
Ithaca, NY, USA
raoul@cs.cornell.edu
ABSTRACT
Jackal is a fine-grained distributed shared memory implementation of the Java programming language. Jackal implements Java’s memory model and allows multithreaded Java programs to run unmodified on distributed-memory systems.
This paper focuses on Jackal’s runtime system, which implements a multiple-writer, home-based consistency protocol. Protocol actions are triggered by software access checks that Jackal’s compiler inserts before object and array references. We describe optimizations for Jackal’s runtime system, which mainly consist of discovering opportunities to dispense with flushing of cached data. We give performance results for different runtime optimizations, and compare their impact with the impact of one compiler optimization. We find that our runtime optimizations are necessary for good Jackal performance, but only in conjunction with the Jackal compiler optimizations described in [24]. As a yardstick, we compare the performance of Java applications run on Jackal with the performance of equivalent applications that use a fast implementation of Java’s Remote Method Invocation (RMI) instead of shared memory.
1. INTRODUCTION
Jackal is a compiler-supported, fine-grained distributed shared memory (DSM) system for Java. The system can run unmodified, multithreaded Java programs on a cluster of workstations. Together, Jackal’s compiler and runtime system (RTS) hide the distributed nature of the cluster: Jackal programs use threads and shared variables instead of message-passing abstractions like Remote Method Invocation [19]. This paper focuses on the implementation of the RTS and its optimizations, which mainly consist of discovering opportunities to dispense with flushing of cached data to main memory.
Jackal resembles fine-grained DSM systems like Shasta [22] and Sirocco [13] in that it uses a small unit of coherence that is managed entirely by software. In Jackal, the unit of coherence is called a region. Each region contains either a complete Java object or a section of a Java array. In contrast with page-based DSMs, Jackal uses software access checks to determine if a region is present in local memory and up-to-date. If an access check detects that a region is absent or out-of-date, it invokes Jackal’s runtime system which implements a multiple-writer cache coherence protocol that resolves read and write misses. A region is managed by its home node, which is the processor that created the associated object. Jackal does not use a single-writer protocol, because that would require the compiler to inform the runtime system when a read/write operation has finished; that would increase code size and protocol overhead, and pose complications for the compiler in (re)moving access checks.
Jackal conforms to the Java memory model, which allows caching of objects in (thread) local memory and logically requires complete flushing of local memory upon each entry and exit of a synchronized block. In our system, main memory equates to an object’s home node, and local memory to the requesting machine’s memory. Flushing regions and subsequently requesting them again may cause a large overhead under a naive implementation (especially using the class libraries which perform many unnecessary synchronizations [1]). To reduce this overhead, we investigate possibilities offered by the Java memory model to cache regions across a synchronization operation. This is possible for regions that are read-shared and regions that are accessed by a single machine.
Jackal uses an optimizing Java compiler to generate access checks. In the optimization passes of the compiler, access checks may be removed, lifted or combined. For example, array accesses may be combined and lifted from a loop that (partially) traverses the array, or accesses may be aggregated when the compiler determines that an object is used together with its referenced subobjects. The compiler optimizations are described in detail in [24].
The contributions of this paper are as follows:
• We describe various RTS optimizations to reduce the number of region flushes.
• We measure the impact of the RTS optimizations for several Java applications and compare them to the impact of compiler optimizations.
The paper is structured as follows. Section 2 treats Java’s memory model. Section 3 describes Jackal and its implementation. Section 4 summarizes Jackal’s compiler optimizations and describes our new RTS optimizations. Section 3 and an extended version of Subsection 4.1 appeared earlier in [24], but we repeat these introductory sections here to make this paper more self-contained. Section 5 studies the impact of the RTS optimizations on Jackal’s performance on a Myrinet-based cluster computer. Section 6 discusses related work. Finally, Section 7 concludes.
2. JAVA’S MEMORY MODEL
We briefly summarize Java’s memory model; for a detailed description we refer to the language specification [10] and Pugh’s critique of the memory model [21].
Java’s memory model specifies that each thread has a working memory, which can be considered a thread-private cache. The entire program has a main memory which is used for communication between threads. The data modified by a thread is flushed to main memory upon encountering a synchronization point. (In this respect, the model resembles release consistency [8, 16].) Synchronization points in Java correspond to the entry and exit of synchronized blocks. These are implemented as calls that lock and unlock an object. A lock operation conceptually copies all of a thread’s working memory to main memory and invalidates the working memory. For each storage location, the first access to that location after a lock operation will copy the storage location’s value from main memory into working memory.
Both lock and unlock operations must flush a thread’s working memory, but an implementation is allowed to flush earlier, even after every write operation. If a thread updates an object from outside a synchronized block, Java does not specify when other threads will see the update.
In contrast with entry consistency [3], Java’s memory model does not couple locks to specific objects or fields. In particular, different fields of one object may be protected by different locks, so that those fields can be updated concurrently without introducing race conditions.
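These rules are what make the classic guarded counter safe in ordinary Java (a minimal illustration, not Jackal code): because `inc` and `get` are synchronized, every entry and exit is a synchronization point that flushes the thread's working memory, so the two threads never lose an update:

```java
class Counter {
    private int value = 0;

    // Entry/exit of these synchronized methods are synchronization points:
    // each thread's working memory is flushed, so increments are never lost.
    synchronized void inc() { value++; }
    synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.inc(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());  // always 200000
    }
}
```

Without the `synchronized` keyword the model gives no guarantee about when one thread's writes become visible to the other.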
3. IMPLEMENTATION
Jackal consists of an optimizing Java compiler and a runtime system. The compiler translates Java sources directly into executable code rather than Java bytecode. (The Jackal runtime system, however, contains a dynamic bytecode compiler [19] to support dynamic class loading.) The compiler also generates software access checks and performs several optimizations to reduce the number and cost of these checks. The runtime system implements Jackal’s multiple-writer cache-coherence protocol. The following sections describe the main components of the implementation. Optimizations are described separately in Section 4.
3.1 Regions
A region is Jackal’s unit of coherence. A region is a contiguous chunk of virtual memory that contains one Java object or a contiguous section of a Java array. Jackal partitions arrays into fixed-size, 256-byte regions (to reduce false sharing inside large arrays).
Every region has a region header that contains a pointer to the start of the Java data stored in the region, a pointer to the region’s twin (see Section 3.3), and DSM status information. Each object or array has a Java object header that contains a pointer to a virtual-function table and object status flags. To keep array data contiguous, regions and their headers are stored separately (see Fig. 1).
The processor that allocates a region is called the region’s home node. The home node always provides storage for the region and plays an important role in Jackal’s coherence protocol (see Section 3.3). Non-home nodes can cache the region and may discard their copy and its memory when they see fit (e.g., during garbage collection).
3.2 Address-Space Management
Jackal stores all regions in a single, shared virtual address space. Each region occupies the same virtual-address range on all processors that store a copy of the region. Regions are named and accessed through their virtual address; this scheme avoids translation of object pointers.
Fig. 2 shows a processor’s address-space layout. The shared virtual address space is split into \( P \) equal parts, where \( P \) is the number of processors. Each processor owns one of these parts and creates objects and arrays in its own part. This way, each processor can allocate objects without synchronizing with other processors.
When a processor wishes to access a region created by another machine, it must (1) potentially allocate physical memory for the virtual memory pages in which the object is stored, and (2) retrieve an up-to-date copy of the region from its home node. Region retrieval is described in Section 3.3. Physical memory is allocated using the \texttt{mmap()} system call. Un-
3.3 Coherence Protocol and Access Checks
Jackal employs an invalidation-based, multiple-writer protocol that combines features of HLRC [26] and TreadMarks [15]. As in HLRC, modifications are flushed to a home node; as in TreadMarks, twinning and differencing is used to allow concurrent writes to shared data. Unlike TreadMarks, Jackal uses software access checks inserted before each object/array usage to detect non-local and stale data. The runtime data structures related to the coherence protocol are shown in Fig. 3.
The coherence protocol allows processors to cache a region created on another processor (i.e., the region’s home node). All threads on one processor share one copy of a cached region. The home node and the caching processors all store this copy at the same virtual address.
Although all threads on a processor access the same copy of a given region, each thread maintains its own cache-state vector for that region. This is required because Jackal allows multiple threads per processor and the JMM is defined with respect to threads, not processors. For this purpose, each thread maintains a present and a dirty bitmap, each of which contains one bit per 64 bytes of heap. Objects are 64-byte aligned to map a single object to a single bit in the bitmap. To reduce memory usage, pages for these bitmaps are allocated lazily.
The present bit in thread T’s bitmap indicates whether thread T retrieved an up-to-date copy of region R from R’s home node. A dirty bit in thread T’s bitmap indicates whether thread T wrote to region R since it fetched R from its home node. If the present bit is not set, the access-check code invokes the runtime system to retrieve an up-to-date copy from the region’s home node. When the copy arrives, the runtime system stores the region at its virtual address and sets the accessing thread’s present bit for this region. This cached region copy is called a processor’s working copy of a region. The runtime system stores a pointer to the region in the accessing thread’s flush list. In the case of a write miss, the runtime system also sets the region’s dirty bit and creates a twin, a copy of the region just retrieved, unless such a twin already exists.
A cached region copy remains valid for a particular thread until that thread reaches a synchronization point. At a synchronization point, the thread empties its flush list. All regions on the thread’s flush list are invalidated for that thread by clearing their present bits for that thread. Regions that have their dirty bits set are written back to their home nodes in the form of diffs, and the dirty bits are cleared. A diff contains the difference between a region’s working copy and its twin. The home node uses the incoming diff to update its own copy. To speed up flushing, region flushes to the same home node are combined into a single message.
When two threads on a single processor miss on the same region, both threads must request a fresh copy from the home node, because region state is maintained per thread, not per processor. The data accessed by the second thread may have been modified on another processor after the first thread requested its copy. (As explained in Section 2, this is not a race condition if these parts are protected by different locks.) To see the modification, the second thread must fetch an up-to-date copy from the home node. The second copy is stored at the same virtual address; the newly arrived data is merged into the twin and into the working copy.
4. OPTIMIZATIONS
To improve performance, Jackal removes superfluous access checks, prefetches regions, flushes regions lazily, and employs computation migration to improve locality. The compiler optimizations are described in detail in [24] and are briefly summarized here. The RTS optimizations are described in detail below.
4.1 Compiler Optimizations
Jackal’s front-end inserts access checks before all heap accesses. Since these access checks add considerable runtime overhead, the backend’s optimization passes try to remove as many checks as possible.
The compiler performs interprocedural program analysis to discover opportunities to lift access checks. The front-end of Jackal’s compiler can determine sets of virtual-function call targets and maintain label lists for switch statements. This information is passed on to the compiler back-end which uses it to remove access checks. An access check for address a at program point p can be removed if a has already been checked on all paths that reach p, but only if no path contains a synchronization statement.
Access checks to array elements that are accessed in a loop may be lifted into one aggregate array check before the loop.
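Written out by hand, the transformation amounts to the following (illustrative Python; Jackal performs this on its compiler's intermediate representation, and the real aggregate check faults in a whole range of array elements at once):

```python
# Illustration of lifting per-element access checks into one aggregate check.

def sum_checked(a, access_check):
    # Unoptimized: one access check per element access.
    total = 0
    for i in range(len(a)):
        access_check(a, i, i)        # check element i before reading it
        total += a[i]
    return total

def sum_lifted(a, access_check):
    # Optimized: one aggregate check for the whole range, hoisted out of the loop.
    access_check(a, 0, len(a) - 1)   # fault in elements 0..n-1 at once
    total = 0
    for i in range(len(a)):
        total += a[i]
    return total

calls = []
check = lambda a, lo, hi: calls.append((lo, hi))
assert sum_lifted([1, 2, 3, 4], check) == 10
assert calls == [(0, 3)]             # a single aggregate check replaced four
```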
The compiler also performs heap analysis [9] to discover when subobjects referenced by an object are always accessed through that outer object. If this is the case, an aggregate access check is generated to fault in the outer object and all its referenced subobjects. This may greatly increase granularity, and may save a number of network round-trips. The applicability of this optimization strongly depends on interprocedural analysis. Escape analysis [6] in combination with heap analysis is used to remove checks on objects that remain local to the creating thread.
The compiler may generate code for computation migration [12]: part or all of a method invocation is moved to the machine where the data resides. This may be especially effective for synchronized blocks and thread object constructors.
In Jackal, the home node of the lock object acts as the manager of the lock. Lock, unlock, wait and notify calls are implemented as control messages to the lock’s home node. When the data protected by the lock resides on the same node as the lock, it is often more efficient to ship the whole synchronized computation to its home: only two messages are involved.
A comparable optimization is applied to calls to thread object constructors. These constructor calls are shipped to the machine where the new thread will run. The result is that the thread object and data created from the constructor have the machine where the thread will run as their home node.
4.2 Runtime Optimizations: Adaptive Lazy Flushing
The coherence protocol described in Section 3.3 invalidates and possibly flushes all data in a thread’s working memory at each synchronization point. That is, the protocol exactly follows the specification of Java’s memory model, which potentially leads to much interprocessor communication. The implementation, however, can relax this protocol without violating the memory model. In particular, it is not necessary to invalidate or flush a region that is accessed by a single processor, or that is only read by its accessing threads. This covers several important cases:
- **home-only** regions that are accessed only at their home node,
- **read-only** regions that are accessed in read mode only,
- **exclusive** regions that have been created and initialized by one node, but are currently accessed by one other node.
Each of these cases corresponds to a region state. In general, a region is in **shared** state; if a region is in any of the other states, the thread(s) holding the region apply lazy flushing: the region is not flushed on a synchronization operation. **Home-only** is a special case of exclusive. It is profitable to make this distinction, however, since the protocol to support home-only is much simpler than the protocol for exclusive.
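One plausible reading of the state computation, as a sketch (function and state names invented; the paper does not give the exact decision procedure): the home node derives a region's state from its current read and write sharers, and any state other than shared lets sharers skip flushing at synchronization points.

```python
# Hypothetical sketch of the home node's state computation (names invented).

def region_state(readers, writers, home):
    """readers/writers: sets of node ids currently sharing the region."""
    sharers = readers | writers
    if sharers <= {home}:
        return "home-only"           # accessed only at the home node
    if not writers:
        return "read-only"           # all sharers read; nothing to flush
    if len(sharers) == 1 and home not in sharers:
        return "exclusive"           # a single non-home node reads and writes
    return "shared"                  # full protocol: flush at sync points

assert region_state(set(), {0}, home=0) == "home-only"
assert region_state({1, 2}, set(), home=0) == "read-only"
assert region_state({3}, {3}, home=0) == "exclusive"
assert region_state({1}, {2}, home=0) == "shared"
```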
A processor is said to **share** a region if the region occurs in the flush list of one or more of the threads on that processor. In its optimized version, the RTS tracks which machines share a region; moreover, it distinguishes between read and write sharers.
The optimized version brings a performance trade-off. In the unoptimized version, regions are always mapped at their home node; they are never faulted or flushed by the home node. To detect any of the other states, the RTS must be aware whether the home node also shares the region (for read-only, it must monitor whether the home node is a writer). Now, threads must also flush and fault regions at their home node: present or dirty bits must be set and cleared in home node thread bitmaps, and a pointer to the region must be added to the threads’ flush list. However, lazy flushing may be capable of removing most of the flushes and faults at the home node.
We alleviate this penalty by combining release notices during a flush into a single message per home node, like we did with diff messages.
A region state can be changed by its home node only when a new sharer requests the region, or when a machine gives notice that it no longer shares the region. The new state is computed based on the number of read or write sharers, with the home node as a special case. Some state changes have only local effect (to and from home-only), for some state changes the information can be piggy-backed on the data reply (to read-only).
Two state transitions bring extra communication with them. First, for a region that goes from read-only state to shared state, all sharers must be notified; the region is restored on the flush lists of all threads that access the region on all sharer machines. Second, transitions to and from exclusive state are rather complicated (see Fig. 4). If a region is shared by zero nodes and some node requests a copy for write access (1), then the home node makes the requesting node the region’s owner and gives it an exclusive copy (2). The region remains in exclusive state until another node requests a copy from the home node (3). In that case, the home node first sends a message to the owner, informing it to move the region to shared state (4). The owner replies with an acknowledgement or a diff (5). The home node merges the diff into its own copy and sends the resulting copy to the requesting node (6). Since the region is now in shared state, modifications will be flushed to the home node at synchronization points (7). The region remains in shared state until there is only one sharing node, or there are only read sharers left. If a node no longer shares the region, it informs the home node that there is one sharer less (8). If the last thread on this node had write access to the region, this information is piggybacked onto the diff that is sent home. When only one write sharer remains, the home node puts the region in exclusive state and informs the remaining sharer that it is now the region’s owner (9). Since the new owner will not invalidate the region from now on, its copy must be brought up to date, so the home node includes the region data in message (9). When only read sharers remain after a release notice, the home node puts the region in read-only state; sharers are not explicitly notified, and they will find out the next time the region is accessed.
Frequent transitions to and from exclusive state may cause thrashing. We arbitrarily limit the number of times a region is allowed to go to exclusive state to 5. From then on, such a region is allowed to go to all region states except exclusive state.
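The anti-thrashing rule can be sketched as a simple per-region counter (names invented; the limit of 5 is the arbitrary value quoted above):

```python
# Hypothetical sketch of the anti-thrashing rule: a region may enter
# exclusive state at most EXCLUSIVE_LIMIT times; afterwards it is pinned
# out of exclusive and the regular shared protocol applies.

EXCLUSIVE_LIMIT = 5

class RegionMeta:
    def __init__(self):
        self.exclusive_entries = 0

    def allow_exclusive(self):
        if self.exclusive_entries >= EXCLUSIVE_LIMIT:
            return False             # thrashing suspected: never exclusive again
        self.exclusive_entries += 1
        return True

m = RegionMeta()
grants = [m.allow_exclusive() for _ in range(7)]
assert grants == [True] * 5 + [False, False]
```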
5. PERFORMANCE
In this section we study the impact of RTS optimizations on Jackal’s performance. All tests were performed on a cluster of 200 MHz PentiumPros, running Linux, and connected by a Myrinet [5] network. We use LFC [4], an efficient user-level communication system. On our hardware, LFC achieves a null roundtrip latency of 20.8 µs and a throughput of 27.6 Mbyte/s (for a 256 byte message, including a receiver-side copy).
Jackal was configured so that each processor has a maximum of 32 Mbyte of local heap and 32 Mbyte of cache available for mapping pages from other processors.
We quote results on Jackal’s basic performance from [24]. The time to fault and retrieve a region that contains only one pointer as data is 35 µs. Throughput for a stream of array regions is 24 MByte/s (768 user bytes per 1K packet). Jackal’s compiler generates good sequential code; the sequential speed of code without access checks is at least as good as the performance of IBM’s JIT version 1.3 for Linux, the fastest JIT compiler system currently available [7, 23]. Generating access checks without optimization incurs a large performance penalty: up to a factor of 5.5 for the applications described below. The compiler optimization passes reduce the overhead of access checks to 9% on average for these applications.
5.1 Application Suite
Our application suite consists of four multithreaded Java programs: ASP, SOR, TSP, and Water. Besides the multithreaded, shared-memory versions of these programs, we also wrote equivalent RMI (message-passing) versions. The data set for each application is small. Fine-grained applications expose protocol overhead much more clearly than coarse-grained applications, which communicate infrequently. The differences between the various optimizations come out markedly; also, the comparison with the RMI implementations becomes extremely competitive, since RMI has substantially smaller protocol overhead.
5.2 Parallel Performance
This section compares, for each application, the performance of various Jackal configurations, and presents the performance of an equivalent, hand-optimized RMI program as a yardstick. The RMI programs use a highly optimized RMI implementation [19] and run on the same hardware and communication platform (LFC) as Jackal. On this platform, an empty RMI takes 38 µs. Both the Jackal and the RMI programs were compiled using Jackal’s Java compiler.
RMI has its own sources of overhead: parameters and return values must be marshaled and unmarshaled, and at the server side a thread is created to execute the method invoked by the client. Nevertheless, RMI has several important advantages over Jackal: data and synchronization traffic can be combined; large arrays can always be transferred as a unit; and object trees can be transferred as a single unit.
In certain circumstances, Jackal’s compiler is also able to identify these optimizations [24]; however, the programmer has no opportunity to fine-tune them, since he depends completely on the automatic optimization passes of the compiler.
Below, we discuss the performance of each application. All speedups are relative to the sequential Jackal program compiled without access checks.
We vary RTS optimizations by successively allowing more cases of lazy flushing:
- basic: no lazy flushing
- home-only
- home-only and read-only
- home-only, read-only and exclusive
Compiler optimizations are all enabled, except for computation migration, which is toggled to allow comparison of RTS optimizations with compiler optimizations. We toggle only this one compiler optimization because switching off many of the compiler optimizations (access check lifting, escape analysis, etc.) severely impairs sequential performance, which makes performance evaluation useless. Computation migration has no impact on sequential performance.
To assess the impact of RTS vs. compiler optimizations, we present two sequences of measurements: in the first sequence, we start with basic, then computation migration is enabled, then the series of lazy flushing states is successively enabled. In the second sequence, all states of lazy flushing are first successively enabled, and finally computation migration is enabled. If lazy flushing has a far larger impact on performance than the compiler optimization, these two sequences will resemble each other in their performance data. If, however, the compiler optimization is more important, the sequences will differ.
Fig. 5 shows the relative data message counts, control message counts (which include lock and unlock messages) and network data volumes for all application variants on 16 processors. The RMI data is used to normalize the statistics.
**ASP.** The All-pairs Shortest Paths (ASP) program computes the shortest path between any two nodes in a 500-node graph. Each processor is the home node for a contiguous block of rows of the graph’s shared distance matrix. In iteration k, all threads (one per processor) read row k of the matrix and use it to update their own rows.
The communication pattern of ASP is a series of broadcasts from each processor in turn. Both the RMI and the Jackal program implement the broadcast with a spanning tree. A spanning tree is used for the shared-memory (Jackal) implementation to avoid contention on the data of the broadcast source. The RMI implementation integrates synchronization with the data messages and uses only one message (and an empty reply) to forward a row to a child in the tree. This message is sent asynchronously by a special forwarder thread on each node to avoid latencies on the critical path.
Figure 5: Message counts and data volume for Jackal, relative to RMI. In the top graph, data messages are counted. The numbers under the X axis are the message counts for RMI. In the middle graph, control messages are counted; these are normalized with respect to RMI data messages, since control messages do not occur for RMI. In the bottom graph, data volume is presented; only the Java application data is counted, message headers are ignored. The numbers under the X axis are the RMI data volumes.
In the compiler-optimized Jackal version (with computation migration and array access check lifting enabled), transmission of a broadcast row is reduced to only one round-trip. The speedup of the RMI program remains better because it uses asynchronous forwarding of rows in its spanning-tree broadcast. An alternative RMI implementation with synchronous forwarding gives the same speedup as the Jackal version.
As appears from Fig. 6, the performance of ASP without optimizations is bad indeed. This is because ASP allocates its data sets in its thread constructors; without thread constructor migration, machine 0 is the home node for all data. Even with all runtime optimizations enabled, speedup is low (at most 2 on 16 processors), since machine 0 must service all data and control messages; see Fig. 5. Performance becomes reasonable only when the thread constructor is migrated and at least read-only flushing is enabled.
**SOR.** Successive over-relaxation (SOR) is a well-known iterative method for solving discretized Laplace equations on a grid. The program uses one thread per processor; each thread operates on a number of contiguous rows of the matrix. In each iteration, the thread that owns matrix partition \( t \) accesses (and caches) the last row of partition \( t - 1 \) and the first row of partition \( t + 1 \). We ran SOR with a 2050 × 2050 (16 Mbyte) matrix.
The Jackal version of SOR attains excellent speedup (see Fig. 7). This is entirely due to those Jackal compiler optimizations we did not vary: the compiler determines that it can combine all access checks in SOR’s innermost loop into a single check for all of a row’s elements. The entire row is streamed to the requesting processor after one request. In the Jackal version of SOR, the data set is not allocated in the constructor of the worker-thread objects, but in their `run()` method, which is not executed until the thread executes on its target processor. Data is written only by home nodes; neighbor rows are only read. This makes the DSM access patterns already optimal even before lazy flushing is applied. Since data is allocated from the `run()` method, computation migration brings no improvement either.
**TSP.** TSP solves the well-known Traveling Salesman Problem (TSP) for a 15-city input set. First, processor zero creates a list of partial paths and a distance table between each city. Next, a worker thread on every processor tries to steal and complete partial paths from the shared, centralized, job queue. The cut-off bound is encapsulated in an object that contains the length of the shortest path discovered thus far. To avoid non-deterministic computation (which may give rise to super-linear speedup), the cut-off bound has been set to the actual minimum for this data set.
Communication in TSP stems from accessing the centralized job queue, from flushing the current partial path, and from reading the minimum object. The RMI program and the optimized Jackal programs transmit approximately the same amount of data.
The performance differences caused by the various optimizations are small but telling (see Fig. 8). A leap in performance occurs when computation migration is switched on, and the run-time optimizations add a smaller improvement. TSP is the one application where support of the exclusive state offers discernible improvement. Partial paths are handed out in write mode, and the thread that evaluates the partial path is the only sharer of that path. After its evaluation, the path is susceptible to lazy flushing only if exclusive state is enabled. Read-only mode gives rise to improvement because the distance table that describes the city topography is read-only. This also appears clearly from the message statistics in Fig. 5. When read-only lazy flushing is enabled, the data communication volume is decreased by an order of magnitude.
**Water.** Water is a Java port of the Water-n-squared application from the Splash benchmark suite [25]. The program simulates a collection of 343 water molecules. Each processor is assigned a partition of the molecule set and communicates with other processors to compute intermolecule forces.
Most communication in Water stems from read misses on `Molecule` objects and the subobjects referenced by them (position vectors of the molecule). A molecule’s force, acceleration, and higher order vectors are stored in separate arrays, which are written only by their owner thread.
Unlike the RMI version, the individual molecules are transferred one at a time. Consequently, the Jackal program makes many more roundtrips than the RMI program. In the future, we intend to extend Jackal’s compiler with analysis to allow fetching of the entire sub-array of molecules at once; this would enable bulk communication for Water’s Molecule objects.
Figure 7: Speedup for SOR. See the ASP speedup graph for explanations.
Figure 8: Speedup for TSP. See the ASP speedup graph for explanations.
Figure 9: Speedup for Water. See the ASP speedup graph for explanations.
As in ASP and TSP, the major performance improvements stem from the compiler optimizations; again, the run-time optimizations add significantly, but without compiler optimizations the performance is bad indeed. Without compiler optimizations, lazy flushing causes a performance deterioration compared to the basic version. This may be attributed to the extra overhead described in Section 4.2. Enabling exclusive mode in the right-hand graph of Fig. 9 causes a further performance decrease. The reason is that part of the shared data is allocated from the thread constructor. These data are written by their owner thread, but read by all other threads. Without computation migration, the home node for all these data is processor 0, which is swamped with state control traffic, as depicted in Fig. 4.
5.3 Discussion and future work
From the performance data presented above, a clear conclusion can be drawn. Turning on computation migration gives a major boost in performance (except for SOR, which achieves good speedup in all versions). Enabling all lazy flushing optimizations while disabling computation migration does not yield even reasonable performance for ASP and Water. This is mainly because these applications allocate data from the thread constructor, which is a natural thing to do in a Java program. Disabling further compiler optimizations would degrade performance much more, since sequential performance is impaired.
However, for all applications except SOR, the runtime optimizations on top of the compiler optimizations yield discernible improvements. The smallest improvement seems to be gained from exclusive state. At first sight, this state seems a sophisticated optimization that covers many important cases. However, its benefits are already reaped by thread constructor migration and home-only state: nearly always, thread constructor migration causes a region that is a candidate for exclusive state to lie at its home node.
A fact that cannot be read directly from the graphs is that the total time spent in twinning, patching and differencing of objects is negligible in the optimized application runs. Data that is written is usually only written by a single owner, and thread constructor migration ensures that the owner is the home node. The exception is TSP, but there the partial paths that are actually modified by the worker threads are handed out in exclusive mode, which obviates the need for flushing and hence twin creation, differencing and patching.
One area for future work is dynamic migration of an object’s home node. All control messages would then be handled by the new home node, and twinning becomes unnecessary there. Possibly, this would make exclusive lazy flushing and thread constructor migration redundant. The protocol required for home node migration seems less complicated than the exclusive state protocol. Currently, the application programmer must pay careful attention to the machine on which data is allocated, since having it at the wrong home node brings large performance penalties. This is a valid concern not only for DSM machines, since large shared-memory machines also have a home node concept. Home node migration would probably make such allocation considerations superfluous.
6. RELATED WORK
Most DSM systems are either page-based [15, 18, 17] or object-based [2, 3, 14], the latter at the expense of transparency. Jackal manages pages to implement a shared address space in which regions are stored. This allows shared data to be named by virtual addresses, avoiding software address translation. For cache coherence, however, Jackal uses small, software-managed regions rather than pages and therefore largely avoids the false-sharing problems of page-based DSM systems. Like page-based DSMs supporting release consistency, we use twinning and differencing, albeit over objects rather than pages.
TreadMarks and CVM are both page-based systems that use some form of lazy release consistency (LRC). Like our lazy flushing optimization, LRC postpones writing updates to their home nodes: it waits until an acquire is made. Then the new accessor synchronizes with the previous releaser of the lock associated with the data. This allows many state changes to be piggybacked on synchronization messages. Jackal, in contrast, updates region states asynchronously to support lazy flushing.
CRL [14] is an object-based DSM that requires the programmer to annotate his (C) source code with start-read/write and end-read/write calls around accesses to shared regions, so that the region to be accessed is locally available. Unlike Jackal, which implements the Java memory model, CRL implements a single-writer protocol with sequential consistency. Regions are cached locally until another machine requires the same region; this amounts to a form of lazy flushing at each end-read/write.
MCRL [11] is an object-based system derived from CRL that implements computation migration: write operations are shipped to the region’s creating machine, while read operations are performed locally. Unlike Jackal, however, it does so unconditionally, without applying heuristics.
Hyperion [20] rewrites Java byte code to C and instruments the code with access checks. Hyperion caches all shared Java objects, including arrays, in their entirety and is therefore sensitive to false sharing. It does not employ any form of lazy flushing.
Fine-grained DSM systems largely avoid false sharing by using a small unit of cache coherence together with software access checks. Shasta [22] uses a binary rewriter to add access checks to an existing executable. These systems implement some form of lazy flushing to detect when a processor is exclusively using a region.
7. CONCLUSION
We have described optimizations for the Jackal RTS. Jackal is a DSM system for Java that consists of an optimizing compiler and a runtime system; we refer to [24] for a description of the system, including compiler optimizations.
We found that the RTS optimizations described in this paper are necessary to gain good performance, but only in conjunction with compiler optimizations. If only one of the compiler optimizations (computation migration) is switched off, performance becomes bad for three of the four applications.
When both compiler and runtime optimizations are enabled, our four Java applications attain reasonable to good performance compared to well-tuned, equivalent RMI applications. This is the more significant since small data sets were used, to better bring out performance differences.
8. REFERENCES
Towards a Survival Analysis of Database Framework Usage in Java Projects
Mathieu Goeminne and Tom Mens
Software Engineering Lab, University of Mons, Belgium
Email: { first . last }@umons.ac.be
Abstract—Many software projects rely on a relational database in order to realize part of their functionality. Various database frameworks and object-relational mappings have been developed and used to facilitate data manipulation. Little is known about whether and how such frameworks co-occur, how they complement or compete with each other, and how this changes over time. We empirically studied these aspects for 5 Java database frameworks, based on a corpus of 3,707 GitHub Java projects. In particular, we analysed whether certain database frameworks co-occur frequently, and whether some database frameworks get replaced over time by others. Using the statistical technique of survival analysis, we explored the survival of the database frameworks in the considered projects. This provides useful evidence to software developers about which frameworks can be used successfully in combination and which combinations should be avoided.
I. INTRODUCTION
Many software projects are relying on databases for the proper functioning of the application. To facilitate this data management and manipulation, a wide variety of database frameworks has been proposed and used, especially for software projects developed in popular languages such as Java. These frameworks typically introduce a language-specific abstraction layer to avoid hardcoding and manually adapting SQL queries to any changes occurring in the database schema.
Software developers occasionally replace database frameworks used in their projects, or introduce extra database frameworks that offer additional functionality. In order to help developers cope with this phenomenon, it is necessary to study which database frameworks are used together and how they interact over time.
In this paper, we shed more light on this framework usage in open source Java projects, by carrying out an empirical study on the evolution of a corpus of Java projects in GitHub that use relational database technology. Despite the rising popularity of more recent technologies such as NoSQL, we focus on relational databases, because they are still omnipresent in current-day software projects and because more historical data is available to analyze their usage.
Our longitudinal study of database framework usage in Java projects addresses the following research questions:
\textbf{RQ0}: Which database frameworks are most popular?
\textbf{RQ1}: Which combinations of database frameworks “co-occur” in the projects in which they are used?
\textbf{RQ2}: How long do database frameworks “survive” in the projects in which they occur?
\textbf{RQ3}: Does the introduction of a database framework influence the survivability of another one?
II. SURVIVAL ANALYSIS
Many of these research questions are clearly related to the time-dependent nature of the projects being analyzed, and the occurrence of specific “events” during the projects lifetime (such as the introduction or disappearance of a particular database framework). To answer these questions in a statistically valid way, we therefore resort to the statistical technique of survival analysis [1]. We rely on the CRAN packages \texttt{survival} for computation and \texttt{ggplot2} for visualization.
Survival analysis models the time it takes for a specific event (such as the disappearance of a particular database framework from a Java project) to occur. The technique can take right-censored data into account, i.e., data for which it is unknown whether the event occurred because the subject has “disappeared” from observation. For example, we have no precise idea of when a particular database framework will disappear from a Java project if the framework is still present in the project on the last day of the considered period of study. Since we cannot assume a particular distribution of survival times, we resort to non-parametric methods such as the Kaplan-Meier estimator [2]. A survival function models the probability that an arbitrary subject in the dataset survives \( t \) units of time after the start of the study. A Kaplan-Meier curve visualizes the cumulative probability of survival, i.e., the probability that the event of interest has not yet occurred. It starts at value 1 (100% probability of
survival at time zero) and decreases monotonically over time. In this study, the observed event is the definitive disappearance of a framework from a project. To test whether there is a difference with statistical significance between two survival distributions we use the `survdiff` function that implements the Mantel-Haenszel test [3].
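For intuition, the Kaplan-Meier estimate can be computed in a few lines. The sketch below is a plain-Python illustration on hypothetical framework lifetimes, not the CRAN `survival` implementation used in the study:

```python
# Kaplan-Meier estimator: S(t) = prod over event times t_i <= t of (1 - d_i/n_i),
# where d_i is the number of events at t_i and n_i the number still at risk.
from collections import Counter

def kaplan_meier(durations, observed):
    """Return [(t, S(t))] for each distinct event time.
    observed[i] is True if the event occurred (e.g. the framework was
    removed) and False if the subject is right-censored."""
    events = Counter(t for t, o in zip(durations, observed) if o)
    survival, curve = 1.0, []
    for t in sorted(events):
        at_risk = sum(1 for d in durations if d >= t)   # n_i
        survival *= 1.0 - events[t] / at_risk           # factor (1 - d_i/n_i)
        curve.append((t, survival))
    return curve

# Hypothetical lifetimes in days; False = framework still present at study end.
curve = kaplan_meier([30, 30, 90, 365, 365], [True, True, True, False, False])
```

With these five hypothetical subjects, the estimate drops at the two observed event times (day 30 and day 90) and the two censored subjects only contribute to the at-risk counts.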
III. Related Work
In [4], we empirically analyzed the evolution of the usage of SQL, Hibernate and JPA in a single large open source Java project. The current paper carries out a macro-level study on thousands of projects. Chen et al. [5] proposed a static code analysis framework for detecting and fixing performance issues and potential bugs in ORM usage. Their analysis revealed that the modifications made after analysis led to an important improvement in the studied systems’ response time. Maule et al. [6] studied a commercial object-oriented content management system to statically analyze the impact of database schema changes on the source code. Qiu et al. [7] examined the co-evolution of database schemas and code in ten open-source database applications from various domains. Whereas our focus is on Java projects, they studied specific change types inside the database schema and the impact of such changes on PHP code.
The statistical technique of survival analysis used in this paper has been employed by other software engineering researchers as well. Samoladas et al. [8] predicted the survivability of open source projects over time. Scanniello [9] analyzed dead code in five open source Java software systems. Kyriakakis et al. [10] studied function usage and function removal in five large PHP applications. Claes et al. [11] studied the survival of installability conflicts in Debian packages.
IV. Data Extraction
We focus on open source projects only, since we need full access to the source code development history of the studied projects. We analyze Java projects because Java is one of the most popular programming languages today. More specifically, we study projects taken from the GitHub Java Corpus proposed by Allamanis and Sutton [12]. They processed GitHub events stored by GitHub Archive (www.githubarchive.org) and only retained projects marked as Java projects. In order to get a quality corpus, they removed all projects that were never forked according to GitHub. They also compared the ids of commits in order to manually remove projects that are very likely (undeclared) forks of another project. This filtering decreases the probability of obtaining strongly related individuals in the considered project population, and hence reduces the risk of obtaining statistical results biased by overrepresented (groups of) projects. Among the 14,765 projects proposed in the GitHub Java Corpus, 13,307 (90.1%) still had an available Git repository on 24 March 2015. We considered these Java projects as potential candidates for our empirical analysis, and created a local clone of each of them.
We considered 19 Java database frameworks as potential candidates for our study. These frameworks need to have a direct means for accessing the database. The frameworks were selected by skimming recent scientific publications, Stack Exchange and blog posts. As an additional constraint, since our goal is to study the evolution over time of database framework usage in Java projects, we only consider frameworks that are at least 3 years old. Although our list is not exhaustive, it covers the most frequently cited frameworks.
As a baseline, we also included JDBC in our study. Unlike the other considered frameworks, it does not provide any abstraction of a database schema and forces developers to write SQL queries. JDBC is still heavily used because such a low-level connection allows developers to submit complex queries that would be difficult or impossible to express with a higher-level database framework.
We determined the presence of each framework in each Java project by analyzing the import statements in Java files, as well as the presence of specific configuration files (e.g., for Hibernate). For each commit of each considered Java project, we retraced a historical view of the files that can be related to a particular framework.
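A minimal sketch of such an import-based detector is shown below. The marker table mirrors the detection criteria of Table I, while the function name and the simplified per-file logic are our own illustration, not the actual extraction tooling:

```python
import re

# Import prefixes / file suffixes used as occurrence indicators (cf. Table I).
FRAMEWORK_MARKERS = {
    "JDBC": "java.sql",
    "Spring": "org.springframework",
    "JPA": "javax.persistence",
    "Vaadin-GWT": "com.google.gwt",
}
IMPORT_RE = re.compile(r"^\s*import\s+([\w.]+)", re.MULTILINE)

def frameworks_in_file(name, source):
    """Return the set of frameworks a single file provides evidence for."""
    found = set()
    if name.endswith(".hbm.xml"):                 # Hibernate mapping file
        found.add("Hibernate")
    if name.endswith(".java"):
        for imp in IMPORT_RE.findall(source):
            for fw, prefix in FRAMEWORK_MARKERS.items():
                if imp.startswith(prefix):
                    found.add(fw)
    return found
```

Running this over every file of every commit (as the study does per project history) yields, per commit, the set of frameworks present at that point in time.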
V. Empirical Analysis
This section addresses our research questions by means of tables, visualizations and statistical tests.
**RQ0: Which database frameworks are most popular?**
Fig. 1: Number of projects (in log scale) in which a given database framework occurs. Threshold 200 shown in red.
Fig. 1 shows the number of considered Java projects in which the considered database frameworks occur. We
observe a high imbalance. Only 5 frameworks (including JDBC), summarized in Table I, occur in more than 200 distinct projects. Of all considered active projects from the GitHub Java Corpus, only 3,707 Java projects used at least one of these 5 frameworks. Only these 5 frameworks will be analyzed in the remaining research questions, because the other frameworks do not have sufficient occurrences to obtain statistically significant results.
<table>
<thead>
<tr>
<th>Framework name</th>
<th>URL</th>
<th>Occurs if the project contains at least one file</th>
<th>#projects</th>
</tr>
</thead>
<tbody>
<tr>
<td>JDBC</td>
<td><a href="http://www.oracle.com/technetwork/java/javase/jdbc">www.oracle.com/technetwork/java/javase/jdbc</a></td>
<td>importing java.sql</td>
<td>2,271</td>
</tr>
<tr>
<td>Spring</td>
<td>projects.spring.io/spring-framework</td>
<td>importing org.springframework</td>
<td>1,562</td>
</tr>
<tr>
<td>JPA</td>
<td><a href="http://www.tutorialspoint.com/jpa">www.tutorialspoint.com/jpa</a></td>
<td>importing javax.persistence</td>
<td>1,168</td>
</tr>
<tr>
<td>Vaadin-GWT</td>
<td>vaadin.com</td>
<td>importing com.google.gwt</td>
<td>361</td>
</tr>
<tr>
<td>Hibernate</td>
<td>hibernate.org</td>
<td>whose name ends with .hbm.xml</td>
<td>238</td>
</tr>
</tbody>
</table>
TABLE I: Selected Java database frameworks.
The Spring framework aims to facilitate the implementation of a standard structure in Java applications. An optional Spring extension based on JDBC and an object-relational mapping can provide access to relational and NoSQL databases. JPA is a Java API for describing the relation between a Java entity and its mapped database element. Several frameworks, including Hibernate, can exploit this description for providing such a service. Because the frameworks that actually use these annotations cannot always be determined, we don’t consider JPA annotations as an indicator of the use of any framework but JPA itself. Vaadin is a framework for developing web applications. It introduces the notion of domain layer, which abstracts the database structure through Java classes hosting the business logic of the application.
RQ1: Which combinations of database frameworks “co-occur” in the projects in which they are used?
We identified which of the 5 considered database frameworks occurred throughout the lifetime of each considered project, and we computed all possible intersections of framework occurrence in Fig. 2.
JDBC occurs as the only database framework in 56.3% of all projects. At the other end of the spectrum, Hibernate occurs as the only framework in just 2.9% of all projects. Looking at their intersection, the large majority (82.8%) of all projects that have used Hibernate have also used JDBC during their lifetime.
Something similar can be observed for JDBC and JPA. JPA occurs in isolation in 29.5% of all projects, while almost half of all projects that have used JPA (49.3% to be precise) have also used JDBC during their lifetime.
Similarly, when comparing Hibernate and JPA, we observe that 49.6% of all projects that have used Hibernate have also used JPA, while 44.1% of all projects that have used Hibernate have also used JPA and JDBC.
Fig. 2: Number of Java projects using a given number of database frameworks (over the entire project’s lifetime).
These high numbers could be due to the fact that some database frameworks are used as supporting technologies for others (e.g., Spring typically uses JDBC for database access), while some frameworks are complementing each other (e.g., Vaadin has an optional module called JPAContainer for supporting JPA annotations). To determine for which frameworks this is the case, we studied the “co-occurrence” of different frameworks within the same project. This happens when files relating to both frameworks are present in at least one of the project’s commits (but typically in many more commits).
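The per-project co-occurrence computation can be sketched as follows; the input format (one framework set per commit) and the function name are illustrative assumptions, not the study's actual tooling:

```python
# Co-occurrence: two frameworks co-occur in a project if files relating to
# both are present in at least one of the project's commits.
from itertools import combinations

def co_occurring_pairs(commit_framework_sets):
    """Return the set of framework pairs that appear together in at
    least one commit of the project."""
    pairs = set()
    for frameworks in commit_framework_sets:
        for a, b in combinations(sorted(frameworks), 2):
            pairs.add((a, b))
    return pairs

# Hypothetical commit history: JDBC alone, then JDBC+Spring, then Spring+JPA.
history = [{"JDBC"}, {"JDBC", "Spring"}, {"Spring", "JPA"}]
```

Note that in this hypothetical history JDBC and JPA both occur in the project but never co-occur, which is exactly the distinction between Fig. 2 (occurrence over the lifetime) and Table III (co-occurrence within a commit).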
<table>
<thead>
<tr>
<th># co-occurring fw.</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td>total # frameworks used</td>
<td>2,443</td>
<td>22</td>
<td>16</td>
<td>2</td>
<td>1</td>
</tr>
</tbody>
</table>
TABLE II: Number of projects involving a given number of frameworks, over their entire lifetime and in co-occurrence.
Table II shows vertically the number of projects having used a given number of distinct frameworks over their entire history, and horizontally the maximum number of distinct “co-occurring” frameworks. Almost all values reside on the diagonal, implying that in the large majority of all cases (97.5%, i.e., 1213/1273), different database frameworks used in a project tend to co-occur.
Table III reports the number of projects in which two database frameworks co-occurred at least once during the project’s lifetime. Not surprisingly, we observe that JDBC frequently co-occurs with other frameworks. This suggests that JDBC is used as a supporting technology that provides services not offered by the other frameworks. 80.1% of all projects that used Hibernate have also used JDBC in co-occurrence; 48.4% of all projects that used JPA have used JDBC in co-occurrence; 41.3% of all projects that used Spring have used JDBC in co-occurrence; and 39.6% of all projects that used Vaadin have used JDBC in co-occurrence.
<table>
<thead>
<tr>
<th></th>
<th>Spring</th>
<th>JPA</th>
<th>Vaadin</th>
<th>Hibernate</th>
</tr>
</thead>
<tbody>
<tr>
<td>JDBC</td>
<td>645</td>
<td>565</td>
<td>143</td>
<td>192</td>
</tr>
<tr>
<td>Spring</td>
<td></td>
<td>558</td>
<td>76</td>
<td>156</td>
</tr>
<tr>
<td>JPA</td>
<td></td>
<td></td>
<td>98</td>
<td></td>
</tr>
<tr>
<td>Vaadin</td>
<td></td>
<td></td>
<td></td>
<td>22</td>
</tr>
</tbody>
</table>
TABLE III: Number of projects in which pairs of database frameworks co-occur.
Some database frameworks seem to complement one another. For example, 47.8% of all projects using Spring also use JPA. Other database frameworks appear to be in competition. For example, Vaadin co-occurs with Hibernate in only 22 projects, which makes up 9.2% of all projects using Hibernate, and only 6.1% of all projects using Vaadin. To a lesser extent, Vaadin also co-occurs infrequently together with JPA or Spring.
RQ2: How long do database frameworks “survive” in the projects in which they occur?

Fig. 3: Survival curves of database framework occurrence in the considered projects.
Fig. 3 shows the Kaplan-Meier survival curves of the selected frameworks. After their introduction, all database frameworks remain present in more than 45% of the projects. Nevertheless, we observe different trends in framework survivability. For example, in 11.7% of all cases Hibernate is removed within 30 days of its introduction. In the same time interval, Spring is removed from only 3.7% of the projects. Within three years of its introduction, Hibernate has disappeared from 27.6% of all projects, while Spring has disappeared from 14.5% of all projects in the same interval.
<table>
<thead>
<tr>
<th>A →</th>
<th>Spring</th>
<th>JPA</th>
<th>Vaadin</th>
<th>Hibernate</th>
</tr>
</thead>
<tbody>
<tr>
<td>JDBC</td>
<td>< 0.001[-]</td>
<td>0.001[-]</td>
<td>0.242</td>
<td>0.010</td>
</tr>
<tr>
<td>Spring</td>
<td>—</td>
<td>0.030</td>
<td>0.017</td>
<td>< 0.001 [+]</td>
</tr>
<tr>
<td>JPA</td>
<td>—</td>
<td>—</td>
<td>0.427</td>
<td>< 0.001 [+]</td>
</tr>
<tr>
<td>Vaadin</td>
<td>—</td>
<td>—</td>
<td>—</td>
<td>0.017</td>
</tr>
</tbody>
</table>
TABLE IV: p-values of tests for difference of survival rates between two database frameworks.
Table IV shows the p-values of the Mantel-Haenszel tests that check for a difference in survival rates for each pair of frameworks. Significant results are shown in boldface, based on significance level α = 0.05 after Bonferroni correction, since we perform 10 comparisons. Based on a visual comparison of the survival curves, we marked a cell with [+] if framework $B$ has a significantly better survival rate than framework $A$, and with [-] in the opposite case. We observe that JPA and Spring have higher survival rates than JDBC and Hibernate.
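For intuition, the Mantel-Haenszel (log-rank) statistic computed by `survdiff` can be sketched in plain Python. This illustrative version (hypothetical data, no p-value lookup) is not the CRAN implementation:

```python
# Log-rank test: at each event time, compare observed events in group 1
# (O1) with the expectation under the null hypothesis of equal hazards
# (E1 = n1 * d / n); the statistic (sum(O1 - E1))^2 / sum(V) is
# approximately chi-squared with 1 degree of freedom.

def logrank_statistic(times1, events1, times2, events2):
    all_times = sorted({t for t, e in zip(times1 + times2, events1 + events2) if e})
    o_minus_e, var = 0.0, 0.0
    for t in all_times:
        n1 = sum(1 for x in times1 if x >= t)          # at risk in group 1
        n2 = sum(1 for x in times2 if x >= t)          # at risk in group 2
        d1 = sum(1 for x, e in zip(times1, events1) if e and x == t)
        d2 = sum(1 for x, e in zip(times2, events2) if e and x == t)
        n, d = n1 + n2, d1 + d2
        o_minus_e += d1 - n1 * d / n                   # O1 - E1 at this time
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var if var else 0.0
```

Two groups with identical event histories yield a statistic of 0, i.e., no evidence of a survival difference; the larger the statistic, the smaller the p-value.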
RQ3: Does the introduction of a database framework influence the survivability of another one?
In order to determine if the introduction of a framework $B$ influences the survival of a framework $A$ already present in the same project, we computed two survival curves $C_1$ and $C_2$. $C_1$ is based on all projects in which $A$ and $B$ co-occurred, while $B$ was introduced in the project after $A$. $C_2$ considers all projects that have used $A$ but in which $B$ never co-occurred with $A$. We performed a visual comparison of $C_1$ (shown as dashed red lines) and $C_2$ (shown as solid black lines) for all pairs of database frameworks.
Fig. 4 compares the survival of Spring and JDBC. Introducing Spring when JDBC is already present seems to improve the survival probability of JDBC in the projects. Conversely, introducing JDBC when Spring is already present does not seem to affect the survival probability of Spring. We observed similar results for other pairs of frameworks. We did not observe any negative impact of the introduction of a framework on already present database frameworks. We used Mantel-Haenszel tests to check for a difference in the survival rates, but did not find any significant evidence, most likely because the considered data sets were too small.
VI. Threats to Validity
Our results should not be generalized beyond Java projects, the considered database frameworks, or the used version control technologies. Our research also suffers from the same threats as other MSR research relying on Git and GitHub [13], [14].
By manually inspecting the names of the considered repositories, we observed that 470 of them appeared to be implicitly part of 117 projects. Clustering these projects into logical projects could provide more insight into the database framework usage analysis.
The detection of frameworks is based on the presence of specific import statements in Java files and specific XML-based configuration files in the project directory. This approach may have led to false positives, since classes and interfaces made available through an import statement may be unused in the source code, and configuration files may be ignored when running the applications. A more detailed analysis of the source code could reveal if these components are actually used. However, the extra time required by such an analysis could be prohibitive for a large-scale study.
During survival analysis, we assumed the probability for an event to occur to be the same for all studied projects. Some external factors may influence this probability, such as a change in the organizational policy.
VII. Conclusions
We studied the usage of five popular database frameworks in a large corpus of GitHub Java projects. We observed differences in survival rates that did not seem to relate to framework popularity. We observed considerable co-occurrence, especially between JDBC and the other considered frameworks, but other combinations of database frameworks also seem to complement or reinforce one another. Such empirical evidence can be particularly useful to project developers desiring to introduce an additional framework, or to replace an existing framework by another one, as our analysis reveals which combinations and which replacements are more successful (in terms of survival) than others.
The research presented in this paper can be extended in many ways. We could consider projects in other forges or other programming languages. We could also include other Java database frameworks, and relate their survival to their popularity. We also aim to analyze framework survival at file level. Another extension is to include mapping technologies for weakly structured databases such as NoSQL. Finally, traditional software metrics could be combined with more specific metrics reflecting the involvement of database frameworks in software projects, in order to get a better understanding of a project’s status, and particularly its maintainability.
Acknowledgment. This research is part of FRFC research project T.0022.13 financed by the F.R.S.-FNRS.
REFERENCES
1. General
1.1 Representation
Websites must represent VCU and not VCU Health.
VCU Health Hosting
Content describing patient or clinical services must be hosted on vcuhealth.org.
1.2 Copyright
Websites must comply with the VCU Intellectual Property Policy.
1.3 Commercial Activity
Websites must not promote commercial activity outside of official university business.
Policy
This requirement is outlined in the Organizational Websites, Management, and Hosting policy.
Examples
Bad: “Come see the Richmond Squirrels play baseball on March 1st.”
Acceptable: “The Division of Community Engagement will be at the Richmond Squirrels game on March 1st providing information about upcoming engagement opportunities.”
1.4 Sexually Explicit Content
Websites containing sexually explicit content must provide a written warning statement and be accessible only through a password mechanism.
Policy
This requirement is outlined in the Organizational Websites, Management, and Hosting policy.
Written Warning Statements
Written warning statements before sexually explicit content need to inform a visitor as to what content they are about to view, giving them the opportunity to view the content at their discretion.
VCU Web Standards & Guidelines, General Requirements, Version 2.2
Using CAS to Restrict Access
Restricting access to sexually explicit content through a password mechanism can be accomplished using CAS (Central Authentication Service). For more technical information, check out the Using CAS on a VCU Website guide.
1.5 Blogs
Websites acting as the primary web presence for a unit must not be a blog.
Blog Components & Links
While a unit’s website may incorporate blog components or link to a blog, sites that only function as a blog cannot be a unit’s primary website or only form of web presence.
Blog Definition
The definition of a blog is defined in the FAQ section of the Web Content, Management and Hosting policy.
1.6 Visibility & Ownership
Websites must be listed in the VCU A-Z Index with at least one site owner or technical contact provided.
Non-Public Websites
For accountability and auditability purposes, websites with a primary audience that is internal (e.g. intranets) also need to be in the A-Z Index. These listings can be marked as “Hidden” so they do not show up on the public facing view.
What is the A-Z Index?
The VCU A-Z Index can be found under the three-dot menu at the top-right of the VCU homepage. The listing provides high visibility for large or small websites by featuring them on the VCU homepage, allowing users to quickly and efficiently navigate to them. The listing is also filterable and searchable, providing a much faster way for users to find or discover websites.
The Importance of Site Owners or Technical Contacts
Including a site owner or technical contact for a website in the VCU A-Z Index provides a single point of contact should a visitor have difficulty finding information or identify a technical issue.
1.7 Archival & Removal
Websites left out-of-date for 12 or more months must be moved to the VCU Website Archive or taken offline.
Archival
If a decision is made to archive the website by moving it to the VCU Website Archive, the site owner will incur an archival cost of $1,000. This cost guarantees the site will remain archived for 5 years. The archival fee is waived for websites regarding university-level (non-unit/non-school specific) resources.
Removal
If a decision is made to take the website offline, a Service Desk ticket must be opened with Technology Services to properly handle the decommission and records management process.
2. Accessibility
2.1 Federal
The following are high-priority accessibility requirements for federally funded organizations.
2.1.1 Skip to Content Links
Websites must include skip to content links on every page.
Skipping to Main Content
All sites must have at least one skip-to-content link which skips a user’s focus past redundant template elements and into the main content. However, it is good practice to have skip links to all relevant content, like the main navigation or footer.
Visibility & Location
The skip to content link must be visible on focus and must be the first link after the opening body element.
Code Example
CSS
```css
#skip-links {
position: absolute;
left: 0;
top: 0;
width: 100%;
margin-left: 0;
list-style: outside none;
}
#skip-links a {
position: absolute;
left: 15px;
top: -100px;
z-index: 10000;
height: auto;
margin: 0 auto;
padding: 10px 15px;
background-color: transparent;
color: #ffba00;
outline: none;
transition: top 0.2s, background-color 0.2s;
}
#skip-links a:focus, #skip-links a:hover {
```
2.1.2 Unique Title Tags
Websites must include unique title tags for every page.
The Importance of Unique Title Tags
Unique title tags help with both accessibility and search engine optimization.
Structure
A website’s title tags should follow the following structure:
- Home page: Department Name | Virginia Commonwealth University
- Interior page: Title of single page | Department Name | Virginia Commonwealth University.
Home pages should not use the word “Home” as a title. Interior pages should include the title of the page and the name of the department.
Example Title
Prospective Students | Biology | HAS | VCU
2.1.3 Input Labels
Websites must provide associative labels for form inputs.
The Importance of Input Labels
All form inputs must have associated labels so screen readers may interpret what content is required for each form input.
Visibility
Labels do not have to be visible, but they must be machine readable.
Code Example (HTML)
<label for="sender">Your email address</label>
<input type="email" id="sender" name="sender">
2.1.4 Input Agnostic Functionality & Navigability
Websites must be equally functional and navigable when using the mouse, keyboard, or both.
The Importance of Input Agnostic Functionality & Navigability
All content and functionality, including navigation and dropdown menus, that is reachable by mouse, must be reachable by keyboard. This ensures users can get to where they’re going regardless if they’re using a mouse or using a screen reader.
Tabbing
Functionality and navigability can be tested by using the tab key to hop through focusable elements on the site. Skip links should be the first tab-able elements, then the branding bar, and finally the rest of the site, including dropdown navigation and links.
Visible & Identifiable Focus
Links, as well as other focusable elements such as form inputs and buttons, need focus state styles so the user can identify it as focused. In most cases, the hover state styles can also be used for the focus state styles.
Code Example (CSS)
```css
a:focus, [tabindex]:focus { border: 2px solid purple }
```
3. Do not use the word “page” or describe its location; Screen readers will already inform the user as to what page they’re on.
Images Containing Text
Images that have relevant text in the image must have the same information in the alt tag.
Social Media
Images posted via social media that have relevant text in the image must have the same information in the alt tag or have the information as text provided along with the image.
Documentation
Visit the wiki for more documentation on image accessibility.
2.1.6 Meaningful Link Text
Websites must provide links with meaningful text and information as to their end location.
Non-Distinguishable Links
This requirement refers to non-distinguishable links, or links that have the same text but point to a different location. Links can be made distinguishable by adding titles or aria-labels (instructing a screen reader how the links are different) or changing the text so that it is only used one time.
Code Example (HTML)
<!-- Using meaningful link text -->
<a href="/about">About Technology Services</a>
<!-- Using aria-labels -->
<a href="/about" aria-label="About Technology Services">About</a>
Documentation
Visit the wiki for more documentation on non-distinguishable links.
2.1.7 PDF Highlighting & Copying
Websites providing PDF files must ensure these files properly allow highlighting text in a logical order and copying its contents to another program.
Logical Header Nesting & Content
All PDF files must have logical header nesting and content.
How to Check for PDF Issues
Use Siteimprove to identify a site’s PDFs that have accessibility issues.
VCU Web Standards & Guidelines, Accessibility Requirements, Version 2.2
2.1.8 Video Captioning
Websites providing videos must ensure they are captioned.
Automatic Captioning
Services that provide automatic captioning, such as Kaltura or YouTube, fulfill this requirement. At the time of writing, Vimeo does not support automatic captioning.
Live Streamed Videos
Live streamed videos must be captioned within 24 hours.
2.1.9 Appropriate Page Language
Websites must have an appropriate language set on every page.
Common Languages & Use
Most websites will have the language set to English, but there are instances where Spanish is applicable. The language may be set on the page’s HTML element or a specific element inside the page’s content.
Code Example
The language of a page can be set to English by adding the following code:
```
<html lang="en">
```
2.1.10 Color Contrast
Websites must utilize suitable color contrast ratios between text and background.
Contrast Ratios
Following WCAG 2.0 AA compliance, color contrast ratios should be at least 4.5:1 for normal text and 3:1 for large text.
Tools
Here’s a list of useful tools for correcting color contrast issues:
• Siteimprove will identify pages that have color contrast issues.
• Oto255 is a great tool to lighten or darken colors while trying to fix color contrast issues.
• Lea Verou's WCAG 2.0 color contrast tool helps web developers decide what color combinations provide enough contrast to meet WCAG 2.0 compliance.
• Colour Contrast Check is a useful tool for checking the degree of contrast between foreground and background colors.
Documentation & Resources
Visit the wiki for more documentation and resources for fixing color contrast issues.
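For illustration, the WCAG 2.0 contrast ratio behind these thresholds can be computed directly from relative luminance. A minimal Python sketch (not one of the tools listed above):

```python
def _channel(c):
    # sRGB channel (0-255) to linear value, per the WCAG 2.0 definition
    # of relative luminance.
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colors: (L1 + 0.05) / (L2 + 0.05),
    with L1 the lighter relative luminance."""
    def luminance(rgb):
        r, g, b = (_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: 21:1, the maximum possible ratio.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```

A color pair passes AA for normal text when the ratio is at least 4.5, and for large text when it is at least 3.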
2.2 VCU
The following requirements have been established by VCU, and either reflect, elaborate, or build on the Accessibility, Federal (2.1) requirements.
2.2.1 WCAG 2.0 AA Compliance
Websites must pass WCAG 2.0 Level AA accessibility standards.
Status & Compliance
Visit the A-Z Website Manager to check the current compliance status of your website.
How to Ensure WCAG 2.0 AA Compliance
Use the Siteimprove Accessibility Checker Google Chrome extension, and set the filter to AA conformance to prioritize a11y errors. In addition, AChecker can be used to evaluate single web pages for accessibility issues.
Accessibility Resources
Visit the wiki for more accessibility resources for a11y error evaluation.
2.2.2 PDF Accessibility
Websites providing PDF files must ensure these files pass WCAG 2.0 Level AA accessibility standards.
2.2.3 HTML Validation
Websites must not contain HTML validation errors.
AChecker
AChecker can evaluate single webpages for HTML validation errors. A successful AChecker report should show zero “Known Problems” and zero “HTML Validation” errors.
2.2.4 Text-Only Version
Websites must provide a text only version of every page.
The Importance of Text Only
Supplemental, Not a Replacement
Implementation
VCU Technology Services provides a tool that dynamically delivers text only versions of any webpage. Just include the following link throughout your site to utilize this service:
<a href="https://text.vcu.edu/tt/referrer">View text version</a>
Documentation
Visit the wiki for more documentation on text-only versions of websites.
2.2.5 Skip Links
Best Practices
It is encouraged to include skip links to major page sections of the website, like the main navigation and footer, but the skip links must include at least one link to the main content section. The inclusion of at least one skip link to the main content falls under the Accessibility, Federal, Skip to Content Links (2.1.1) requirement.
Code Example
Visit the code example found under the Accessibility, Federal, Skip to Content Links (2.1.1) requirement.
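For quick reference, a typical skip link follows the pattern below; this is a hedged sketch (class and id names are hypothetical), and the canonical example remains the one under requirement 2.1.1:

```html
<!-- The skip link should be the first focusable element on the page -->
<a class="skip-link" href="#main-content">Skip to main content</a>
<nav><!-- main navigation --></nav>
<main id="main-content"><!-- page content --></main>
```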
2.2.6 Readability with Disabled Stylesheets
Websites must be readable with stylesheets disabled.
The Importance of Readability with Disabled Stylesheets
Web page readability with CSS stylesheets disabled is important for users accessing text only versions of your website. Additionally, it ensures equal functionality and navigability regardless of styles.
3. Branding
3.1 Branding Bar
Websites must load an approved VCU branding bar at the top of every page, and the branding bar must not be obstructed from view by visual elements.
VCU Health Branding
Websites may not use the VCU Health logo or branding bar.
Menu & Search Icons
The only exceptions to visual elements on top of the branding bar are approved menu and search icons, as detailed in the VCU Brand Standards guide.
Using the Branding Bar Script
All VCU branding bars must be loaded from the branding.vcu.edu JavaScript. Implementation instructions can be found at the VCU Academic Branding Bar website.
Including the following script on a website will inject the default (light-gray/black) branding bar on a page:
```html
<script type="text/javascript"
src="//branding.vcu.edu/bar/academic/latest.js"></script>
```
3.2 Unofficial Logos & Seals
Websites must not use unofficial VCU logos or seals on any page.
What are Unofficial Logos & Seals?
This includes, but is not limited to, outdated or modified VCU logos or seals and the use of a VCU logo or seal that does not follow the rules outlined in the VCU Brand Standards guide.
3.3 Unit Context
Websites for a unit that serves a specific part of the university must include the unit name prefixed or in conjunction with its title or parent unit.
Context is Important
Specifying what part of the university your unit serves is important, especially when there are multiple units of the same or similar name dotted around the university. For instance, a site for the School of Medicine’s Technology Services department should be distinguishable from the central VCU Technology Services department.
4. Captioning
4.1 General Public & Employees
Websites containing audio/visual media intended for the general public or any VCU employee who has a relevant accommodation agreement on file with the ADA Coordinator must provide appropriate captioning for these materials.
4.2 Students
Websites containing audio/visual media provided by instructors for students must provide appropriate captioning or transcriptions for these materials and register them with the appropriate disability office.
Registering Media with a Disability Office
Registering these materials with the appropriate disability office is based on campus: the Student Accessibility and Educational Opportunity office serves the Monroe Park Campus, and the Division for Academic Success office serves the MCV Campus.
5. Content
5.1 VCU Homepage Link
Websites must include a link to the VCU homepage.
Details & Example
The VCU homepage link must go to “https://www.vcu.edu” and have “Virginia Commonwealth University” spelled out. This is typically included in the website’s footer in the contact information. Inclusion of the VCU branding bar does not fulfill this requirement.
<a href="https://www.vcu.edu" title="VCU homepage">Virginia Commonwealth University</a>
5.2 Parent Unit Link
Websites for a unit of the university that has a parent unit must provide a link to the parent unit on every page.
Importance & Implementation
A link to a parent unit provides visitors context to the university’s organizational structure as well as ease of navigability to similar resources. This link is typically included in the website’s footer contact information.
5.3 Contact Information
Websites must provide contact information (e.g. address, phone, email) on every page.
Required Information
The following contact information is required on every page of a website:
- A physical address or mailing address
- A phone number
- An email address
Importance & Implementation
Providing contact information provides visitors the ability to get additional information by alternative methods. This information is typically included in the website’s footer, but can alternatively be linked to a dedicated contact page.
5.4 Last Updated Date
Websites must provide a date of when the site or its pages were last updated or reviewed on every page.
Importance & Implementation
A last updated date informs visitors on how up-to-date the content is. The following guidelines and suggestions should be referenced when implementing a last updated date:
- The date should be specific to the individual page and not the overall website.
- The terms “Last updated”, “Updated”, or “Last reviewed” may be used.
- Special CMS tags, client-side JavaScript, or server-side code are often used to provide a last updated date.
- This information is typically included in the website’s footer.
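As one possible implementation, the sketch below uses the standard `document.lastModified` property on the client side; the element id and wording are hypothetical, and CMS tags or server-side dates are generally more reliable:

```html
<p id="last-updated"></p>
<script>
  // document.lastModified returns the page's modification date as a string
  document.getElementById("last-updated").textContent =
    "Last updated: " + document.lastModified;
</script>
```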
Out-of-Date Content
Websites are required to be updated or reviewed every 12 months. Therefore, the last updated date on any page should be less than 12 months old.
5.5 Postal Box
Websites must not use the phrase "PO Box ####" on any page.
United States Postal Service Trademark
Due to a trademark held by the United States Postal Service, websites are not allowed to use the phrase “PO Box ####”. Instead, use “Box ####” when referring to a postal box.
5.6 Inclusion Links
Websites must include links to the required inclusion resources.
Required Inclusion Links
The following are the required inclusion resources every website must link to. These links are typically included in the website’s footer.
Accessibility Link
To better provide inclusive access to VCU’s resources, all websites must include a link to VCU’s Accessibility Resources page on every page.
Technical Support (Webmaster) Link
To better provide centralized technical support for university websites, all websites must include a mailto link to the VCU Webmaster on every page. This email is monitored by the Technology Services IT Support Center who route messages to the responsible party. Additional methods for technical assistance may be linked alongside the VCU Webmaster mailto link.
Privacy Link
All websites must include a link to the VCU privacy policy on every page. The privacy policy informs users on what information is collected and how it is used while using VCU digital resources. Custom privacy policies may be linked so long as they reference and link to the official VCU privacy policy.
5.7 404 Page
Websites must provide a helpful 404 error page.
The Importance of a Helpful 404 Page
Helpful 404 pages guide users when they stumble across a resource that no longer exists or has moved. These pages should be relevant and tailored specifically for the website.
Implementation & Example
Configuration of a 404 page depends on the platform the website is hosted on. Please contact Web Services if technical assistance in implementing a 404 page is required. A good example of a helpful 404 page is the VCU homepage 404.
5.8 Search
Websites must include an input field to the VCU search tool.
Exceptions
If a website provides its own equivalent search functionality, or is a single page site, the inclusion of an input to the VCU search tool is not required.
Implementation
To learn more on how to include an input to the VCU search tool on a website, visit the article on adding a basic VCU search box to your site at the VCU Technology Services website.
5.9 Course Information
Websites must not contain duplicate information from the VCU Bulletin.
Supplementary Information
If course information must be provided on a website, this information should only supplement and link to the corresponding VCU Bulletin page.
Importance
The VCU Bulletin is the main resource for course information. Course information that duplicates a VCU Bulletin page, instead of supplementing and linking to a VCU Bulletin page, can cause confusion for students when outdated content is found or information between the two locations does not match.
6. Content Management
6.1 CMS Platform
Websites requiring a content management system (CMS) must use the approved content management platform for VCU, TerminalFour.
Requesting a CMS Managed Website
If a new website needs to be set up in TerminalFour, please complete and submit a Web Request form. Make sure to fill out the appropriate options regarding CMS usage.
6.2 Direct Edit Link
Websites must contain a link to the TerminalFour “Direct Edit” mode of the site in the footer of every page.
The Importance of the Direct Edit Link
By including the direct edit link in the footer of a website, users with valid access through TerminalFour can easily edit the page as soon as they see an issue on the site. It also gives VCU Technology Services a clear indication of whether the site is using TerminalFour or is managed manually, making technical support much easier.
How to Implement a Direct Edit Link
TerminalFour has a built-in tag to use with a site’s HTML to generate a direct edit link:
<t4 type="edit-page" action="direct-edit" text="Edit" />
6.3 Navigation Tags
Websites must use appropriate TerminalFour navigation tags to generate links used to navigate the site.
The Importance of Navigation Tags
Using navigation tags to generate links used to navigate a website ensures it can be properly navigated within the TerminalFour “Preview” or “Direct Edit” modes. In addition, it also ensures a link will automatically be updated in case a section is renamed or moved.
Best Practices
Only links to external websites or resources should be hardcoded.
6.4 Configurable Global Header & Footer
Websites must provide configurable global header and footer sections within their TerminalFour page layout(s).
The Importance of Configurable Global Header & Footer Sections
Providing configurable global header and footer sections in a website’s page layout(s) during development ensures a user can easily modify and manage things like styling, scripts, and the use of plugins.
7. Design & Browser Compatibility
7.1 Modern Support
Websites must utilize HTML5 and work on modern browsers.
7.2 Structure & Layout
Websites must not use tables as the primary means of site structure or layout.
7.3 External Stylesheets
Websites must utilize external CSS stylesheets for styling.
Best Practices
While the use of inline styles can sometimes be necessary, excessive use of inline styles is not permitted.
7.4 Favicon
Websites must provide at least a 16px by 16px favicon.
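A minimal way to declare a favicon is shown below; the file path is hypothetical:

```html
<!-- A 16x16 icon placed at the site root -->
<link rel="icon" href="/favicon.ico" sizes="16x16" type="image/x-icon">
```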
7.5 Development Environments & Resources
Websites not under development must not serve assets such as images, CSS, and Javascript from development environments.
7.6 Flash
Websites must not use Flash.
Security Vulnerabilities & End-of-Life
Not only does Flash have numerous security vulnerabilities, but Adobe, in collaboration with large web and computer companies, will also stop updating and distributing Flash Player by the end of 2020.
For more information regarding Flash’s end-of-life, check out Adobe’s Flash & The Future of Interactive Content article.
8. Domain
8.1 Top-Level Domains
Websites must use a vcu.edu top-level domain.
Exceptions
If a website requires a non-vcu.edu top-level domain name due to a specific business need, an exception must be granted by VCU University Relations.
8.2 Subdomains
Websites must not include the term “VCU” in their subdomain.
Examples of Non-Compliance
- vcuarts.vcu.edu
- vcuhas.vcu.edu
- vcuts.vcu.edu
8.3 Personal Names
Websites must not include a person’s name in a domain or subdomain.
Exceptions & Personal Websites
If a unit is permanently named after a person, a website for said unit is exempt from this rule.
A website for a person, but not a unit named after that person, should be made through VCU People Accounts.
8.4 Hosting
Websites must use a top-level domain hosted on a VCU approved web server.
9. Mobile
9.1 Responsive & Mobile-Friendly
Websites must be responsive and mobile-friendly.
The Importance of Responsiveness & Mobile-Friendliness
As web browsing on small devices becomes more popular, your website’s mobile presence becomes increasingly more important. While traditional websites may work on small devices, they are often not optimally designed for them.
9.2 Viewport Meta Tags
Websites must utilize viewport meta tags to properly handle device width.
The Importance of Viewport Meta Tags
The viewport meta tag allows designers and developers to better control how a website is rendered on small devices.
Documentation
Information regarding viewport meta tags can be found at Google’s [Web Fundamentals guide on multi-device responsive design](https://developers.google.com/web/fundamentals/multidevice/mobile-optimized).
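A common viewport configuration, matching the rendered width to the device width, looks like this:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```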
9.3 Google Mobile-Friendly Test
Websites must pass the [Google Mobile-Friendly Test](https://mobile-friendly-test.google.com).
9.4 Tap Target Area
Websites must have buttons and inputs that have a decent tap target area.
Documentation
Information regarding decent tap target areas can be found at Google’s [Web Fundamentals guide on multi-device responsive design](https://developers.google.com/web/fundamentals/multidevice/mobile-optimized).
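As a hedged sketch (the class name is hypothetical), Google's guidance suggests tap targets of roughly 48 by 48 CSS pixels:

```html
<style>
  /* Give buttons and inputs a comfortably large touch area */
  .tap-target { min-width: 48px; min-height: 48px; padding: 8px; }
</style>
<button class="tap-target">Submit</button>
```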
9.5 Availability, Visibility, & Navigability
Websites must provide content that is equally available, visible, and navigable regardless of viewport size or viewing device.
10. Security
10.1 HTTPS
Websites must be loaded over HTTPS with a valid certificate.
10.2 Secure Connection
Websites must have a secure connection.
Identifying Secure Connections
Secure connections are signified by a green lock icon next to the URL in the URL bar of most modern browsers.
Loading Resources
Non-compliance is often due to loading resources explicitly over HTTP. This can easily be resolved by loading resources explicitly over HTTPS or, in select use cases, by using a protocol-relative URL.
The following code shows an example of noncompliance due to resource loading, and two examples of how to potentially resolve the issue.
<!-- Loading explicitly over HTTP, noncompliant -->
<link rel="stylesheet" href="http://example.vcu.edu/media/styles.css">
<!-- Loading explicitly over HTTPS, compliant, preferred -->
<link rel="stylesheet" href="https://example.vcu.edu/media/styles.css">
<!-- Loading over a protocol-relative URL, compliant, select use cases -->
<link rel="stylesheet" href="//example.vcu.edu/media/styles.css">
10.3 Authentication & Form Requests
Websites that contain pages requiring authentication or forms requesting sensitive data must send requests over SSL with a secure connection.
10.4 Redirects
Websites must not automatically redirect visitors to an external non-vcu.edu domain.
Analyzing the effects of formal methods on the development of industrial control software
Citation for published version (APA):
DOI:
10.1109/ICSM.2011.6081983
Document status and date:
Published: 01/01/2011
Document Version:
Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
Please check the document version of this publication:
• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher’s website.
• The final author version and the galley proof are versions of the publication after peer review.
• The final published version features the final layout of the paper including the volume, issue and page numbers.
Link to publication
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain
• You may freely distribute the URL identifying the publication in the public portal.
If the publication is distributed under the terms of Article 25fa of the Dutch Copyright Act, indicated by the “Taverne” license above, please follow below link for the End User Agreement:
www.tue.nl/taverne
Take down policy
If you believe that this document breaches copyright please contact us at:
openaccess@tue.nl
providing details and we will investigate your claim.
Download date: 18. Apr. 2022
Analyzing the Effects of Formal Methods on the Development of Industrial Control Software
Jan Friso Groote
Eindhoven University of Technology
Eindhoven, The Netherlands
Email: j.f.groote@tue.nl
Ammar Osaiweran
Eindhoven University of Technology
Eindhoven, The Netherlands
Email: a.a.h.osaiweran@tue.nl
Jacco H. Wesselius
BU Interventional X-ray
Philips Healthcare
Best, The Netherlands
Email: jacco.wesselius@philips.com
Abstract—Formal methods are being applied to the development of software for various applications at Philips Healthcare. In particular, the Analytical Software Design (ASD) method is being used as a formal technology for developing defect-free control software of highly sophisticated X-ray equipment. In this paper we analyze the effects of applying ASD to the development of various control software units developed for the X-ray machines. We compare the quality of these units with other units developed with traditional development methods. The results indicate that applying ASD as a formal technology for developing control software could result in fewer defects.
I. INTRODUCTION
In industrial systems control software is becoming increasingly complex, with concurrency playing an ever more crucial role. In conventional software development of such systems, errors are considered inevitable. Techniques for early defect prevention are widely encouraged as software practitioners are pushed to get software into execution quickly on tight schedules.
Establishing the correctness of these systems is widely known to pose serious challenges for traditional testing techniques, used by conventional design development methods. Selective test cases are invented with prior awareness of code internals, often done by the code developers themselves or specialized test personnel, mainly to cover key functions, error cases, etc. On completion of testing, software is known to pass certain tests, but can still fail for cases not tested.
It is claimed that formal methods allow the development of complex software under a firm mathematical foundation resulting in high quality, more correct software compared to conventional design methods. For example, model checking techniques have been widely applied to the verification of discrete behavior of various industrial critical systems [13], [15]. Virulent concurrency errors have been discovered that would not have been unveiled through traditional testing. In some circumstances these uncovered errors caused serious damage or loss of property [10].
For the purpose of obtaining high quality software, Philips Healthcare is extensively investigating and applying formal methods in the development of its software components. More precisely, Philips Healthcare incorporates the Analytical Software Design\(^1\) (ASD) method into the development of various software components of X-ray machines. An early report on applying ASD to industrial control software can be found in [2].
The ASD method centers its fundamentals on developing mathematically verified software. It employs state machine models to formally specify and verify the behavior of components. From these models, source code can also be generated automatically. When ASD models have been formally verified, the code generated from such models is considered to be correct, meaning, among other things, that sets of components match their prescribed interfaces. ASD employs a design method that mitigates the state space explosion problem by compositionally designing and verifying components in isolation.
Analyzing the quality effects of applying formal technologies to large-scale systems is a barely addressed issue. The best we could find is [3], [18], where it is claimed that near-zero defects can be obtained compared to traditionally developed software.
The purpose of our study is to report about how the ASD method was tightly integrated as a main process in the development of various control units of complex X-ray machines, and we further demonstrate the issues encountered during its application, providing third-party evaluation. Then, we carefully analyze the effects of formal methods on the quality of developed software by comparing the defect rates of a number of software units that incorporate formal methods with others developed using conventional methods. For each unit we carefully analyze every defect submitted along the development of the unit.
As we will see the results may appear incredible since the widespread view in industry is that applying formal mathematical methods on sizable software products is impractical. The results indicate that better quality software can be obtained from formal technologies compared to software developed by traditional development methods. This paper is arranged as follows. Section II sketches the basic concepts of ASD. In Section III we show how ASD is being applied in the development of various software units. We compare the effectiveness of applying ASD in Section IV.
\(^1\)Supplied by Verum Software Technologies B.V., the Netherlands, www.verum.com.
II. PRINCIPLES OF ANALYTICAL SOFTWARE DESIGN
ASD is a component-based, model-driven technology that combines the application of formal mathematical methods such as Sequence-Based Specification (SBS) [12], Communicating Sequential Processes (CSP) [16] and the model checker Failure Divergence Refinement (FDR) [5] with software development methods such as Stepwise Refinement, and Component-Based Software Development [4].
A fundamental principle of ASD is to consider a software design as interacting components, communicating with one another or their environment via channels. As a common practice in ASD, system functionality is decomposed into components in levels (e.g., hierarchical structure) to systematically develop and verify these components in isolation. For example, Figure 1.a depicts a hierarchal structure of system components that include a controller (Ctr), a sensor and a lock.
Figure 1. (a) Hierarchical structure of system components: a controller (Ctr), a sensor and a lock. (b) The corresponding interface models.
Developing any ASD software component typically requires two models: an interface model and a design model. The interface model specifies the external behavior of the component, whereas the design model describes the concrete behavior. Both interface and design models are state machines described in a tabular format, see Figure 2, which depicts the specification of the Sensor interface model presented in Figure 1.b. The model is described using the ASD industrial tool, called the ASD ModelBuilder.
To ensure correctness and consistency, the ASD ModelBuilder automatically translates the ASD models to formal mathematical models such as CSP [11] for the formal verification, and systematically generates a corresponding source code implementation such as C++ or C# (following the state machine pattern in [7]). The details of such translations are omitted here as they are not relevant for this article.
The objective of incorporating model checking in ASD is that, unlike testing, model checking is comprehensive, and can cover all possible execution scenarios. Unlike conventional verification, it is automatic, as the model checking tool requires no human intervention. Such verifications can be completed in a day’s effort. The formal behavioral verification (and also code generation) of the ASD models are done automatically with the click of a button.
Testing is not carried out for code generated from ASD models. Traditional testing such as function and statement coverage is performed for the handwritten part of the unit. A complete unit that comprises ASD components and manually written components is further tested as a black box before the code is delivered to the system.
Below we summarize the steps required for developing an ASD component, given a structure of components. We consider the Ctr component from Figure 1 as an example.
1) **External behavior specification.** First, the interface model of a component under development is specified, such that it describes the external behavior exposed to its clients. All interactions with used components located at a lower level are not included in the specification. For example ICtr is the interface model of the forthcoming Ctr component, where interactions with the lock and the sensor components are not present.
2) **External specification of boundary components.** Similarly, the interface models of components located at the lower level are created. They also describe the external behavior exposed to the component being developed. For instance, the ILock and ISensor interface models describe the external behavior exposed to the Ctr component. All other internal interactions at lower levels not visible to Ctr are ignored.
3) **Concrete, functional specification.** After that, a design model of the component is created. The concrete behavior of the component is described including the interaction with used components. For example the Ctr design model includes method invocations from and to the lower level Lock and Sensor components. Invoked methods might supply data in their parameters. This data is not checked in the behavioral verification.
4) **Formal behavioral verification using model checking.** In this step CSP processes can be generated from the interface and design models constructed previously. A combined model that includes the parallel composition of the design model plus the interface models of the used components is generated automatically. The model is checked for deadlock, livelock, and illegal invocations using FDR; these are checked automatically and separately using the ModelBuilder. Additional properties can be specified in CSP and verified against the combined model if required.
5) **Formal refinement of external and internal specifications.** The combined model must be a correct refinement of the interface model of the component being developed, because the interface model is what client components rely on. The formal refinement check is established using the failures or failures-divergences refinement supported by FDR, where the interface process is the specification and the combined model is the implementation. Once the refinement check succeeds, the interface model faithfully represents the component together with all lower-level components.
6) **Code generation.** In this step source code is generated and integrated with the rest of the system in the target programming language.
7) **Recursive development of components.** For each component at a higher or lower level, steps 1 to 7 can be repeated until the system is completed. This provides the possibility to develop components in a top-down, middle-out, or bottom-up fashion, in parallel with developing some manually coded modules.
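The verification in step 4 amounts to exploring the parallel composition of a design model with the interface models of its used components and searching for problematic states. The following sketch illustrates the idea on toy labelled transition systems; it is only a conceptual illustration of deadlock detection (in ASD, the actual check is performed by FDR on generated CSP processes), and all names and data structures below are our own.

```python
# Toy labelled transition systems (LTSs): illustrative stand-ins for
# the CSP processes that ASD generates; all names here are our own.
# An LTS is {"init": state, "trans": {state: {event: next_state}}}.

def compose(lts_a, lts_b, shared):
    """Parallel composition of two LTSs, synchronising on `shared` events."""
    trans = {}
    init = (lts_a["init"], lts_b["init"])
    frontier = [init]
    while frontier:
        state = frontier.pop()
        if state in trans:
            continue
        sa, sb = state
        out = {}
        for ev, na in lts_a["trans"].get(sa, {}).items():
            if ev in shared:
                # shared events require both sides to be ready
                if ev in lts_b["trans"].get(sb, {}):
                    out[ev] = (na, lts_b["trans"][sb][ev])
            else:
                out[ev] = (na, sb)
        for ev, nb in lts_b["trans"].get(sb, {}).items():
            if ev not in shared:
                out[ev] = (sa, nb)
        trans[state] = out
        frontier.extend(out.values())
    return {"init": init, "trans": trans}

def deadlocks(lts):
    """Reachable states with no outgoing transitions."""
    return [s for s, out in lts["trans"].items() if not out]
```

A design model composed with a matching interface model yields no deadlocked states, while a protocol mismatch (e.g., an interface that answers an event the design never accepts) immediately shows up as a composite state with no outgoing transitions. FDR performs this kind of reachability analysis, but over the full failures-divergences semantics of CSP.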
III. THE APPLICATION OF ASD IN SOFTWARE DEVELOPMENT
Philips Healthcare incorporated the ASD technology in the development of control software at the end of 2006. Initially, the technology was used to formally specify and verify protocols of interactions among internal interfaces of subsystems of an X-ray machine. One of the primary subsystems incorporating ASD is the Back-end Xray (BeX) subsystem [20], [17], [19], [1].
Below we report on two consecutive BeX projects running from January 2008 until the end of 2010. The projects included a total of 36 software designers, architects, and engineers, of whom nine attended ASD training courses. The nine ASD users are highly skilled in developing software using conventional methods, but have limited background in formal mathematical methods.
Since the ASD method was new to the development teams, it imposed a learning curve, and therefore extra effort and investment were required before reaping its benefits. At the early stages of applying ASD, four part-time ASD consultants were present, who devoted approximately half of their time to helping the development teams quickly learn the technology and its practices.
In this section we sketch how ASD has been incorporated in the development process of several software units of BeX, highlighting the flow of events followed during the project.
Incorporating ASD into the development of BeX
The software units of BeX were developed in a series of consecutive increments, each of which included the implementation of a subset of user functions. Since ASD comprises formal technologies, incorporating the method requires certain adaptations to the traditional development process. Figure 3 depicts the flow of ASD events in a development increment. Note that these steps are preceded by brainstorming sessions where team members explore several design alternatives without being precise.
Requirements. This step included the definition of the requirements for function, reliability, performance, characterization of usage conditions, target programming language for code generation, and the operating system.
Incremental planning. In this step, functions to be implemented through each increment were selected with established work breakdown estimations and a tight schedule. For each function to be implemented the time, efforts, deadlines, risks, etc., were clearly identified.
Software design. In this step, the distribution of components was accomplished with well-defined responsibilities and interfaces. Designs of software components commenced as working drafts until team reviews had been accomplished, and
design improvements resulting from each team review session were incorporated.
The effort of obtaining a suitable ASD architectural design for some units was higher than normal, since ASD does not support all of the design and architectural patterns with which the novice and experienced developers were acquainted. For example, the technology is hardly suitable for modeling the object-oriented design patterns presented in [7], so designers quickly ran into problems when trying to model their object-oriented designs in ASD.
Therefore, more effort was required to replace the object-oriented designs (and the design culture) with structured, component-based, action-oriented designs that comprise components with highly abstracted, encapsulated state machines and well-defined interfaces. Modeling this type of design in ASD is straightforward, but obtaining such designs required the designers to become experienced.
The main obstacle most designers encountered at earlier stages of the design process was not only producing structured designs of components, but also maintaining a proper degree of abstraction and distributing the complexity among the components across levels. Designers frequently ran into the state space explosion problem for components that contained overly detailed behavior. Hence, the detailed behavior was pulled out of such complex components into newly created or existing components to circumvent the state space explosion problem. Such alignment activities were performed frequently during the design process of the ASD components.
We noticed that not all designers could compose designs that suit the ASD method. Only a few designers were able to quickly learn the ASD technology and come up with suitable designs, although they generally had limited knowledge of formal mathematical methods, and some were not even highly skilled programmers. Notably, the experienced and highly valued programmers were not always good ASD designers.
Functional specification. In this step, each ASD component under development was specified in isolation following the ASD recipe. The external and concrete behavior of each component was described using the ASD ModelBuilder. Whenever a design did not suit the ASD specification or verification, the structure of the software was adapted.
Behavioral verification. For each unit, the behavioral verification using model checking was done in a component-wise manner. Race conditions, deadlocks, livelocks, and illegal interactions violating the interaction protocols were discovered, prompting adaptation of the behavioral models or redesign of the affected components.
It is notable that the state space explosion kicked in during verification of various components. We learned that alternative designs can help to avoid this problem and make verification doable [9], [8]. In some cases, the explosion of states of a complex component was circumvented by decomposing the component further into a number of smaller components.
Specification review, code generation, and code integration. The specification of all ASD models had to be reviewed by team members, row-by-row, for traceability and correctness against the requirements. Once verification was completed, the design models were automatically translated into the target language, in this case, C#. Changes to generated code were not permitted. The generated code was integrated with the rest of the product code by implementing glue code of proper adapters and wrappers. Integration of the code of ASD components was always smooth with no error ever reported. Integration errors occurred when integrating ASD code with the manually developed code. Other errors were due to the data part of the generated code which was not formally verified.
Testing. Since code generated from ASD models was already verified using model checking, the code was not a target of function coverage or statement coverage tests, which applies to all manually written code of each software unit. Unit testing was started after the generated code was integrated with the manually written code. The units were further examined using statistical testing, supplied by the ASD method, for certifying compliance of software components.
| Unit | DM | IM | Rule cases | States | Time (sec) | Hours |
|------|----|----|------------|--------|------------|-------|
| Orchestration | 8 | 26 | 2,857 | 15,954,291 | 1,847 | 1,288 |
| FEClient | 1 | 15 | 5,779 | 1,996,830 | 230 | 696 |
| XRayIP | 1 | 6 | 1,051 | 2,874 | 0 | 268 |

Table I: ASD data in BeX units.
End of increment. This step was mainly devoted to solving problems and fixing defects raised during the development of the units. Few defects related to the ASD code were reported. After a careful analysis of the cause of these defects, we found that the main source was the data part of the code. Correctness verification of data is not supported by ASD at the moment of writing this article. Defects related to the control part of the generated code were rarely found. After all defects had been fixed, the subsequent increment was started, implementing new user functions.
Three units of BeX used ASD for the development of their control parts. Table I depicts the statistical data related to these units. For each unit, the total number of specified design models (DM) and interface models (IM) is depicted. The total number of rule cases specified for each unit is also shown.
A rule case is a row in a table of an interface or design model, specified and reviewed by team members. The table also depicts the total number of states generated by the model checker to check potential deadlocks (other statistics related to illegal or refinement checks are omitted). In case a unit comprises more than one design model, we sum up all generated states of each individual design model. This applies also to the verification time taken by the model checker FDR.
The last column gives an insight into the effort spent for specifying and reviewing the ASD models. In fact filling in the tables is a straightforward activity, but special attention was given to prevent human errors easily caused by cloning rule cases.
Notable is the Orchestration unit, which was initially designed in a way that caused a state explosion in many of its components. Since developers could not proceed to code generation without establishing formal correctness using model checking, components were redesigned such that model checking became a straightforward activity. As can be seen from the table, the sum of the generated states of all Orchestration components is only about 16 million states, which can be checked in half an hour. Generally, when the verification time of a single component exceeded one hour, further decomposition or redesign activities were immediately considered to reduce the complexity.
IV. QUALITY RESULTS
We analyzed every defect submitted along the development process of the units. All defects are stored in a bug tracking database, which is part of a code management system. Defects related to each unit were carefully revised, one by one, by analyzing the type and cause of each defect, and how it particularly affected the quality of the code. Defects related to documentation (e.g., specification or requirement documents) are excluded from the calculations.
Table II summarizes the accomplished work and reports about the quality results of BeX software units. For each unit the number of effective (logical) lines of code (LOC) written manually, and those generated automatically from ASD models are reported. The total number of submitted defects of each unit is depicted in the table. The numbers represent the errors captured during in-house design, implementation, integration, and testing phases (i.e., not post-release errors). The last column contains defect rates, e.g., the rate for the Orchestration unit is 0.5 errors per KLOC, and for the FEClient unit is 0.4 errors per KLOC.
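The defect rates in the last column of Table II are simply the number of submitted defects divided by the unit size in KLOC. A minimal sketch, using the FEClient figures from Table II:

```python
def defect_rate(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000.0)

# FEClient figures from Table II: 9 manual + 2 ASD defects, 27,615 LOC.
print(round(defect_rate(9 + 2, 27_615), 2))  # -> 0.4
```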
As can be seen from the table, the units that include ASD components show few reported defects, averaging 0.86 defects per KLOC. This level of quality compares favorably to the standard of 1-25 defects per KLOC for conventionally developed software in industrial settings [14]. Defects left behind by ASD correctness verification tend to be straightforward faults that are easily found and fixed, not deep interface or design errors.
Typical errors found in the units developed with ASD were misspellings of variables in the parameters of methods; e.g., having a parameter named ‘SelectionType’ instead of ‘selectionType’ caused the generation of two independent variables. Some sequencing errors were also present. For instance, one case was reported in a unit where external components were activated before the internal components. Due to the high-level description of ASD, these errors were easily found and fixed, compared to some hardly reproducible errors found in the manually coded modules.
The conventionally developed units did not undergo formal correctness verification. However, the units were strictly examined at different levels of code and design reviews, unit test, integration test, and system test. Traditionally developed units of BeX are already of good quality.
Other factors besides software errors can play a key role in the emergence of defects. For example, some defects of the Viewing unit appeared due to migrating to new services supplied by external suppliers. Over 40% of the depicted defects of this unit are cosmetic errors (e.g., “Annotation text: font size not changed”), which do not cause failures during the execution of the system.
The team members attribute the ultimate quality of the developed units to the rigor and discipline enforced by the ASD technology. Although the ASD-developed code comprises fewer defects, the required development time was higher compared to developing the same code in the conventional way. But the key advantage of applying the ASD method to the progress of the projects is that less time was required to resolve problems found in testing at later stages [6].
On completion of the in-house development of the units, the software is sent to the test teams. The teams require unit owners to supply complete test and verification documents that provide evidence of 100% requirement and function coverage, and at least 80% statement coverage of their code, before any subsystem test activity is started. In general, test teams understand that any code exhibiting over 20 “allowable errors” for the entire subsystem in early testing will be rejected and sent back into design and review. In practice, however, this rarely occurred. To ensure the quality of delivered code, the code was thoroughly examined by test teams using various test techniques, the details of which are outside the scope of this paper.

| ASD used | Unit | Manual LOC | ASD LOC | Total LOC | ASD % | Manual defects | ASD defects | Defects/KLOC |
|----------|------|------------|---------|-----------|-------|----------------|-------------|--------------|
| No | Acquisition | 6,140 | 0 | 6,140 | 0.00% | 0 | 0 | 6.279 |
| No | BEC | 7,007 | 0 | 7,007 | 0.00% | 0 | 0 | 6.279 |
| No | EPX | 7,138 | 0 | 7,138 | 0.00% | 0 | 0 | 6.279 |
| No | FEAdapter | 13,190 | 0 | 13,190 | 0.00% | 0 | 0 | 6.279 |
| Yes | FEClient | 15,462 | 12,153 | 27,615 | 44.01% | 9 | 2 | 1.365 |
| Yes | Orchestration | 3,970 | 12,862 | 16,832 | 69.13% | 3 | 4 | 0.981 |
| No | QA | 23,303 | 0 | 23,303 | 0.00% | 0 | 0 | 0.981 |
| No | Status Area | 8,969 | 0 | 8,969 | 0.00% | 52 | 0 | 5.798 |
| No | TSM | 6,681 | 0 | 6,681 | 0.00% | 7 | 0 | 1.048 |
| No | UI Guidance | 20,458 | 0 | 20,458 | 0.00% | 23 | 0 | 1.124 |
| No | Viewing | 19,684 | 0 | 19,684 | 0.00% | 294 | 0 | 14.936 |
| Yes | XRayIP | 14,270 | 2,188 | 16,458 | 13.29% | 27 | 0 | 1.641 |

Table II: Statistical data during the in-house construction of BeX units.
V. CONCLUSION
We have demonstrated that formal methods supplied by the ASD technology can influence the quality of industrial control software. We explained how the ASD method was tightly integrated into the development process of various software units. We analyzed the effectiveness of the method on sizable industrial software by comparing a number of units developed using conventional methods with units incorporating formal technologies. The target of this study was the software of a subsystem of a complex X-ray machine, developed at Philips Healthcare.
The rigor of the ASD method eliminates design errors earlier in the development process. Few errors were discovered after applying the technology throughout the construction process of the units, and these errors were generally simple to find and fix.
The extra time needed to design and implement the software in a formal way exceeded the time required for developing the same software using conventional development methods, but the gain is that there were fewer problems to be resolved in the late stages of the projects.
ACKNOWLEDGMENT
We wish to thank Paul Alexander, Bert Folmer, Tom Fransen, Amit Ray, Ron Swinkels, Marco van der Wijst and the anonymous reviewers for their useful comments and suggestions on the text.
REFERENCES
Learning Object Repositories with Dynamically Reconfigurable Metadata Schemata
Joaquín Gayoso-Cabada, Daniel Rodríguez-Cerezo, José-Luis Sierra
Fac. Informática
Universidad Complutense de Madrid
Spain
{jgayoso,drcerezo,jlsierra}@fdi.ucm.es
Abstract—In this paper we describe a model of learning object repository in which users have full control over the metadata schemata. Thus, they can define new schemata and reconfigure existing ones in a collaborative fashion. As a consequence, the repository must react to changes in schemata in a dynamic and responsive way. Since schemata enable operations like navigation and search, dynamic reconfigurability requires clever indexing strategies that are resilient to changes in these schemata. For this purpose, we have used conventional inverted-indexing approaches and we have also devised a hierarchical clustering-based indexing model. Using Clarry, a system for managing learning object repositories in the field of the Humanities, we provide experimental results that show how the hierarchical clustering-based model can outperform the more conventional inverted-index-based solutions.
Keywords—learning object repository, metadata schemata, dynamic reconfigurability, learning object indexing, browsing
I. INTRODUCTION
The dominant trend in the production of Learning Object (LO) repositories [15] follows a top-down approach based on the heavy use of standards and recommendations (e.g., metadata standards like LOM [10], packaging proposals like IMS CP [21], SCORM [5] or IMS Common Cartridge [6], and interoperability proposals like IMS DRI[1] or OAI-PMH[2]). These standardization efforts make possible, for instance, the federation and interoperability of LO repositories in distributed networks (AGREGA [17] being a well-known example in the context of Spain).
However, the top-down approach is not particularly oriented towards facilitating the inductive creation of domain-specific metadata schemata (i.e., the schemata that govern how LOs are described). This is a critical aspect in learning settings like the Humanities, in which metadata schemata must be frequently created, revised and modified in parallel with the creation of the repositories [20].
In order to facilitate the inductive construction and refinement of metadata schemata, in this paper we describe how to support a more bottom-up approach, in which communities of users (e.g., instructors, researchers and students) collaborate in the construction of these schemata in addition to using them to describe learning materials. This collaboration involves not only defining new schemata and/or using existing ones, but also reconfiguring them. As a consequence, the repository must react to changes in the schemata accordingly. In addition, since schemata are typically reconfigured with experimental and/or exploratory purposes in mind, it is necessary to ensure that users do not need to wait for long periods until schema reconfigurations are reflected in the repository; on the contrary, they should ideally be able to observe the effects of a reconfiguration immediately after changing the schemata. From a system architecture perspective, this is a particularly demanding requirement, since reconfigurations of schemata can affect the way in which the repository is browsed and/or searched. Thus, in this paper we introduce indexing strategies able to cope with the strong requirements posed by dynamic reconfigurability.
The rest of the paper is organized as follows. Section II introduces our model of repository with dynamically reconfigurable metadata schemata. Section III analyzes dynamic reconfigurability in these repositories. Section IV proposes some indexing approaches to enable dynamic reconfigurability and provides some comparative results. Section V analyzes some related works. Finally, section VI outlines the final conclusions and some lines of future work.
II. THE REPOSITORY MODEL
This section introduces our model of repository with dynamically reconfigurable metadata schemata. Subsection II.A describes the repository’s structure, and subsections II.B, II.C, II.D and II.E its different parts (resources, metadata schemata, LOs, and navigation maps).
A. Structure of the repository
According to our model, repositories comprise the following parts:
- A set of resources. These resources are the atomic digital assets that integrate the LOs.
- A set of metadata schemata. These schemata characterize how to describe the types of objects that can integrate the repository.
- A set of LOs. These LOs aggregate resources and simpler LOs in educationally-meaningful clusters.
- A navigation map. This map makes it possible to navigate the repository using the structures imposed on LOs by metadata schemata.
Fig. 1 sketches an example of a repository structured according to our model (it is a repository concerning artistic objects from the Prehistoric and Protohistoric artistic periods in Spain).

---

1 www.imsglobal.org/digitalrepositories
2 www.openarchives.org/OAI/openarchivesprotocol.html
*(Figure showing the metadata schemata, the navigation map, and the LOs with their associated resources.)*

Fig. 1. A small repository
**B. Resources**
Resources in our model can be any digital entity with educational value. Therefore, resources can be archives of different types (images, sound or video archives, electronic documents, e-books, etc.), external resources identified by a URL, or even entities of a more abstract nature (tuples of a table in a relational database, records in a bibliographical catalog, elements in an XML document, rows in a spreadsheet, etc.). Each resource has an associated unique identifier, which is used to refer to the resource from LOs.
For instance, the repository of Fig. 1 includes six image archives as resources, corresponding to photographs of different artistic objects (Fig. 1 actually shows thumbnails of these images).
**C. Metadata Schemata**
Metadata schemata are a cornerstone of the repositories. In our proposal, users can freely create new schemata and edit existing ones. It is therefore necessary to adopt a schema model that is general and agnostic enough to accommodate a great variety of users’ expressive needs. For this purpose, our model is inspired by generalized markup languages (e.g., SGML or XML) [2]. Each schema, in addition to having a unique name, is a hierarchical arrangement of elements. Each element is characterized by a descriptive name, and it can be of one of the following two types:
- **Description element**. These elements introduce descriptive values.
- **Structural element**. These elements do not introduce values, but they are useful to create intermediate structures.
Thus, by providing suitable hierarchies of structural and description elements, it is possible to mimic the description capabilities of common metadata schemata (e.g., LOM).
For instance, the repository of Fig. 1 includes one single schema, named artwork, oriented to provide a simplified description of an artistic object in terms of its artistic style, and, within this cultural style, in terms of the geographical area and the cultural period.
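The element hierarchy described above can be captured directly as a small tree structure. The following Python sketch (our own illustrative encoding, not part of any concrete repository implementation) models the simplified artwork schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """One node of a metadata schema (names here are illustrative)."""
    name: str
    is_description: bool               # description elements carry values
    children: List["Element"] = field(default_factory=list)

# Simplified "artwork" schema of Fig. 1: style, refined by area and period.
artwork = Element("style", True, [
    Element("area", True),
    Element("period", True),
])

def description_elements(element):
    """List all description elements of a schema, in document order."""
    names = [element.name] if element.is_description else []
    for child in element.children:
        names.extend(description_elements(child))
    return names
```

Structural elements would simply set `is_description` to `False`, contributing intermediate structure but no values.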
**D. Learning Objects**
Concerning LOs, they comprise the following parts:
- A (possibly empty) set of references to resources (references are made by id).
- A (possibly empty) set of references to other LOs.
- A metadata document. It is a tree-like structure conforming to one metadata schema. For this purpose, suitable values are assigned to the description elements (this assignment does not need to be complete: by default, values are initialized to \( ⊥ \)).
The repository of Fig. 1 includes one LO for each resource in the repository (notice, however, that this one-to-one correspondence between resources and LOs does not necessarily carry over to other repositories). For each LO there is a metadata document indicating the artistic style, geographical area and cultural period associated with the LO.
---
3 In concrete implementations it is possible to restrict editions to privileged users (e.g., instructors), as well as to introduce a more complex permission system.
E. Navigation map
Finally, the navigation map is a directed graph in which:
- Nodes represent sets of LOs, and arcs are labelled with element–value pairs used to narrow down the LOs: an arc’s target node will contain only those LOs exhibiting the element–value pair in the source node.
- The structure of the map is constrained by the schemata hierarchies. In this way, nodes can only be narrowed down with element–value pairs comprising child elements of elements present in incoming arcs.
- There is also a root node, which represents the overall set of LOs. It can be narrowed down by a special element $S$, whose values are the different schemata names, and whose child elements are the schemata root elements.
Fig. 1 also shows a navigation map for the repository. Notice how each path in this map is constrained by the schemata structure (in this way browsing starts by selecting a value for the artistic style, and then continues by selecting a value either for the geographical area or for the artistic period).
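An arc of the navigation map can be implemented as a simple filter over the LOs' element–value assignments. The sketch below uses made-up metadata values loosely inspired by the example of Fig. 1:

```python
# Toy LOs: each metadata document reduced to a flat element->value map.
# Values are illustrative, loosely inspired by the repository of Fig. 1.
los = [
    {"style": "Prehistoric",   "area": "North", "period": "Early"},
    {"style": "Prehistoric",   "area": "South", "period": "Late"},
    {"style": "Protohistoric", "area": "North", "period": "Late"},
]

def narrow(objects, element, value):
    """Follow a navigation-map arc: keep LOs matching (element, value)."""
    return [lo for lo in objects if lo.get(element) == value]

# Browsing path of the map: first by style, then by area.
prehistoric = narrow(los, "style", "Prehistoric")
northern = narrow(prehistoric, "area", "North")
```

Each node of the map thus corresponds to the set of LOs surviving the filters along its incoming path, which is why the map need not be materialized explicitly.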
III. RECONFIGURABILITY
In this section we address the concern of dynamically reconfiguring the metadata schemata of a repository. Subsection III.A analyzes how this reconfiguration is carried out and its effects on the different parts of the repository. Subsection III.B describes how to avoid such effects in the representation of LOs. Subsection III.C describes, in its turn, how to deal with navigation.
A. Reconfigurable Metadata Schemata
Our model lets users reconfigure metadata schemata by rearranging the hierarchical organization of elements. For instance, Fig. 2a shows an example concerning the repository in Fig. 1, which primes the artistic period as the primary classification focus instead of the artistic style (as in the example of Fig. 1).
Since the organization of a repository ultimately relies on its schemata, by reconfiguring these schemata the overall repository’s structure is also reconfigured. More precisely:
- The metadata documents of each LO must be changed to reflect the new hierarchical organization of elements. As an example, this effect is made apparent in Fig. 2b.
- The navigation map is also deeply affected by the reconfiguration. For instance, Fig. 2c shows how, after reconfiguring the schema of the repository of Fig. 1, the navigation map is also altered to reflect the change in focus represented by the reconfiguration (entering by period and refining by style or by area instead of entering by style and refining by period or by area).
B. Reconfigurable metadata documents
In order to address the effect of schema reconfigurations on metadata documents, we must find document representations that are resilient to reorganizations of the element hierarchies. Fortunately, since all the metadata documents conforming to a particular schema share a common structure (namely, the one represented by the schema), the solution in this case is easy: documents can be represented as tables assigning values to the elements in the schemata, instead of as whole hierarchical structures. Fig. 3a exemplifies this representation for the repository in Fig. 1. Notice that these tables remain invariant whatever reorganizations are carried out in the element hierarchies. In addition, the extra cost incurred by the representation is negligible: one level of indirection. Indeed, recovering the structure is a simple matter of traversing the corresponding metadata schema and querying the table for each traversed element.
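The idea can be sketched as follows (a minimal Python sketch; the schema encoding and the element names are illustrative assumptions, not Clavy's actual data model):

```python
# A metadata schema is a hierarchy of elements: each element maps to
# its child elements (illustrative names, not Clavy's actual model).
schema = {
    "Style": ["Period", "Area"],
    "Period": [],
    "Area": [],
}

# A metadata document is a flat table assigning values to elements; it
# stays valid no matter how the element hierarchy is rearranged.
document = {"Style": "Gothic", "Period": "XIII", "Area": "Castile"}

def recover(element, doc, schema, depth=0):
    """Recover the hierarchical document: traverse the schema and query
    the flat table for each traversed element (one level of indirection)."""
    lines = ["  " * depth + element + " = " + str(doc.get(element))]
    for child in schema[element]:
        lines += recover(child, doc, schema, depth + 1)
    return lines
```

Reconfiguring the schema only changes the `schema` dictionary; the flat `document` tables are untouched.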
C. Reconfigurable navigation maps
The reconfiguration of the navigation map is a substantially more involved matter. Indeed, as Fig. 2 makes apparent, a simple reconfiguration of a metadata schema can entail a complete reconfiguration of the underlying navigation map. Therefore, it is necessary to look for alternatives to the explicit representation of such a map.
Ideally, it would be convenient to provide a structure able to represent, in a compact and unified way, all the possible navigations induced by all the possible reconfigurations of the schemata. For this purpose, element–value pairs must be freed from the hierarchical organizations induced by these schemata. Therefore, a plain set of element–value pairs must be considered and, in each interaction state of the navigation process, all the meaningful selections must be made available. The result can be represented as a finite state machine, which we call a navigation automaton. This automaton consists of states labelled by sets of LOs, and transitions labelled by element–value pairs.
More precisely:
- There will be an initial state labelled by all the LOs in the repository.
- Given a state \( S \) labelled by a set of LOs \( O \), for each element-value pair \( e=v \) in the metadata document of some LO in \( O \) there will be a state \( S' \) labelled by all the LOs in \( O \) with \( e=v \) in their metadata documents, as well as a transition from \( S \) to \( S' \) labelled by \( e=v \).
Fig. 3b shows the navigation automaton for the repository in Fig. 1. Notice that the navigation automaton does not depend on the hierarchical organization of elements in the schemata, but only on the element-value pairs in the metadata documents. Therefore, it is not affected by reconfigurations in the schemata.
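The construction above can be sketched directly (an illustrative Python sketch under the assumption that each metadata document is reduced to a flat set of element–value pairs, as in Subsection III.B; this is not the paper's actual implementation):

```python
from itertools import chain

def build_automaton(docs):
    """Build the navigation automaton of a repository.

    docs: dict mapping LO id -> set of (element, value) pairs.
    Returns the states (frozensets of LO ids) and the transitions
    as a dict {(state, pair): next_state}.
    """
    initial = frozenset(docs)            # initial state: all the LOs
    states, transitions = {initial}, {}
    work = [initial]
    while work:
        state = work.pop()
        # every pair occurring in some document of the current state
        pairs = set(chain.from_iterable(docs[o] for o in state))
        for pair in pairs:
            nxt = frozenset(o for o in state if pair in docs[o])
            transitions[(state, pair)] = nxt
            if nxt not in states:
                states.add(nxt)
                work.append(nxt)
    return states, transitions
```

Note that the construction never consults the schema hierarchy, which is why the automaton is immune to reconfigurations.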
Unfortunately, although the explicit availability of the navigation automaton provides an efficient and elegant solution to navigation in the presence of reconfigurable schemata, in some cases the number of states in this automaton can grow very fast (in the worst case, exponentially with respect to the repository’s size). This fact can be realized by identifying states in navigation automata with formal concepts in concept lattices (as these are understood in formal concept analysis [18])1. The most extreme case, in which the number of states is \( 2^n-1 \) (with \( n \) the number of LOs), arises, for instance, when each pair of metadata documents is distinguished by a single element–value pair2.
This worst-case exponential growth constitutes a theoretical barrier that can hinder the explicit representation of the navigation automaton, especially in live and open scenarios such as those faced by a general-purpose LO repository. Therefore, it is advisable to look for alternative indexing approaches.
IV. INDEXING APPROACHES
This section introduces two indexing approaches to enable the dynamic recreation of navigation automata: inverted indexes (subsection IV.A) and navigation dendrograms (subsection IV.B). Subsection IV.C provides some experimental results comparing both approaches.
1 This construction is actually suggested by the proof of Theorem 1 in [12].
2 … (i.e., harder than NP-complete). Thus, the exponential factor underlying the intrinsic complexity of the problem can hinder the direct applicability of the technique on repositories of moderate or large sizes.
A. Inverted indexes
Inverted indexes are standard artifacts in information retrieval [24]. Basically, with each element–value pair, an inverted index associates the set of LOs including such a pair in their metadata documents. Fig. 4a shows an example of an inverted index for the repository in Fig. 1.
Notice that this kind of inverted index can be used to determine the set of selected objects on each navigation path by intersecting the sets associated with the element–value pairs traversed. The cost of evaluating these intersection operations constitutes the main shortcoming of the approach: while there has been extensive research on performing such intersections efficiently [3], their cost is not negligible. On the positive side is the availability of many mature implementations and frameworks that can be used in a straightforward way to support the technique; in our experiments we used Lucene [14] for this purpose.
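In essence, the approach amounts to intersecting posting sets; a toy Python sketch (not the Lucene-backed implementation actually used in our experiments):

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: element-value pair -> set of LO ids containing it.

    docs: dict mapping LO id -> set of (element, value) pairs.
    """
    index = defaultdict(set)
    for lo, pairs in docs.items():
        for pair in pairs:
            index[pair].add(lo)
    return index

def select(index, path):
    """LOs reached by a navigation path: intersect the posting sets of
    the traversed element-value pairs, smallest set first."""
    sets = sorted((index[p] for p in path), key=len)
    result = set(sets[0])
    for s in sets[1:]:
        result &= s
    return result
```

Intersecting smallest-first is a standard heuristic; the cost of these intersections is exactly the shortcoming discussed above.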
B. Navigation dendrograms
In order to avoid the proliferation of intersection operations, which is characteristic of inverted indexes representations, we have designed a tree-shaped indexing scheme inspired by dendrograms in hierarchical clustering [11]. The resulting structures are called navigation dendrograms.
Nodes in a navigation dendrogram represent subsets of the overall LO set. The LO set associated with a node is not explicitly stored in that node. Instead, each LO is hosted in exactly one node (the LO’s host node). The LOs placed in a node are called that node’s own LOs. The overall LO set of a node is given by its own LOs together with the own LOs of all its descendants. Finally, in order to partition the LO space, each node has an associated set of filtering element–value pairs, such that all the own LOs of the node and of all its descendants must include these filtering pairs in their metadata documents.
Navigation dendrograms can be built to contain at most $2K$ nodes ($K$ being the number of LOs in the repository). In addition, navigation can be articulated by maintaining a set of dendrogram nodes. Then, when an element–value pair is selected, this set is refined as follows:
- Nodes containing the selected pair in their filtering sets, or having an ancestor satisfying this condition, are preserved.
- Nodes having any descendant containing the selected pair in its filtering set are replaced by all the descendants satisfying this condition.
- Any other node is discarded.
By maintaining in the nodes all the information required to carry out this refinement (i.e., the filtering pairs of a node’s ancestors, and references to descendants per filtering pair), this process can be carried out very efficiently. Indeed, the resulting structure is a non-deterministic version of the navigation automaton that explicitly avoids the aforementioned potential exponential factor.
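The refinement rules can be sketched as follows (a minimal Python sketch; unlike this sketch, which recomputes ancestors and descendants by traversal, the actual structure caches that information in the nodes):

```python
class DNode:
    """A navigation dendrogram node: filtering pairs plus tree links."""
    def __init__(self, filters, children=()):
        self.filters = set(filters)      # filtering element-value pairs
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    def ancestors(self):
        n = self.parent
        while n is not None:
            yield n
            n = n.parent

    def descendants(self):
        for c in self.children:
            yield c
            yield from c.descendants()

def refine(frontier, pair):
    """One navigation step over a set of dendrogram nodes."""
    result = []
    for node in frontier:
        if pair in node.filters or any(pair in a.filters
                                       for a in node.ancestors()):
            result.append(node)          # rule 1: preserve the node
        else:
            # rule 2: replace by the descendants filtering on the pair;
            # rule 3: if there are none, the node is simply discarded
            result.extend(d for d in node.descendants()
                          if pair in d.filters)
    return result
```

A navigation state is thus a set of nodes rather than a single automaton state, which is the non-deterministic flavour mentioned above.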
Fig. 4b shows an example of navigation dendrogram for the repository in Fig. 1.
C. Experimental evaluation
In order to compare the two approaches described, we implemented both on Clavy, an experimental system for managing LO repositories with reconfigurable metadata schemata (http://clavy.fdi.ucm.es).
We also set up an experiment consisting of adding the LOs in Chasqui [20], a repository of 6283 LOs on Pre-Columbian American archeology, to Clavy, and of simulating runs of navigation and schema reconfiguration operations. Each run interleaved 100 LO insertions with $0.1n$ navigation operations randomly interleaved with $0.01n$ reconfigurations ($n$ being the number of LOs inserted so far). Each navigation operation consisted, in turn, of selecting a feasible element–value pair, computing the next interaction state, and visiting all the LOs filtered. Each reconfiguration operation consisted of a feasible interchange of two randomly selected elements (by feasible we mean one avoiding cycles in the resulting schema), followed by a navigation step. Inverted indexes were managed using Lucene, while navigation dendrograms were managed using our own implementation. In both cases in-memory indexes were used, to prevent the side effects of persistence from disturbing the experiment.
Fig. 5 shows the results obtained from the two runs (the experiment ran on a PC with Windows 10, a 3.4 GHz Intel microprocessor, and 8 GB of DDR3 RAM). The vertical axis corresponds to the number of operations carried out so far. The horizontal axis corresponds to accumulated time (in seconds). As is made apparent, the dendrogram-based approach clearly outperforms the inverted indexes (even though we are using a highly optimized framework, Lucene, for inverted indexing vs. our own in-house experimental implementation for dendrograms).
V. RELATED WORK
Our proposal is similar to other systems for browsing information spaces that, like ours, envision the possibility of the user reconfiguring the underlying metadata schemata (e.g., [8][19]). However, these systems are typically supported by general-purpose semantic web or relational database solutions instead of by model-specific indexing approaches.
A seminal work on using concept lattices to organize and navigate information spaces is [4]. Some recent systems using concept lattices as their underlying indexing structure are [7][22]. However, all these approaches face the theoretical limit imposed by the intrinsic complexity of formal concept analysis. This is why we have proposed a simpler but still practical approximation based on navigation dendrograms.
Inverted indexes have been extensively used to support hierarchical navigation (e.g., guided by faceted thesauri). Works like [23] describe efficient approaches to enable this navigation. However, all these approaches are based on the assumption of pre-established and immutable schemata. As noticed in [1], if this assumption is left out, inverted indexes can become costly due to the set operations involved.
Finally, it is worthwhile to notice that clustering techniques have been extensively used in open metadata schemata (i.e., folksonomy-like systems) to enable the discovery of useful semantic relationships among terms, in order to provide better guidance to users (e.g., [9][13][16]). Thus, clustering in these approaches is oriented towards enhancing users’ navigation efficiency, while our navigation dendrograms are oriented towards enhancing the internal efficiency of the supporting software.
VI. CONCLUSIONS AND FUTURE WORK
In this paper we have addressed the problem of dynamic reconfigurability in LO repositories. Since metadata schemata can be rearranged in unexpected ways, it is necessary to use internal representation mechanisms that are resilient to these changes. In the case of metadata documents we have shown how a tabular representation of the assignment of values to the elements in the schemata suffices. However, dealing with the navigation system is substantially more cumbersome. We have shown how a concept lattice-like representation (which we have called a navigation automaton) can elegantly address this concern. However, this representation exhibits a potential exponential factor that, at least in theory, hinders its applicability (especially in live and open settings, in which schemata evolution cannot be envisioned a priori). For this reason, we have proposed alternative indexing approaches (one based on inverted indexes, and another based on dendrograms). We have also provided some evidence of how dendrograms can outperform inverted indexes.
We are currently working on optimizing and persisting our representations. In addition, we want to further study the growth rate of the navigation automaton in real-world scenarios, to support arbitrary Boolean queries, and to run more empirical evaluations.
ACKNOWLEDGEMENTS
This work has been supported by the BBVA Foundation (grant HUM14_251) and the Spanish Ministry of Economy and Competitiveness (grant TIN2014-52010-R).
REFERENCES
[15] Polsani, P. Use and Abuse of Reusable Learning Objects. JODI 3(4), 2003.
Stream Programming: Luring Programmers into the Multicore Era
Bill Thies
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Spring 2008
Multicores are Here
- # of cores: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512
- Hardware was responsible for improving performance
- Now, the performance burden falls on programmers
Is Parallel Programming a New Problem?
- No! Decades of research targeting multiprocessors
- Languages, compilers, architectures, tools...
- What is different today?
1. **Multicores vs. multiprocessors.** Multicores have:
- New interconnects with non-uniform communication costs
- Faster on-chip communication than off-chip I/O, memory ops
- Limited per-core memory availability
2. **Non-expert programmers**
- Supercomputers with >2048 processors today: 100 [top500.org]
- Machines with >2048 cores in 2020: >100 million [ITU, Moore]
3. **Application trends**
- Embedded: 2.7 billion cell phones vs 850 million PCs [ITU 2006]
- Data-centric: YouTube streams 200 TB of video daily
Streaming Application Domain
• For programs based on streams of data
– Audio, video, DSP, networking, and cryptographic processing kernels
– Examples: HDTV editing, radar tracking, microphone arrays, cell phone base stations, graphics
Streaming Application Domain
- **For programs based on streams of data**
- Audio, video, DSP, networking, and cryptographic processing kernels
- Examples: HDTV editing, radar tracking, microphone arrays, cell phone base stations, graphics
- **Properties of stream programs**
- Regular and repeating computation
- Independent filters with explicit communication
- Data items have short lifetimes
# Brief History of Streaming
## Models of Computation
- Petri Nets
- Kahn Proc. Networks
- Synchronous Dataflow
- Comp. Graphs
- Communicating Sequential Processes
## Modeling Environments
- Ptolemy
- Matlab/Simulink
- Gabriel
- Grape-II
- etc.
## Languages / Compilers
- Lucid
- Id
- Sisal
- Erlang
- Esterel
- C
- lazy
- VAL
- Occam
- LUSTRE
- pH
- StreamIt
- Brook
- Cg
- StreamC
## Strengths
- Elegance
- Generality
## Weaknesses
- Unsuitable for static analysis
- Cannot leverage deep results from DSP / modeling community
---
**“Stream Programming”**
StreamIt: A Language and Compiler for Stream Programs
- **Key idea:** design language that enables static analysis
- **Goals:**
1. Expose and exploit the parallelism in stream programs
2. Improve programmer productivity in the streaming domain
- **Project contributions:**
- Language design for streaming [CC'02, CAN'02, PPoPP'05, IJPP'05]
- Automatic parallelization [ASPLOS'02, G.Hardware'05, ASPLOS'06]
- Domain-specific optimizations [PLDI'03, CASES'05, TechRep'07]
- Cache-aware scheduling [LCTES'03, LCTES'05]
- Extracting streams from legacy code [MICRO'07]
- User + application studies [PLDI'05, P-PHEC'05, IPDPS'06]
- 7 years, 25 people, 300 KLOC
- 700 external downloads, 5 external publications
Part 1: Language Design
*William Thies, Michal Karczmarek, Saman Amarasinghe (CC’02)*
*William Thies, Michal Karczmarek, Janis Sermulins, Rodric Rabbah, Saman Amarasinghe (PPoPP’05)*
StreamIt Language Basics
• High-level, architecture-independent language
– Backend support for uniprocessors, multicores (Raw, SMP), cluster of workstations
• Model of computation: synchronous dataflow
– Program is a graph of independent filters
– Filters have an atomic execution step with known input / output rates
– Compiler is responsible for scheduling and buffer management
• Extensions to synchronous dataflow
– Dynamic I/O rates
– Support for sliding window operations
– Teleport messaging [PPoPP’05]
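Because input and output rates are statically known, the compiler can solve the steady-state balance equations offline. A minimal, illustrative sketch of computing repetition counts for a linear pipeline (hypothetical Python, not the StreamIt compiler's actual scheduler):

```python
from math import gcd

def repetitions(rates):
    """Repetition counts for a pipeline of filters.

    rates: list of (pop, push) per filter, where filter i+1 consumes
    what filter i produces.  In a steady state the balance equation
    reps[i] * push_i == reps[i+1] * pop_{i+1} must hold for every
    adjacent pair, so we scale repetitions to the least common multiple.
    """
    reps = [1] * len(rates)
    for i in range(len(rates) - 1):
        produced = reps[i] * rates[i][1]      # items pushed per cycle
        consumed = rates[i + 1][0]            # items popped per firing
        lcm = produced * consumed // gcd(produced, consumed)
        scale = lcm // produced
        for j in range(i + 1):                # rescale the prefix
            reps[j] *= scale
        reps[i + 1] = lcm // consumed
    return reps
```

For example, a filter pushing 2 items feeding one popping 3 needs 3 and 2 firings per steady-state cycle, respectively. The compiler uses such repetition counts to size buffers and schedule filters without any run-time rate negotiation.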
Representing Streams
• Conventional wisdom: stream programs are graphs
– Graphs have no simple textual representation
– Graphs are difficult to analyze and optimize
• Insight: stream programs have structure
(Figure: an unstructured stream graph vs. a structured one.)
Structured Streams
- Each structure is single-input, single-output
- Hierarchical and composable
Example stream graphs: Radar-Array Front End, MP3 Decoder, Bitonic Sort, FM Radio with Equalizer, and the Ground Moving Target Indicator (GMTI: 99 filters, 3566 filter instances).
Example Syntax: FMRadio
```c
void->void pipeline FMRadio(int N, float lo, float hi) {
add AtoD();
add FMDemod();
add splitjoin {
split duplicate;
for (int i=0; i<N; i++) {
add pipeline {
add LowPassFilter(lo + i*(hi - lo)/N);
add HighPassFilter(lo + i*(hi - lo)/N);
}
}
join roundrobin();
}
add Adder();
add Speaker();
}
```
StreamIt Application Suite
- Software radio
- Frequency hopping radio
- Acoustic beam former
- Vocoder
- FFTs and DCTs
- JPEG Encoder/Decoder
- MPEG-2 Encoder/Decoder
- MPEG-4 (fragments)
- Sorting algorithms
- GMTI (Ground Moving Target Indicator)
- DES and Serpent crypto algorithms
- SSCA#3 (HPCS scalable benchmark for synthetic aperture radar)
- Mosaic imaging using RANSAC algorithm
Total size: 60,000 lines of code
Control Messages
- Occasionally, low-bandwidth control messages are sent between actors.
- Often demands precise timing:
- Communications: adjust protocol, amplification, compression
- Network router: cancel invalid packet
- Adaptive beamformer: track a target
- Respond to user input, runtime errors
- Frequency hopping radio
- Traditional techniques:
- Direct method call (no timing guarantees)
- Embed message in stream (opaque, slow)
Idea 2: Teleport Messaging
- Looks like method call, but timed relative to data in the stream
```java
TargetFilter x;
if newProtocol(p) {
x.setProtocol(p) @ 2;
}
```
```java
void setProtocol(int p) {
reconfig(p);
}
```
- Exposes dependences to compiler
- Simple and precise for user
- Adjustable latency
- Can send upstream or downstream
Part 2: Automatic Parallelization
Michael I. Gordon, William Thies, Saman Amarasinghe (ASPLOS’06)
Streaming is an Implicitly Parallel Model
- Programmer thinks about functionality, not parallelism
- More explicit models may…
- Require knowledge of target [MPI] [cG]
- Require parallelism annotations [OpenMP] [HPF] [Cilk] [Intel TBB]
- Novelty over other implicit models?
[Erlang] [MapReduce] [Sequoia] [pH] [Occam] [Sisal] [Id] [VAL] [LUSTRE] [HAL] [THAL] [SALSA] [Rosette] [ABCL] [APL] [ZPL] [NESL] […]
\[ Exploiting streaming structure for robust performance \]
Parallelism in Stream Programs
Task parallelism
- Analogous to thread (fork/join) parallelism
Data parallelism
- Analogous to DOALL loops
Pipeline parallelism
- Analogous to ILP that is exploited in hardware
Evaluation: Fine-Grained Data Parallelism
Throughput Normalized to Single Core StreamIt
- BitonicSort
- Channel/Vocoder
- DCT
- DES
- FFT
- Filterbank
- FMRadio
- Serpent
- TDE
- MPEG2-subset
- Vocoder
- Radar
- Geometric Mean
Raw Microprocessor
- 16 inorder, single-issue cores with D$ and I$
- 16 memory banks, each bank with DMA
- Cycle accurate simulator
Evaluation: Fine-Grained Data Parallelism
Good Parallelism! Too Much Synchronization!
Coarsening the Granularity
(Figure: a splitjoin whose two branches each pipeline BandPass, Compress, Process, and Expand filters, followed by BandStop filters and a Joiner feeding an Adder.)
Evaluation: Coarse-Grained Data Parallelism
Good Parallelism! Low Synchronization!
Simplified Vocoder (targeting a 4-core machine)
- Data Parallelize
- Data + Task Parallel Execution
- We Can Do Better
Coarse-Grained Software Pipelining
Evaluation: Coarse-Grained Task + Data + Software Pipelining
(Chart: throughput normalized to single-core StreamIt, comparing Fine-Grained Data; Coarse-Grained Task + Data; Coarse-Grained Task + Data + Software Pipeline.)
Best Parallelism! Lowest Synchronization!
Parallelism: Take Away
• Stream programs have abundant parallelism
– However, parallelism is obfuscated in language like C
• Stream languages enable new & effective mapping
– In C, analogous transformations impossibly complex
– In StreamC or Brook, similar transformations possible
[Khailany et al., IEEE Micro’01] [Buck et al., SIGGRAPH’04] [Das et al., PACT’06] […]
• Results should extend to other multicores
– Parameters: local memory, comm.-to-comp. cost
– Preliminary results on Cell are promising [Zhang, dasCMP’07]
Part 3: Domain-Specific Optimizations
Andrew Lamb, William Thies, Saman Amarasinghe (PLDI’03)
Sitij Agrawal, William Thies, Saman Amarasinghe (CASES’05)
DSP Optimization Process
- Given specification of algorithm, minimize the computation cost
DSP Optimization Process
• Given specification of algorithm, minimize the computation cost
– Currently done by hand (MATLAB)
DSP Optimization Process
• Given specification of algorithm, minimize the computation cost
– Currently done by hand (MATLAB)
• Can compiler replace DSP expert?
– Library generators limited [Spiral][FFTW][ATLAS]
– Enable unified development environment
Focus: Linear State Space Filters
• Properties:
– Outputs are linear function of inputs and states
– New states are linear function of inputs and states
• Most common target of DSP optimizations
– FIR / IIR filters
– Linear difference equations
– Upsamplers / downsamplers
– DCTs
\[
\begin{align*}
x' &= Ax + Bu \\
y &= Cx + Du
\end{align*}
\]
**Focus: Linear Filters**
```
float->float filter Scale {
work push 2 pop 1 {
float u = pop();
push(u);
push(2*u);
}
}
```
Combining Adjacent Filters
\[ y = Du \]
\[ z = Ey \]
\[ z = EDu \]
\[ z = Gu \]
Combination Example

Filter 1, \( D = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \), followed by Filter 2, \( E = \begin{bmatrix} 4 & 5 & 6 \end{bmatrix} \): 6 mults per output.

Combined Filter, \( G = ED = \begin{bmatrix} 32 \end{bmatrix} \): 1 mult per output.
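The combined matrix can be checked with a plain matrix product (a small Python sketch; the matrices are the ones from the example above):

```python
def matmul(E, D):
    """Dense matrix product, combining two linear filters y = D u and
    z = E y into a single filter z = (E D) u."""
    rows, inner, cols = len(E), len(D), len(D[0])
    return [[sum(E[i][k] * D[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

D = [[1], [2], [3]]   # filter 1: pops 1 value, pushes 3 (u, 2u, 3u)
E = [[4, 5, 6]]       # filter 2: pops 3 values, pushes 1
G = matmul(E, D)      # combined filter G = [[32]]: one mult per output
```

The six multiplications of the two original filters collapse into the single entry of G.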
The General Case
- If matrix dimensions mis-match? Matrix expansion:
\[
\begin{align*}
A^e &= A^n A_{pre} \\
B^e &= \begin{bmatrix}
A^n B_{pre} & A^{n-1}B & A^{n-2}B & \ldots & B \\
\end{bmatrix} \\
C^e &= \begin{bmatrix}
CA_{pre} \\
CA A_{pre} \\
\vdots \\
CA^{n-1}A_{pre}
\end{bmatrix} \\
D^e &= \begin{bmatrix}
CB_{pre} & D & 0 & 0 & \ldots & 0 & 0 \\
CAB_{pre} & CB & D & 0 & \ldots & 0 & 0 \\
CA^2B_{pre} & CAB & CB & D & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
CA^{n-1}B_{pre} & CA^{n-2}B & CA^{n-3}B & CA^{n-3}B & \ldots & CB & D
\end{bmatrix}
\end{align*}
\]
## The General Case
### Pipelines
\[
A = \begin{bmatrix}
A_1 & 0 \\
B_2C_1 & A_2
\end{bmatrix} \quad A_{\text{pre}} = \begin{bmatrix}
A_1^e & 0 \\
B_{\text{pre}2}C_1^e & A_{\text{pre}2}
\end{bmatrix}
\]
\[
B = \begin{bmatrix}
B_1 \\
B_2D_1
\end{bmatrix}
B_{\text{pre}} = \begin{bmatrix}
B_1^e \\
B_{\text{pre}2}D_1^e
\end{bmatrix}
\]
\[
C = \begin{bmatrix}
D_2C_1 & C_2
\end{bmatrix}
\]
\[
D = D_2D_1
\]
\[
\text{initVec} = \begin{bmatrix}
\text{initVec}_1 \\
\text{initVec}_2
\end{bmatrix}
\]
### Feedback Loops
\[
x_1' = A_1x_1 + B_1u_1 = A_1x_1 + B_1y = A_1x_1 + B_1(C_2x_2 + D_{2,1}u + D_2C_3x_3)
\]
\[
= A_1x_1 + B_1C_2x_2 + B_1D_{2,1}u + B_1D_2C_3x_3
\]
\[
x_2' = A_2x_2 + B_2u_2 = A_2x_2 + B_2u + B_{2,2}y_3 = A_2x_2 + B_{2,1}u + B_2C_3x_3
\]
\[
y_2' = C_2x_2 + D_2u_2 = C_2x_2 + D_{2,1}u + D_{2,2}y_3 = C_2x_2 + D_{2,1}u + D_2C_3x_3
\]
\[
x_3' = A_3x_3 + B_3u_3 = A_3x_3 + B_3y_1 = A_3x_3 + B_3(C_1x_1 + D_1u_1)
\]
\[
= A_3x_3 + B_3(C_1x_1 + D_1y) = A_3x_3 + B_3(C_1x_1 + D_1(C_2x_2 + D_{2,1}u + D_{2,2}C_3x_3))
\]
\[
= A_3x_3 + B_3C_1x_1 + B_3D_1C_2x_2 + B_3D_1D_{2,1}u + B_3D_1D_{2,2}C_3x_3
\]
The General Case
**Splitjoins**
\[
A = \begin{bmatrix}
A_s & 0 & 0 & \ldots & 0 \\
A_{1rs} & A_{1rr} & 0 & \ldots & 0 \\
A_{2rs} & 0 & A_{2rr} & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_{krr} & 0 & 0 & \ldots & A_{krs}
\end{bmatrix}
\]
\[
B = \begin{bmatrix}
B_s \\
B_{1r} \\
B_{2r} \\
\vdots \\
B_{kr}
\end{bmatrix}
\]
\[
C = \begin{bmatrix}
C_{1s1} & C_{1r1} & 0 & \ldots & 0 \\
C_{2s1} & C_{2r1} & 0 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
C_{ks1} & 0 & 0 & \ldots & C_{kr1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
C_{sk1} & 0 & 0 & \ldots & C_{krk}
\end{bmatrix}
\]
\[
D = \begin{bmatrix}
D_{11} \\
D_{21} \\
\vdots \\
D_{k1} \\
D_{1k} \\
D_{2k} \\
\vdots \\
D_{kk}
\end{bmatrix}
\]
\[
C_i = \begin{bmatrix}
C_{is1} & C_{ir1} \\
C_{is2} & C_{ir2} \\
\vdots & \vdots \\
C_{iexecutions} & C_{irexexecutions}
\end{bmatrix}
\]
\[
D_i = \begin{bmatrix}
D_{i1} \\
D_{i2} \\
\vdots \\
D_{iexecutions}
\end{bmatrix}
\]
\[
A_{pre} = \begin{bmatrix}
0 & 0 & 0 & \ldots & 0 \\
0 & A_{pre1rr} & 0 & \ldots & 0 \\
0 & 0 & A_{pre2rr} & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & A_{prekrr}
\end{bmatrix}
\]
\[
B_{pre} = \begin{bmatrix}
B_{pres} \\
B_{pre1r} \\
B_{pre2r} \\
\vdots \\
B_{prekr}
\end{bmatrix}
\]
\[
\text{initVec} = \begin{bmatrix}
\delta \\
\text{initVec}_{1r} \\
\text{initVec}_{2r} \\
\vdots \\
\text{initVec}_{kr}
\end{bmatrix}
\]
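The block structure of the splitjoin matrices above can be sketched with NumPy. The example below assembles the A matrix for k = 2 branches; the dimensions and fill values are arbitrary, chosen only to show the block layout:

```python
import numpy as np

# Splitter state of size ns, branch states of sizes n1 and n2.
ns, n1, n2 = 2, 3, 2
A_s = np.ones((ns, ns))
A_1rs, A_1rr = np.full((n1, ns), 2.0), np.full((n1, n1), 3.0)
A_2rs, A_2rr = np.full((n2, ns), 4.0), np.full((n2, n2), 5.0)

# Each branch reads the splitter state (first block column) and its own
# state (diagonal block), never the other branch's state.
A = np.block([
    [A_s,   np.zeros((ns, n1)), np.zeros((ns, n2))],
    [A_1rs, A_1rr,              np.zeros((n1, n2))],
    [A_2rs, np.zeros((n2, n1)), A_2rr],
])
assert A.shape == (ns + n1 + n2, ns + n1 + n2)
# Off-diagonal cross-branch blocks are zero:
assert np.all(A[ns:ns + n1, ns + n1:] == 0)
assert np.all(A[ns + n1:, ns:ns + n1] == 0)
```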
Floating-Point Operations Reduction
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Flops Removed (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>FIR</td>
<td>80%</td>
</tr>
<tr>
<td>RateConvert</td>
<td>80%</td>
</tr>
<tr>
<td>TargetDetect</td>
<td>80%</td>
</tr>
<tr>
<td>FMRadio</td>
<td>80%</td>
</tr>
<tr>
<td>Radar</td>
<td>80%</td>
</tr>
<tr>
<td>FilterBank</td>
<td>80%</td>
</tr>
<tr>
<td>Vocoder</td>
<td>80%</td>
</tr>
<tr>
<td>Oversample</td>
<td>80%</td>
</tr>
<tr>
<td>DTOA</td>
<td>0.3%</td>
</tr>
</tbody>
</table>
![Bar chart showing floating-point operations reduction for different benchmarks. The benchmarks include FIR, RateConvert, TargetDetect, FMRadio, Radar, FilterBank, Vocoder, Oversample, and DTOA. The reduction is shown as a percentage for both linear and frequency domains. The chart indicates a significant reduction in operations for most benchmarks, with some showing a 140% increase.]
Radar (Transformation Selection)
- Maximal combination and shifting to frequency domain: 2.4 times as many FLOPS
- Using transformation selection: half as many FLOPS
Execution Speedup (on a Pentium IV)
Additional transformations:
1. Eliminating redundant states
2. Eliminating parameters (non-zero, non-unary coefficients)
3. Translation to the compressed domain
StreamIt: Lessons Learned
- In practice, I/O rates of filters are often matched [LCTES’03]
- Over 30 publications study an uncommon case (CD-DAT)
- Multi-phase filters complicate programs, compilers
- Should maintain simplicity of only one atomic step per filter
- Programmers accidentally introduce mutable filter state
```c
void->int filter SquareWave() {
work push 2 {
push(0);
push(1);
}
}
```
```c
void->int filter SquareWave() {
int x = 0;
work push 1 {
push(x);
x = 1 - x;
}
}
```
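Modeled as Python generators (a sketch, not StreamIt semantics), the two filters above produce the same stream, but the second carries state across firings, so firing i depends on firing i-1 and the filter can no longer be data-parallelized:

```python
from itertools import islice

def square_wave_stateless():
    # Pushes two items per firing; every firing is independent (data-parallel).
    while True:
        yield 0
        yield 1

def square_wave_stateful():
    # Carries x across firings: each firing depends on the previous one,
    # which serializes execution.
    x = 0
    while True:
        yield x
        x = 1 - x

# Both produce the identical stream, showing the state was accidental.
assert list(islice(square_wave_stateless(), 6)) == [0, 1, 0, 1, 0, 1]
assert list(islice(square_wave_stateful(), 6)) == [0, 1, 0, 1, 0, 1]
```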
Future of StreamIt
- Goal: influence the next big language
**Origins of C++**
- Structural influence
- Feature influence
- Academic origin
Source: B. Stroustrup, *The Design and Evolution of C++*
Research Trajectory
• **Vision:** Make emerging computational substrates universally accessible and useful
1. **Languages, compilers, & tools for multicores**
– I believe new language / compiler technology can enable scalable and robust performance
– Next inroads: expose & exploit flexibility in programs
2. **Programmable microfluidics**
– We have developed programming languages, tools, and flexible new devices for microfluidics
– Potential to revolutionize biology experimentation
3. **Technologies for the developing world**
– TEK: enable Internet experience over email account
– Audio Wiki: publish content from a low-cost phone
– uBox / uPhone: monitor & improve rural healthcare
Conclusions
• A parallel programming model will succeed only by luring programmers, making them do less, not more
• Stream programming lures programmers with:
– Elegant programming primitives
– Domain-specific optimizations
• Meanwhile, streaming is implicitly parallel
– Robust performance via task, data, & pipeline parallelism
• We believe stream programming will play a key role in enabling a transition to multicore processors
Contributions
– Structured streams
– Teleport messaging
– Unified algorithm for task, data, pipeline parallelism
– Software pipelining of whole procedures
– Algebraic simplification of whole procedures
– Translation from time to frequency
– Selection of best DSP transforms
Acknowledgments
• Project supervisors
– Prof. Saman Amarasinghe – Dr. Rodric Rabbah
• Contributors to this talk
– Michael I. Gordon (Ph.D. Candidate) – leads StreamIt backend efforts
– Andrew A. Lamb (M.Eng) – led linear optimizations
– Sitij Agrawal (M.Eng) – led statespace optimizations
• Compiler developers
– Kunal Agrawal – Jasper Lin – Janis Sermulins
– Allyn Dimock – Michal Karczmarek – Phil Sung
– Qiuyuan Jimmy Li – David Maze – David Zhang
• Application developers
– Basier Aziz – Shirley Fung – Ali Meli
– Matthew Brown – Hank Hoffmann – Satish Ramaswamy
– Matthew Drake – Chris Leger – Jeremy Wong
• User interface developers
– Kimberly Kuo – Juan Reyes
Introduction to Graph Cloud Services, Database, and Analytics
Xavier Lopez, Senior Director Product Management, Oracle
Zhe Wu, Architect, Oracle
Masahiro Yoshioka, Principal Engineer, IT Solutions Division, Mazda
October 2, 2017
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
Program Agenda
1. Product Introduction
2. Use Cases
3. Feature Overview
4. Demo
5. Mazda Example
Oracle’s Spatial and Graph Strategy
On Premise and Oracle Cloud
Oracle Database Spatial and Graph
Oracle Big Data Spatial and Graph
Spatial and Graph in Oracle Cloud
Two Graph Data Models
Property Graph Model
- Path Analytics
- Social Network Analysis
- Entity analytics
RDF Data Model
- Data federation
- Knowledge representation
- Semantic Web
Use Case
Graph Model
Industry Domain
Social Network Analysis
- Financial
- Retail, Marketing
- Social Media
- Smart Manufacturing
Linked Data Semantic Web
- Life Sciences
- Health Care
- Publishing
- Finance
Graph Database Features:
- Scalability and Performance
- Graph analytics
- Graph Visualization
- Graph Query Language
- Standard interfaces
- Integration with Machine Learning tools
Graph Product Options
Oracle Big Data Spatial and Graph
- Available for Big Data platform/BDCS
- Hadoop, HBase, Oracle NoSQL
- Supported both on BDA and commodity hardware
- CDH and Hortonworks
- Database connectivity through Big Data Connectors or Big Data SQL
- Included in Big Data Cloud Service
Oracle Spatial and Graph (DB option)
- Available with Oracle 12.2 / DBCS
- Using tables for graph persistence
- Graph views on relational data
- In-database graph analytics
- Sparsification, shortest path, page rank, triangle counting, WCC, sub graphs
- SQL queries possible
- Included in Database Cloud Service
Use Cases
Graph Analysis for Business Insight
Identify Influencers
Discover Graph Patterns in Big Data
Generate Recommendations
Some Use Case Scenarios
• **Finance**
– Customer 360, Fraud detection
• **Public Sector**
– Tax Evasion, Crime network analysis
• **Retail**
– Recommendation, sentiment analysis
• **Manufacturing**
– Analyzing complex bill of materials (BoM)
Financial Services
Applying Graph Analysis To Improve Customer Service
- Model customer relationship to products, services, people, places.
- Analyze money customer’s flow between non-bank to bank accounts
- Combine internal CRM data with enterprise and social media content
- Identify high-value customers across business divisions
- Enhance new product/service opportunities
- Provide Real-time recommendations
Tax Fraud Analysis
Chinese Province Tax Office
Challenge:
– Modeling relationships between individuals and corporations
– Ingest documents, social media, web content, and publically available open data
– Create a ‘picture’ of the taxpayer network
• Taxpayer relationship with other taxpayers
• If a company structure, identify associated directors and shareholders in that company
• Relationship between taxpayer’s and their associates’ financial affairs
• Identify relevant intermediaries acting on behalf of taxpayer
– Explore tax evasion and fraud, trigger a formal case investigation
Analyzing Blockchain Ledger Transactions
Land Management, Banking, Public Services
• Distributed Ledgers being adopted in Finance, Public Sector
• Load and manage massive transactions from a distributed digital ledger
• Efficiently traverse a blockchain transaction graph
• Query and visualize – search for patterns of activity
Public Security: Analyzing Criminal Networks
Chinese Police Department
Business Requirement
– Model relationships between known and suspected criminals
– Ingest documents, social media, web content, chat rooms, flight records, hotel stay registries, and publically available open datasets.
How graph analysis solves the problem
• Search for known individuals in web of content
• Analyze relationship with other criminals, travel history, addresses, employers
• Relationship between suspects and their financial affairs
Courtesy Tom Sawyer Perspectives
IT Network Modeling & Monitoring
• Model cyber network topology as a Graph
• Identify CyberNetwork intrusions
– Combine deep learning with graph analytics
• Visualize real-time state of CyberNetwork
• Analyze impact of component failure on an IoT system?
– Reachability analysis: understand which routines, libraries, servers, routers are affected by a modification
Automotive Manufacturing
Support high variance, short innovation cycles of complex autos
Graph View of Enterprise Data
- Unified graph representation of BoM, Configuration, CAE, Simulation...
- Generate “graph view” of relational data, or model instance data as graph
- Apply graph query and search across BoM and configuration models
- Apply graph analytics
- Scale to trillions of nodes and edges
Feature Overview
The Property Graph Data Model
• A set of vertices (or nodes)
– each vertex has a unique identifier.
– each vertex has a set of in/out edges.
– each vertex has a collection of key-value properties.
• A set of edges (or links)
– each edge has a unique identifier.
– each edge has a head/tail vertex.
– each edge has a label denoting type of relationship between two vertices.
– each edge has a collection of key-value properties.
https://github.com/tinkerpop/blueprints/wiki/Property-Graph-Model
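The model above maps naturally onto a small in-memory sketch. The class names, sample vertices, and property keys below are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    id: int                                   # unique identifier
    props: dict = field(default_factory=dict) # key-value properties

@dataclass
class Edge:
    id: int                                   # unique identifier
    label: str                                # relationship type
    tail: int                                 # source vertex id
    head: int                                 # destination vertex id
    props: dict = field(default_factory=dict) # key-value properties

vertices = {1: Vertex(1, {"name": "alice"}), 2: Vertex(2, {"name": "bob"})}
edges = {10: Edge(10, "knows", 1, 2, {"since": 2017})}

# Out-edges of vertex 1:
out = [e for e in edges.values() if e.tail == 1]
assert [e.label for e in out] == ["knows"]
```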
Relational Model vs. Graph Model
• Relational Model
• Graph Model
Copyright © 2017, Oracle and/or its affiliates. All rights reserved.
Courtesy: Tom Sawyer 2016
Architecture of Property Graph Support
Graph Analytics
- Parallel In-Memory Graph Analytics/Graph Query (PGX)
Graph Data Access Layer (DAL)
- Blueprints & Lucene/SolrCloud
Scalable and Persistent Storage Management
- Oracle RDBMS
- Apache HBase
- Oracle NoSQL Database
Java APIs
- REST/Web Service/Notebooks
- Java, Groovy, Python, ...
Property Graph formats
- GraphML
- GML
- Graph-SON
- Flat Files
Java APIs/JDBC/SQL/PLSQL
RDF (RDF/XML, N-Triples, N-Quads, TriG, N3, JSON)
Architecture of Property Graph Support
Graph Data Access Layer (DAL)
- Blueprints & Lucene/SolrCloud
Graph Analytics
- Parallel In-Memory Graph Analytics/Graph Query (PGX)
- Apache Spark
Java APIs
Java APIs/JDBC/SQL/PLSQL
Scalable and Persistent Storage Management
- Oracle RDBMS
- Apache HBase
- Oracle NoSQL Database
Property Graph formats
- GraphML
- GML
- Graph-SON
- Flat Files
REST/Web Service/Notebooks
Java, Groovy, Python, …
Rich set of built-in parallel graph algorithms
- Detecting Components and Communities
- Tarjan’s, Kosaraju’s, Weakly Connected Components, Label Propagation (w/ variants), Soman and Narang’s Sparsification
- Evaluating Community Structures
- Conductance, Modularity, Clustering Coefficient (Triangle Counting), Adamic-Adar
- Link Prediction
- SALSA (Twitter’s Who-to-follow)
- Ranking and Walking
- Pagerank, Personalized Pagerank, Betweenness Centrality (w/ variants), Closeness Centrality, Degree Centrality, Eigenvector Centrality, HITS, Random walking and sampling (w/ variants)
- Path-Finding
- Hop-Distance (BFS), Dijkstra’s, Bi-directional Dijkstra’s, Bellman-Ford’s
- Other Classics
- Vertex Cover, Minimum Spanning-Tree (Prim’s)
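As an illustration of what the built-in Pagerank entry above computes, here is a plain-Python power-iteration sketch; the damping factor and iteration count are generic textbook defaults, not product settings:

```python
def pagerank(adj, d=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {vertex: [out-neighbors]}."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in adj}
        for v, outs in adj.items():
            if outs:
                share = d * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling vertex: spread its rank uniformly
                for w in adj:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

# Tiny 3-node cycle: by symmetry all ranks are equal and sum to 1.
r = pagerank({1: [2], 2: [3], 3: [1]})
assert abs(sum(r.values()) - 1.0) < 1e-9
assert abs(r[1] - r[2]) < 1e-9
```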
... and parallel graph mutation operations
- The original graph
- Create Bipartite Graph
- Create Undirected Graph
- Filter-Expression
- Sort-By-Degree (Renumbering)
- Filtered Subgraph
- Simplify Graph
- Left Set: “a,b,e”
Graph Analysis Algorithms can be very hard to code ...
BDSG and OSG Property Graph come with 40+ pre-built algorithms
• Example: Find the size of the 2-hop network of vertices (Gremlin+Python)
```python
sum([v.query()
      .direction(blueprints.Direction.OUT).count()
     for v in OPGIterator(v0.query()
                          .direction(blueprints.Direction.OUT)
                          .vertices().iterator())])
```
• Single API call instead
– Analysis in memory, in parallel
• Results can be persisted in Graph store and accessed from Oracle Database
– Big Data SQL, Connectors
Text Search through Apache Lucene/SolrCloud
Why?
– Contribute to the performance of graph traversal queries
– Constrained to be uniform in type among the indexed elements (vertices or edges)
Automatic Indexes
– Automatic update based on a subset of property keys
– Avoid linear scan to access an element by key/value
Manual Indexes
– Maintained by users
– Speed up text searches by a particular key/value pair
– Sub-graphs based on a set of (existing or temporary) properties
Visualizing Property Graphs (with Cytoscape)
• Cytoscape supports Property Graph
• Connects to Oracle Database, Oracle NoSQL Database, or Apache HBase
• Runs Page Rank, Clustering, Shortest Path, etc
• Alternative to command-line for in-memory analytics once base graph created
Additional Graph Visualization Partners
TomSawyer, Cambridge Intelligence, Linkurious, Vis.js, ...
Pattern matching using PGQL
• SQL-like syntax but with graph pattern description and property access
– Interactive (real-time) analysis
– Supporting aggregates, comparison, such as max, min, order by, group by
• Finding a given pattern in graph
– Fraud detection
– Anomaly detection
– Subgraph extraction
– ...
• Proposed for standardization by Oracle
– Specification available on-line
– Open-sourced front-end (i.e. parser)
https://github.com/oracle/pgql-lang
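For illustration, a small pattern query in the style of the published PGQL specification; the exact syntax follows PGQL 1.0 as an assumption, and the labels and property names are invented for this sketch:

```sql
SELECT m.name
WHERE (n:Account) -[:transfersTo]-> (m:Account),
      n.name = 'suspicious_account'
```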
Zeppelin Frontend
- Apache Zeppelin
- **Multi-purpose notebook** for data analysis and visualization
- Enables embedding interactive execution in the browser
- Renders execution results as plots and tables in the browser
- PGX provides a hook (interpreter) for Zeppelin integration
Interacting with the Graph
• Access through APIs
– Implementation of Apache Tinkerpop Blueprints APIs
– Based on Java, REST plus SolrCloud/Lucene support for text search
– SQL/PLSQL for property graph functions in Oracle Database
• Scripting
– Groovy, Python, Javascript, ...
– Zeppelin integration, Javascript (Node.js) language binding
• Graphical UIs
– Cytoscape, plug-in available for BDSG
– Commercial Tools such as TomSawyer Perspectives, Ogma
Enhancing ML and Data Analytics with Graphs
• Graph analysis can enhance the quality of ML and data analytics
• Graph representation helps discover hidden information about the data
– Multi-hop relationship between data entities
• This can be used to further improve predictive models in R, Advanced Analytics, machine learning
Distributed Graph Analysis Engine
Handling extremely large graphs
• Oracle Big Data Spatial and Graph uses very compact graph representation
– Can fit graph with ~23bn edges into one BDA node
• Distributed implementation scales beyond this
– Processing even larger graphs with several machines in a cluster (scale-out)
– Interconnected through fast network (Ethernet or, ideally, Infiniband)
• Integrated with YARN for resource management
– Same client interface, but not all APIs implemented yet
• Again, much faster than other implementations
– Comprehensive performance comparison with GraphX, GraphLab
Demo
We Have Many Property Graph Demos
Demo booth at Moscone West SOA 127 (Oracle’s Graph Database)
- Fraud Detection
- Graph Construction
- Notebooks
- Deep Learning Integration
- Graph Studio
- Network Intrusion Detection
- Bitcoin/Blockchain
- Recommender System
- Graph Visualization
Mazda Example
Who Is MAZDA... ?
1920 Founded as Toyo Cork Kogyo Co., Ltd.
1927 Renamed Toyo Kogyo Co., Ltd.
1929 Started the production of motorcycles
1984 Renamed Mazda Motor Corporation
2020 Centennial anniversary
Sales price was around $3.5 ~ $3.8 then.
1931 Three-wheeler truck
1960 Mazda R360
(The very first passenger vehicle)
## Corporate Profile
<table>
<thead>
<tr>
<th>Company name</th>
<th>Mazda Motor Corporation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Founded</td>
<td>January 30, 1920</td>
</tr>
<tr>
<td>Headquarters</td>
<td>Hiroshima / Japan</td>
</tr>
<tr>
<td>Revenue</td>
<td>$30 Billion (FYE Mar 2017)</td>
</tr>
<tr>
<td>Retail Volume</td>
<td>1.5 million units (same FY as above)</td>
</tr>
<tr>
<td>Number of employees</td>
<td>48,749 (consolidated) (same FY as above)</td>
</tr>
<tr>
<td>R&D center</td>
<td>5 locations (Hiroshima, Yokohama, US, Germany, China)</td>
</tr>
<tr>
<td>Production Site</td>
<td>3 factories in Japan (Hiroshima Plant (Head Office, Ujina), Hofu Plant (Nishinoura, Nakanoseki)); 7 factories overseas (China, Thailand, Mexico, Vietnam, Malaysia, Russia)</td>
</tr>
</tbody>
</table>
Mazda Plant
Mazda’s Problem
Imagine an auto manufacturer: each vehicle is built from parts, and each part is in turn constructed from smaller parts.
Mazda’s Problem
Data Structure
Relational ?
Many Business Domains
Finance
Sale / Marketing
Production
Bill Of Materials
...
Graph ?
Which data structure is better for which kind of data?
### Mazda’s PoC
(Slide shows a spreadsheet-style table of items Item0–Item7; the cell contents are not recoverable from the extraction.)
Mazda’s PoC (4th Stage)
Total number of Edges : 53,993,161
Total number of Nodes : 7,099,473
Mazda’s PoC (4th Stage)
Number of Nodes shown in blue; Number of Edges shown in black
- **N1**: Nodes = 2,798,431; Edges = 8,395,290
- **N2**: Nodes = 4,219,057
- **N3**: Nodes = 16,835,933
- **N4**: Nodes = 4,219,057
- **Na**: Nodes = 2,086; Edges = 39,213
- **Nb**: Nodes = 39,213; Edges = 39,213
- **Nc**: Nodes = 23,727; Edges = 6,027
- **Ne**: Nodes = 6,111; Edges = 553,773
- **Ni**: Nodes = 21,119,156; Edges = 317,814
- **Nm**: Nodes = 896,765; Edges = 2,291,840
Performance (PGQL Query)
<table>
<thead>
<tr>
<th>Nm</th>
<th>Num</th>
<th>Query time (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaaaaaaaaaa</td>
<td>62</td>
<td>43</td>
</tr>
<tr>
<td>bbbbbbbbbb</td>
<td>66</td>
<td>51</td>
</tr>
<tr>
<td>cccccccccc</td>
<td>78</td>
<td>46</td>
</tr>
</tbody>
</table>
Summary (Current Result)
• Performance is Good!
• Issues: Refinement of complex PGQL queries
• Next Step: On-going collaboration with Oracle Team
• Oracle Japan, US Development, Oracle Labs
Overview: Complete Graph Solution
• Distributed graph database
• Distributed in-memory analytics
• Graph Visualization
• Graph Query Language (PGQL)
• Standard interfaces
• Available on premise and Oracle Cloud
### Spatial and Graph Sessions
<table>
<thead>
<tr>
<th>Date/Time</th>
<th>Title</th>
<th>Location</th>
</tr>
</thead>
<tbody>
<tr>
<td>Monday, Oct. 2, 2:15 pm – 3:00 pm</td>
<td>Leveraging the Power of Graph Analytics to Fight Financial Crimes [CON2495]</td>
<td>Park Central (Floor 2) – Metropolitan III</td>
</tr>
<tr>
<td>Tuesday, Oct. 3, 5:45 pm – 6:30 pm</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
### Spatial and Graph Demos
<table>
<thead>
<tr>
<th>Date/Time</th>
<th>Title</th>
<th>Location</th>
</tr>
</thead>
<tbody>
<tr>
<td>Monday - Wednesday</td>
<td>Oracle’s Spatial Technologies for Database, Big Data, and the Cloud</td>
<td>Moscone West Exhibit Hall 1st floor Oracle Cloud Platform > Analytics & Big Data, pod SOA 131</td>
</tr>
<tr>
<td>Monday - Wednesday</td>
<td>Oracle’s Graph Database and Analytics for Database, Big Data, and the Cloud</td>
<td>Moscone West Exhibit Hall 1st floor Oracle Cloud Platform > Analytics & Big Data, pod SOA 127</td>
</tr>
</tbody>
</table>
Call for speakers is now open with rolling acceptances.
|
olmocr_science_pdfs
|
2024-11-24
|
2024-11-24
|
ed569e68b1df4e81a15e12c6b2a860d95d51e706
|
Design and Implementation of Deep Neural Networks
Wintersemester 2019/20
Salim Ullah
Chair for Processor Design,
Fakultät Informatik Technische Universität Dresden
1 Introduction
Deep learning (DL) is a subfield of machine learning comprising algorithms inspired by the structure and function of the brain. The design of a deep neural network (DNN) requires an in-depth understanding of the problem, analyzing application requirements and resource limitations. Based on this analysis, a DNN model is generated, trained, validated, and reiterated. There exist a variety of DNN frameworks for building, training, evaluating, and optimizing a DNN. Fig. 1 summarizes the utilization of state-of-the-art DNN frameworks. These scores are calculated by combining usage, search volume, related publications, and GitHub activity. TensorFlow is the second-generation machine learning framework that Google created and uses to design, build, and train deep learning models. It was built to run on multiple CPUs or GPUs and even mobile operating systems, and it has bindings in several languages such as Python, C++, and Java. It is the most popular deep learning framework today; Gmail, Uber, Airbnb, Nvidia, and many other prominent brands use it. We will also be using TensorFlow in these exercises for defining, training, and evaluating DNNs. These exercises are mainly based on the tutorials provided by https://www.tensorflow.org/tutorials, https://www.easy-tensorflow.com/tf-tutorials/basics/graph-and-session, https://www.guru99.com/what-is-tensorflow.html, https://www.datacamp.com/community/tutorials/tensorflow-tutorial

Figure 1: Deep Learning Framework Power Scores 2018
1.1 Installation of TensorFlow
TensorFlow is tested and supported on the following 64-bit systems:
1. Ubuntu 16.04 or later
2. Windows 7 or later
3. macOS 10.12.6 (Sierra) or later (no GPU support)
4. Raspbian 9.0 or later
Install TensorFlow with Python’s pip package manager as shown in Listing 1. We will be using Python 3.6. It is recommended to create a virtual environment for all the packages.
```shell
# Requires the latest pip
pip install --upgrade pip
# Installation of virtualenv
pip install virtualenv
virtualenv -p /usr/bin/python3.6 venv
# activate the environment
source venv/bin/activate
# install matplotlib and numpy
pip install numpy
pip install matplotlib
# install Tensorflow 2.0
pip install tensorflow
# Or preview build for CPU/GPU (unstable)
pip install tf-nightly
```
Listing 1: Installation of TensorFlow using pip
TensorFlow separates the definition of computations from their execution. Its architecture works in three parts:
1. Preprocessing the data
2. Building the GRAPH (model), it represents the data flow of the computations
3. Running a SESSION, it executes the operations in the graph
1.2 What is a Tensor
TensorFlow programs use a data structure called tensor to represent all the data. Any type of data you plan to use for your model can be stored in Tensors. A Tensor is a multi-dimensional array (0-D tensor: scalar, 1-D tensor: vector, 2-D tensor: matrix, and so on). Similar to NumPy ndarray objects, tensor objects of class tensorflow.Tensor have a data type and a shape. Additionally, Tensor objects can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tensorflow.add, tensorflow.matmul, tensorflow.linalg.inv etc.) that consume and produce tensorflow.Tensors. These operations automatically convert native Python types. For example, Listing 2 shows the conversion of Python data types to Tensors and matrix multiplication of two tensors. The expected output of each command is also shown in Listing 2.
```python
import tensorflow as tf
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([[1, 2, 3]]))
```
Line 1 of the listing imports TensorFlow, giving Python access to all of TensorFlow’s classes, methods, and symbols. The library is imported under the alias tf so that we can later write tf instead of typing out tensorflow each time. The most obvious differences between NumPy arrays and tf.Tensors are: (1) Tensors can be backed by accelerator memory (like GPU, TPU), and (2) Tensors are immutable.
Operator overloading is also supported
```python
print(tf.square(2) + tf.square(3))
```
```python
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
```
```
# Generated Output
# tf.Tensor(3, shape=(), dtype=int32)
# tf.Tensor([4 6], shape=(2,), dtype=int32)
# tf.Tensor(25, shape=(), dtype=int32)
# tf.Tensor(13, shape=(), dtype=int32)
# tf.Tensor([[2 3]], shape=(1, 2), dtype=int32)
# (1, 2)
# <dtype: 'int32'>
```
Listing 2: Examples of TensorFlow commands
1.3 Computational Graph
A computational graph (or graph in short) is a series of TensorFlow operations arranged into a graph of nodes. It means a graph is just an arrangement of nodes that represent the operations in your model. For example, for the function \( f(x, y) = x^2y + y + 2 \), the computational graph generated by TensorFlow would be something like as shown in Fig. 2. The graph is composed of a series of nodes connected to each other by edges. Each node in the graph is called op (short for operation). So there is one node for each operation; either for operations on tensors (like math operations) or generating tensors (like variables and constants). Each node takes zero or more tensors as inputs and produces a tensor as an output.
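The node idea can be made concrete with a small plain-Python sketch that builds and evaluates the graph for \( f(x, y) = x^2y + y + 2 \). This is purely illustrative: the Node class below is our own construction, not part of TensorFlow.

```python
# Minimal sketch of a computational graph (illustrative, not TensorFlow).
# Each node stores an operation and its input nodes; evaluation walks the graph.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # 'const', 'add', or 'mul'
        self.inputs = inputs  # upstream nodes
        self.value = value    # only set for constants

    def eval(self):
        if self.op == 'const':
            return self.value
        vals = [n.eval() for n in self.inputs]
        if self.op == 'add':
            return vals[0] + vals[1]
        if self.op == 'mul':
            return vals[0] * vals[1]
        raise ValueError(self.op)

# Build the graph for f(x, y) = x^2 * y + y + 2 with x = 3, y = 4
x = Node('const', value=3)
y = Node('const', value=4)
two = Node('const', value=2)
x2 = Node('mul', (x, x))
x2y = Node('mul', (x2, y))
f = Node('add', (Node('add', (x2y, y)), two))
print(f.eval())  # 3^2 * 4 + 4 + 2 = 42
```

Just as in TensorFlow, constructing the nodes does not compute anything; the value only appears when the graph is evaluated.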
Example 1.1. Let’s start with a basic arithmetic operation like addition to demonstrate a graph. The code adds two values, say \( a=2 \) and \( b=3 \), using TensorFlow. To do so, we need to call `tf.add()`. The `tf.add()` function has three arguments ‘x’, ‘y’, and ‘name’, where \( x \) and \( y \) are the values to be added together and name is the operation name, i.e. the name associated with the addition node in the graph. Listing 3 describes the example code for this operation. This code creates two input nodes (for inputs \( a=2 \) and \( b=3 \)) and one output node for the addition operation. Each operation in TensorFlow can be assigned an optional name, as shown by the name=‘Add’ segment. The output ‘c’ is a tensor of the same data type as the input tensors to the `tf.add` operation. When we print out the variable c (i.e. the output Tensor of the addition operation), it prints out the Tensor information: its name (Add), shape (an empty shape means scalar), and type (32-bit integer). However, it does not print out the result (2+3=5). This behavior arises because lines 2–5 only define

the computational graph. The sample computational graph is shown in Fig. 3. To evaluate the nodes, we must run the computational graph, either with a Function call (TensorFlow 2 only) or within a Session (TensorFlow 1.X only). The written code only generates the graph, which determines the expected sizes of Tensors and the operations to be executed on them. It does not assign a numeric value to any of the Tensors, i.e., TensorFlow does not execute the graph unless it is specified to do so with a function/session.
**Session: Effective for TensorFlow 1.X**
To compute anything, a graph must be launched in a session. Technically, session places the graph ops on hardware such as CPUs or GPUs and provides methods to execute them. In our example, to run the graph and get the value for ‘c’ the code in Listing 4 will create a session and execute the graph by running ‘c’. This code creates a Session object (assigned to sess), and then (the second line) invokes its run method to execute enough of the computational graph to evaluate output ‘c’. This means that it only runs that part of the graph which is necessary to get the value of c. In this example, it runs the whole graph. Remember to close the session at the end of the session. That is done using the last line in the above code. The code in Listing 5 does the same thing and is more commonly used. The only difference is that there is no need to close the session at the end as it gets closed automatically.
```python
import tensorflow as tf
a = 2
b = 3
c = tf.add(a, b, name='Add')
print(c)
```
Listing 3: Addition of two tensors
```python
# Generated Output
# Tensor("Add:0", shape=(), dtype=int32)
```

Figure 3: Generated graph for Listing 3 visualized in Tensorboard
```python
sess = tf.Session()
print(sess.run(c))
sess.close()
```
Listing 4: Session to run graph defined in Listing 3
```python
# Generated Output
# 5
```
```python
with tf.Session() as sess:
    print(sess.run(c))
```
Listing 5: Session to run graph defined in Listing 3
**Function call: Effective for TensorFlow 2**
A session.run() call is almost like a function call: You specify the inputs and the function to be called, and you get back a set of outputs. In TensorFlow 2.0, you can decorate a Python function using tf.function() to mark it for JIT compilation so that TensorFlow runs it as a single graph (Functions 2.0 RFC). This mechanism allows TensorFlow 2.0 to gain all of the benefits of graph mode.
```python
import tensorflow as tf
a = 2
b = 3

@tf.function
def f():
    c = tf.add(a, b, name='Add')
    return c

print(f())
print(f().numpy())
```
Listing 6: Graph generation and execution using TensorFlow 2 for the graph defined in Listing 3
Example 1.2. Consider the code in Listing 7 and the corresponding generated computational graph in Fig. 4. Given this graph, if we fetch the pow_op operation, it will first run add_op and mul_op to get their output tensors and then run pow_op on them to compute the required output value. In other words, useless_op will not be executed, as its output tensor is not used in executing the pow_op operation. This saves a significant amount of time when dealing with huge networks with hundreds or thousands of operations.
```python
import tensorflow as tf
x = 2
y = 3
add_op = tf.add(x, y, name='Add')
mul_op = tf.multiply(x, y, name='Multiply')
pow_op = tf.pow(add_op, mul_op, name='Power')
useless_op = tf.multiply(x, add_op, name='Useless')
with tf.Session() as sess:
    pow_out, useless_out = sess.run([pow_op, useless_op])
```
Listing 7: Computational graph generation and execution for different operations using TensorFlow 1.X
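This fetch-only-what-you-need behavior can be imitated in plain Python. The op helper below is our own illustrative construction, not a TensorFlow API; it records which nodes actually run when one output is fetched.

```python
# Illustrative sketch (plain Python): fetching one output evaluates only the
# nodes it depends on, mirroring what fetching pow_op alone would do.
executed = []

def op(name, fn, *deps):
    def run():
        executed.append(name)
        return fn(*[d() for d in deps])
    return run

x, y = (lambda: 2), (lambda: 3)
add_op = op('Add', lambda a, b: a + b, x, y)
mul_op = op('Multiply', lambda a, b: a * b, x, y)
pow_op = op('Power', lambda a, b: a ** b, add_op, mul_op)
useless_op = op('Useless', lambda a, b: a * b, x, add_op)

print(pow_op())  # 5 ** 6 = 15625
print(executed)  # ['Power', 'Add', 'Multiply']; 'Useless' never ran
```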
Listing 8 shows the implementation of the graph in Fig. 4 in TensorFlow 2. A common usage pattern in TensorFlow 1.X was the “kitchen sink” strategy, where the union of all possible computations was preemptively laid out, and then selected tensors were evaluated via session.run(). In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it’s not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations.
```python
@tf.function
def f():
    add_op = tf.add(x, y, name='Add')
    mul_op = tf.multiply(x, y, name='Multiply')
    pow_op = tf.pow(add_op, mul_op, name='Power')
    useless_op = tf.multiply(x, add_op, name='Useless')
    return pow_op, useless_op

print(f())
```
Listing 8: Computational graph generation and execution for different operations using TensorFlow 2
Example 1.3. Fit a linear model
You will create a simple linear model, \( f(x) = x \ast W + b \), which has two variables: \( W \) (weights) and \( b \) (bias). You will synthesize data such that a well-trained model would have \( W = 3.0 \) and \( b = 2.0 \). The following concepts will be used for building the model:
- **Variables:** Use tf.Variable to represent weights in a model. A tf.Variable object stores a value and implicitly reads from this stored value. There are operations (tf.assign_sub, tf.scatter_update, etc.) that manipulate the value stored in a TensorFlow variable. Trainable variables (created by tf.Variable or tf.compat.v1.get_variable, where trainable=True is default in both cases) are automatically watched. Tensors can be manually watched by invoking the watch method on this context manager.
- **Gradient tapes:** TensorFlow provides the tf.GradientTape API for automatic differentiation - computing the gradient of a computation with respect to its input variables. Listing 9 provides an example of automatic differentiation using gradient tapes.
```python
import tensorflow as tf
x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    y = x * x
dy_dx = g.gradient(y, x)  # Will compute to 6.0
```
Listing 9: Gradient tape example
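The value the tape reports can be sanity-checked numerically with a central difference in plain Python (numeric_grad is our own helper, not a TensorFlow function): the derivative of \( x^2 \) at \( x = 3 \) is indeed 6.

```python
# Central-difference check of the gradient in Listing 9: d(x*x)/dx at x=3 is 6.
def numeric_grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

g = numeric_grad(lambda x: x * x, 3.0)
print(round(g, 4))  # 6.0
```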
Building and training a model consists of the following steps:
- Define the model.
- Define a loss function.
- Obtain training data.
- Run through the training data and use an "optimizer" to adjust the variables to fit the data.
**Define the model:** Let’s define a simple class to encapsulate the variables and the computation. Consider Listing 10:
```python
import tensorflow as tf
import matplotlib.pyplot as plt

class Model(object):
    def __init__(self):
        # Initialize the weights to '5.0' and the bias to '0.0'
        # In practice, these should be initialized to random values
        # (for example, with 'tf.random.normal')
        self.W = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        return self.W * x + self.b

model = Model()
assert model(3.0).numpy() == 15.0
```
Listing 10: Model definition for linear model
Define a loss function: A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Let’s use the standard L2 loss, also known as the least squares error. Consider Listing 11:
```python
def loss(predicted_y, target_y):
    return tf.reduce_mean(tf.square(predicted_y - target_y))
```
Listing 11: Loss function for linear model
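For clarity, the same L2 loss can be written with the standard library alone (l2_loss is an illustrative helper, not the TensorFlow version above): it is the mean of the squared differences between predictions and targets.

```python
# The L2 loss of Listing 11 written in plain Python for clarity.
def l2_loss(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

print(l2_loss([1.0, 2.0], [0.0, 0.0]))  # (1 + 4) / 2 = 2.5
```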
Obtain training data: First, synthesize the training data by adding random Gaussian (Normal) noise to the outputs of a linear function of the inputs. The corresponding code is available in Listing 12:
```python
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
```
Listing 12: Training data for linear model
Before training the model, visualize the loss value by plotting the model’s predictions in red and the training data in blue. Consider Listing 13 and Fig. 5:
```python
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
```
Listing 13: Pre-Training loss of the model’s prediction
Define a training loop: With the network and training data, train the model using gradient descent to update the weights variable (W) and the bias variable (b) to reduce the loss. There are many variants of the gradient descent scheme that are captured in tf.train.Optimizer. But here you will implement the basic math yourself with the help of tf.GradientTape for automatic differentiation and tf.assign_sub for decrementing a value (which combines tf.assign and tf.sub). Refer to Listing 14, Fig. 6 and Fig. 7:
```python
def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    dW, db = t.gradient(current_loss, [model.W, model.b])
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)

# Repeatedly run through the training data
model = Model()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
    Ws.append(model.W.numpy())
    bs.append(model.b.numpy())
    current_loss = loss(model(inputs), outputs)
    train(model, inputs, outputs, learning_rate=0.1)
    print('Epoch %2d: W =%1.2f b =%1.2f, loss =%2.5f' %
          (epoch, Ws[-1], bs[-1], current_loss))

# Let's plot it all
plt.plot(epochs, Ws, 'r', epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--', [TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.xlabel('Epochs')
plt.ylabel('Value')
plt.show()

# Plot again the outputs of the trained model
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.legend(['Training Data', 'Prediction'])
plt.xlabel('Inputs')
plt.ylabel('Outputs')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())

# Generated loss values
# Epoch 0: W=5.00 b=0.00, loss=8.96391
# Epoch 1: W=4.59 b=0.39, loss=6.11565
# Epoch 2: W=4.27 b=0.70, loss=4.28201
# Epoch 3: W=4.01 b=0.95, loss=3.10103
# Epoch 4: W=3.80 b=1.16, loss=2.34008
# Epoch 5: W=3.64 b=1.33, loss=1.84957
# Epoch 6: W=3.51 b=1.48, loss=1.53327
# Epoch 7: W=3.41 b=1.57, loss=1.32923
# Epoch 8: W=3.33 b=1.66, loss=1.19756
# Epoch 9: W=3.26 b=1.73, loss=1.12257
# Current loss: 1.057693
```
Listing 14: Training loop for training W and b
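The same fitting procedure can also be reproduced with the standard library alone. The gradients of the L2 loss are dL/dW = 2 * mean(x * (W*x + b - y)) and dL/db = 2 * mean(W*x + b - y). The sketch below is illustrative and independent of the TensorFlow code above; with enough steps, W and b approach the true values 3.0 and 2.0.

```python
import random

# Plain-Python gradient descent for f(x) = W*x + b (illustrative sketch).
random.seed(0)
TRUE_W, TRUE_b = 3.0, 2.0
xs = [random.gauss(0, 1) for _ in range(1000)]
ys = [x * TRUE_W + TRUE_b + random.gauss(0, 1) for x in xs]

W, b = 5.0, 0.0     # same starting point as the Model class
lr = 0.1
for _ in range(30):
    errs = [W * x + b - y for x, y in zip(xs, ys)]
    dW = 2 * sum(e * x for e, x in zip(errs, xs)) / len(xs)
    db = 2 * sum(errs) / len(xs)
    W -= lr * dW
    b -= lr * db

print(round(W, 1), round(b, 1))  # W and b approach 3.0 and 2.0
```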
2 Building and Training an Artificial Neural Network
The Iris classification
A machine learning program could classify flowers based on photographs. We are going to classify Iris flowers based on the length and width measurements of their sepals and petals. The Iris genus encompasses about 300 species, but our program will only classify the following three:
- Iris setosa
- Iris virginica
- Iris versicolor
We will be using the following steps:
- Import and parse the dataset (using Datasets API).
- Select the type of model.
- Train the model (using Keras API).
- Evaluate the model’s effectiveness.
Configure imports and download the training dataset
A dataset of 120 Iris flowers with the sepal and petal measurements is already available as a CSV file. Listing 15 imports the required packages and downloads the training data. The first line in the CSV file is a header containing information about the dataset:
- There are 120 total examples. Each example has four features and one of three possible label names.
- The first four fields are features: these are the characteristics of an example. Here, the fields hold float numbers representing flower measurements.
- The last column is the label: this is the value we want to predict. For this dataset, it’s an integer value of 0, 1, or 2 that corresponds to a flower name.
Figure 7: Output of the linear model after training
```python
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
import tensorflow as tf

# Download training data
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
                                           origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
```
Listing 15: Importing packages and downloading the training data

Create a tf.data.Dataset for the model

TensorFlow’s Dataset API handles many common cases for loading data into a model. Since the dataset is a CSV-formatted text file, use the make_csv_dataset function to parse the data into a suitable format. Each label is associated with a string name (for example, “setosa”), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation:
- 0: Iris setosa
- 1: Iris versicolor
- 2: Iris virginica

```python
# column order in CSV file
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("Features: {}".format(feature_names))
print("Label: {}".format(label_name))
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']

batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
    train_dataset_fp,
    batch_size,
    column_names=column_names,
    label_name=label_name,
    num_epochs=1)
```
Listing 16: Converting data from CSV format to Tensorflow dataset format
The `make_csv_dataset` function returns a tf.data.Dataset of (features, label) pairs, where features is a dictionary of the form {'feature_name': value}. These Dataset objects are iterable. Let’s look at a batch of features in Listing 17.
```python
features, labels = next(iter(train_dataset))
```
Listing 17: Printing a sample of the dataset
To simplify the model building step, create a function to repackage the features dictionary into a single array with shape (batch_size, num_features). This function uses the tf.stack method, which takes values from a list of tensors and creates a combined tensor along the specified dimension, as described in Listing 18. The features element of the Dataset is now an array with shape (batch_size, num_features). Listing 18 also shows an example:
```python
def pack_features_vector(features, labels):
    features = tf.stack(list(features.values()), axis=1)
    return features, labels

train_dataset = train_dataset.map(pack_features_vector)

# Example
features, labels = next(iter(train_dataset))
```
Listing 18: Packing the features dictionary into a single array
Neural Network Model Selection
We need to select the kind of model to train. Fig. 8 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer. When this model is trained and fed an unlabeled example, it yields three predictions: one per species, giving the likelihood that the flower belongs to that Iris species.
Create a model using Keras
The TensorFlow tf.keras API is the preferred way to create models and layers. The tf.keras.Sequential model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two Dense layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer’s input_shape parameter corresponds to the number of features from the dataset, and is required. The model is defined in Listing 19. The activation function determines the output shape of each node in the layer.
```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),  # input shape required
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])
```
Listing 19: Keras sequential model
Using the model
Let’s have a quick look at what this model does to a batch of features. Listing 20 provides a batch of features to the model and prints the output of the last layer (lines 1 and 2). Here, each example returns a logit for each class.
Logits are the vector of raw (non-normalized) predictions that a classification model generates, which is ordinarily then passed to a normalization function. If the model is solving a multi-class classification problem, logits typically become the input to the softmax function (line 6). The softmax function then generates a vector of (normalized) probabilities with one value for each possible class. Taking the tf.argmax (line 9) across classes gives us the predicted class index. But the model hasn’t been trained yet, so these aren’t good predictions:
```python
predictions = model(features)
print(predictions[:5])
# Conversion of logits to probabilities using softmax
# take maximum of the probabilities to find the identified class
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print("Labels: {}".format(labels))
```
Listing 20: Using the model without training
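What the softmax and argmax steps of Listing 20 do can be spelled out in plain Python (the softmax helper below is our own sketch, not tf.nn.softmax): logits are exponentiated and normalized to probabilities, and the index of the largest probability is the predicted class.

```python
import math

# Plain-Python softmax and argmax (illustrative sketch, not TensorFlow).
def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print([round(p, 3) for p in probs])      # [0.659, 0.242, 0.099], sums to 1
print(probs.index(max(probs)))           # predicted class index: 0
```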
Train the model
Training is the stage of machine learning when the model is gradually optimized, or the model learns the dataset. The Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels.
As we did previously, we define the loss and gradient functions. Both the training and evaluation stages need to calculate the model’s loss. This measures how far off a model’s predictions are from the desired label, in other words, how badly the model is performing. We want to minimize, or optimize, this value. Our model will calculate its loss using the tf.keras.losses.SparseCategoricalCrossentropy function, which takes the model’s class probability predictions and the desired label, and returns the average loss across the examples. Consider Listing 21.
Line 1 defines the loss object. This loss object is used by the custom loss function (lines 3–5). Lines 8 and 9 show an example of using the loss function. Use the tf.GradientTape context to calculate the gradients used to optimize your model (lines 12–15). Finally, an optimizer applies the computed gradients to the model’s variables to minimize the loss function. TensorFlow has many optimization algorithms available for training. This model uses tf.keras.optimizers.SGD, which implements the stochastic gradient descent (SGD) algorithm. The learning rate sets the step size for each iteration.
```python
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
    y_ = model(x)
    return loss_object(y_true=y, y_pred=y_)

# Example of using loss function
l = loss(model, features, labels)
print("Loss test: {}".format(l))

# Gradient definition using tf.GradientTape
def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)

# Optimizer for reducing the loss using gradients
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
```
Listing 21: Defining the loss gradient and optimization functions
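The loss object's behavior can be illustrated with the standard library: with from_logits=True, the per-example loss is -log(softmax(logits)[true_class]), averaged over the batch. The helpers below are our own sketch, not the Keras implementation.

```python
import math

# Plain-Python view of SparseCategoricalCrossentropy(from_logits=True):
# loss = -log(softmax(logits)[true_class]), averaged over examples.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_cce(logits_batch, labels):
    losses = [-math.log(softmax(l)[y]) for l, y in zip(logits_batch, labels)]
    return sum(losses) / len(losses)

# A confident correct prediction gives a low loss; a wrong one gives a high loss.
print(round(sparse_cce([[5.0, 0.0, 0.0]], [0]), 4))  # near 0
print(round(sparse_cce([[5.0, 0.0, 0.0]], [2]), 4))  # large
```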
Training loop
A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:
- Iterate each epoch. An epoch is one pass through the dataset.
- Within an epoch, iterate over each example in the training Dataset grabbing its features (x) and label (y).
- Using the example’s features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model’s loss and gradients.
- Use an optimizer to update the model’s variables.
- Keep track of some stats for visualization.
- Repeat for each epoch.
These steps are performed by Listing 22. While it’s helpful to print out the model’s training progress, it’s often more helpful to visualize it. We can create basic charts using the matplotlib module, as shown in Listing 23.
Note: rerunning this cell uses the same model variables.
```python
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []

num_epochs = 201
for epoch in range(num_epochs):
    epoch_loss_avg = tf.keras.metrics.Mean()
    epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

    # Training loop - using batches of 32
    for x, y in train_dataset:
        # Optimize the model
        loss_value, grads = grad(model, x, y)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        # Track progress
        epoch_loss_avg(loss_value)   # Add current batch loss
        epoch_accuracy(y, model(x))

    # End epoch
    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())

    if epoch % 50 == 0:
        print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(
            epoch, epoch_loss_avg.result(), epoch_accuracy.result()))
```
Listing 22: Training loop implementation
```python
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
```
Listing 23: Visualize the loss function and accuracy
**Setup the test dataset**
Evaluating the model is similar to training it. The biggest difference is that the examples come from a separate test set rather than the training set. The setup for the test Dataset is similar to the setup for the training Dataset. Unlike the training stage, the model only evaluates a single epoch of the test data. In Listing 24 we iterate over each example in the test set and compare the model’s prediction against the actual label. This is used to measure the model’s accuracy across the entire test set:
```python
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
    test_fp,
    batch_size,
    column_names=column_names,
    label_name='species',
    num_epochs=1,
    shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
    logits = model(x)
    prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
    test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
```
Listing 24: Evaluating the trained model
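The core of Listing 24 is the argmax-over-logits step: the predicted class is the index of the largest logit, and accuracy is the fraction of predictions that match the labels. A plain-Python sketch of that computation, with made-up logit values standing in for real model output:

```python
def argmax(row):
    """Index of the largest value in a list (what tf.argmax does per row)."""
    return max(range(len(row)), key=lambda i: row[i])

# Illustrative logits for a batch of three examples (not real model output)
batch_logits = [
    [2.1, -0.5, 0.3],   # predicted class 0
    [0.1,  1.9, 0.2],   # predicted class 1
    [0.0,  0.4, 3.2],   # predicted class 2
]
labels = [0, 1, 1]

predictions = [argmax(row) for row in batch_logits]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print("Test set accuracy: {:.3%}".format(accuracy))  # 2 of 3 correct
```

`tf.keras.metrics.Accuracy` does exactly this bookkeeping across all batches, which is why Listing 24 feeds it the argmaxed predictions rather than the raw logits.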
Combining all the pieces together, Listing 25 shows the complete code for data downloading, model development, training and evaluation.
```python
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
import tensorflow as tf
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url)
# column order in CSV file
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
    train_dataset_fp,
    batch_size,
    column_names=column_names,
    label_name=label_name,
    num_epochs=1)
def pack_features_vector(features, labels):
    """Pack the features into a single array."""
    features = tf.stack(list(features.values()), axis=1)
    return features, labels
train_dataset = train_dataset.map(pack_features_vector)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),  # input shape required
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)])
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
    y_ = model(x)
    return loss_object(y_true=y, y_pred=y_)
def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
## Note: Rerunning this cell uses the same model variables
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
    epoch_loss_avg = tf.keras.metrics.Mean()
    epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    for x, y in train_dataset:
        # Optimize the model
        loss_value, grads = grad(model, x, y)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # Track progress
        epoch_loss_avg(loss_value)  # Add current batch loss
        # Compare predicted label to actual label
        epoch_accuracy(y, model(x))
    # End epoch
    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())
    if epoch % 50 == 0:
        print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result()))
```
Listing 25: Final code for all pieces of the model
Notes and Updates
- Super quick breeze through Chapter 3
- You learned most (all) of this in CSE140
- Homework Due Tues! (no lab Mon)
- Candy for latency/throughput people
- Homework due NO LATER than 2:05pm
- No handing it in at the end of class
- QUESTIONS!
Do you use the web system?
- 1) In Class:
- Yes, why?
- No, why not?
- 2) After class:
- Yes, why?
- No, why not?
- 3) Before class?
Performance Beyond one Program
<table>
<thead>
<tr>
<th>Program</th>
<th>Computer A</th>
<th>Computer B</th>
<th>Computer C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Program 1</td>
<td>1</td>
<td>10</td>
<td>20</td>
</tr>
<tr>
<td>Program 2</td>
<td>1000</td>
<td>100</td>
<td>20</td>
</tr>
<tr>
<td>Total Time</td>
<td>1001</td>
<td>110</td>
<td>40</td>
</tr>
</tbody>
</table>
- Which machine is fastest?
How to summarize performance
- Arithmetic Mean
\[
\frac{1}{n} \sum_{i=1}^{n} \text{Time}_i
\]
- Weighted Arithmetic Mean
\[
\sum_{i=1}^{n} \text{Time}_i \times \text{Weight}_i \quad (\text{Weights total to 1})
\]
- Harmonic Mean
\[
\frac{n}{\sum_{i=1}^{n} \frac{1}{\text{Rate}_i}}
\]
- Geometric Mean
\[
\sqrt[n]{\prod_{i=1}^{n} \text{Execution Time Ratio}_i}
\]
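These four summary statistics can be sketched directly in Python. Using the Program 1/Program 2 times for machines A, B, and C from the table above, this reproduces the table's AM and GM rows up to rounding:

```python
from math import prod

# Execution times for Program 1 and Program 2 on machines A, B, C
times = {'A': [1, 1000], 'B': [10, 100], 'C': [20, 20]}

def arithmetic_mean(ts):
    return sum(ts) / len(ts)

def weighted_mean(ts, ws):
    """Weighted arithmetic mean; the weights must total 1."""
    return sum(t * w for t, w in zip(ts, ws))

def harmonic_mean(rates):
    return len(rates) / sum(1 / r for r in rates)

def geometric_mean(ts):
    return prod(ts) ** (1 / len(ts))

for machine, ts in times.items():
    print(machine,
          arithmetic_mean(ts),                 # AM with equal weights
          weighted_mean(ts, [0.999, 0.001]),   # AM with W(another)
          round(geometric_mean(ts), 1))        # GM row of the table
```

Note how the geometric mean rates A and B identically (31.6 each) even though their time profiles are completely different, which is exactly the point the next slides make.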
Performance Beyond one Program
<table>
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>W(1)</th>
<th>W(another)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Program 1</td>
<td>1</td>
<td>10</td>
<td>20</td>
<td>.5</td>
<td>.999</td>
</tr>
<tr>
<td>Program 2</td>
<td>1000</td>
<td>100</td>
<td>20</td>
<td>.5</td>
<td>.001</td>
</tr>
<tr>
<td>AM: W(1)</td>
<td>500</td>
<td>55</td>
<td>20</td>
<td></td>
<td></td>
</tr>
<tr>
<td>AM: W(another)</td>
<td>2</td>
<td>10</td>
<td>20</td>
<td></td>
<td></td>
</tr>
<tr>
<td>GM</td>
<td>31.6</td>
<td>31.6</td>
<td>20</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Summarizing Performance
- Even an “unweighted” arithmetic mean IS weighted
- The longer the running time, the greater the impact of that code on the mean
- Geometric means of normalized execution times are consistent no matter which machine is faster
- Ratios of geometric means always give equal weights to all benchmarks - no matter execution times
- Geometric mean does not necessarily predict execution time for any mix of the programs
Another way of “measuring” performance:
- It’s hard to convince manufacturers to run your program (unless you’re a BIG customer)
- A benchmark is a set of programs that are representative of a class of problems.
- measure one feature of system
- e.g. memory accesses or communication speed
- most compute-intensive part of applications
- e.g. Linpack and NAS kernel b’marks (for supercomputers)
- Full application:
- e.g. SPEC CPU (int and float) (for Unix workstations)
- Other suites for databases, web servers, graphics,...
SPEC89 and the compiler
Darker bars show performance with compiler improvements (same machine as light bars)
SPEC on Pentium III and Pentium 4
- What do you notice?
Other SPECs
- HPC (High Performance Computing)
- Quantum Chemistry, Weather Modeling, Seismic
- JVM (Java)
- JAppletServer
- Web
- Mail
- JBB Java Business Benchmark
- SFS System File Server
Test many things other than the CPU speed - test entire system performance
Performance Beyond the CPU
- We (and this book) concentrate on the CPU as a lone entity
- For a while (Chap 7,8,9)
- Memory: A very important part
- The CPU can only do work if it has data to work on
- Latency and Bandwidth were our metrics
- Due to modern processor design, improving speed of integer operations by 10% will (likely) NOT speed up ANYTHING!
Key Points
- Be careful how you specify performance
- Use "n times faster" carefully; practice!
- Execution time = Instructions * CPI * Cycle time
- Make the common case FAST!
- Amdahl's Law
- Use real applications to measure performance
- Make sure their workload represents the one you care about!
- Use geometric mean to report performance on suites of programs or benchmarks
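Two of these key points, the execution-time equation and Amdahl's Law, can be sketched with illustrative numbers (all values below are made up for the example):

```python
def execution_time(instructions, cpi, cycle_time_s):
    """Execution time = Instructions * CPI * Cycle time."""
    return instructions * cpi * cycle_time_s

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Amdahl's Law: overall speedup when only part of the workload
    is made faster."""
    return 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# 1e9 instructions at CPI 2 with a 1 ns cycle -> 2 seconds
print(execution_time(1e9, 2, 1e-9))           # 2.0

# Making 50% of a program 10x faster gives well under 2x overall
print(round(amdahl_speedup(0.5, 10), 2))      # 1.82
```

This is why "make the common case fast" matters: the uncommon case caps the achievable speedup no matter how much the enhanced part improves.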
Chapter 4: Arithmetic for Computers
NOTE: Much of this material you should already know from CSE140 (up through 3.5)
*THIS IS JUST A REVIEW*
Binary Numbers
Consider a 4-bit binary number
<table>
<thead>
<tr>
<th>Decimal</th>
<th>Binary</th>
<th>Decimal</th>
<th>Binary</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0000</td>
<td>4</td>
<td>0100</td>
</tr>
<tr>
<td>1</td>
<td>0001</td>
<td>5</td>
<td>0101</td>
</tr>
<tr>
<td>2</td>
<td>0010</td>
<td>6</td>
<td>0110</td>
</tr>
<tr>
<td>3</td>
<td>0011</td>
<td>7</td>
<td>0111</td>
</tr>
</tbody>
</table>
Examples of binary arithmetic:
\[
3 + 2 = 5 \qquad\qquad 3 + 3 = 6
\]
\[
\begin{array}{r}
0011 \\
+\,0010 \\
\hline
0101
\end{array}
\qquad\qquad
\begin{array}{r}
0011 \\
+\,0011 \\
\hline
0110
\end{array}
\]
What about negative integers?
- Desirable features of a number system ...
- obvious representation of 0,1,2...
- uses adder for addition
- easy to recognize exceptions (like overflow)
- single value of 0
- equal coverage of positive and negative numbers
- easy detection of sign
- easy negation
Some Alternatives
• Sign Magnitude -- MSB is sign bit
-1 → 1001
-5 → 1101
• One’s complement -- flip all bits to negate
-1 → 1110
-5 → 1010
Two’s Complement Representation
- Positive numbers: normal binary representation
- Negative numbers: flip bits (0 ↔ 1), then add 1
<table>
<thead>
<tr>
<th>Decimal</th>
<th>Two’s Complement Binary</th>
</tr>
</thead>
<tbody>
<tr>
<td>-8</td>
<td>1000*</td>
</tr>
<tr>
<td>-7</td>
<td>1001</td>
</tr>
<tr>
<td>-6</td>
<td>1010</td>
</tr>
<tr>
<td>-5</td>
<td>1011</td>
</tr>
<tr>
<td>-4</td>
<td>1100</td>
</tr>
<tr>
<td>-3</td>
<td>1101</td>
</tr>
<tr>
<td>-2</td>
<td>1110</td>
</tr>
<tr>
<td>-1</td>
<td>1111</td>
</tr>
<tr>
<td>0</td>
<td>0000</td>
</tr>
<tr>
<td>1</td>
<td>0001</td>
</tr>
<tr>
<td>2</td>
<td>0010</td>
</tr>
<tr>
<td>3</td>
<td>0011</td>
</tr>
<tr>
<td>4</td>
<td>0100</td>
</tr>
<tr>
<td>5</td>
<td>0101</td>
</tr>
<tr>
<td>6</td>
<td>0110</td>
</tr>
<tr>
<td>7</td>
<td>0111*</td>
</tr>
</tbody>
</table>
Smallest 4-bit number: -8
Biggest 4-bit number: 7
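The "flip bits, then add 1" rule can be sketched in a few lines of Python; this reproduces the 4-bit table above:

```python
def to_twos_complement(value, bits=4):
    """Encode a signed integer in two's complement: positive numbers use
    the normal binary representation; negative numbers flip the bits of
    the magnitude, then add 1 (modulo 2^bits)."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    assert lo <= value <= hi, "value out of range for this width"
    if value >= 0:
        return format(value, '0{}b'.format(bits))
    flipped = (~(-value)) & ((1 << bits) - 1)   # flip the magnitude's bits
    return format((flipped + 1) & ((1 << bits) - 1), '0{}b'.format(bits))

print(to_twos_complement(-5))   # 1011
print(to_twos_complement(-8))   # 1000  (smallest 4-bit number)
print(to_twos_complement(7))    # 0111  (biggest 4-bit number)
```

Walking through -5: the magnitude 5 is 0101, flipping gives 1010, and adding 1 gives 1011, matching the table.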
Two's Complement Arithmetic:
So cool for adders
Uses simple adder for + and - numbers
\[
\begin{align*}
7 + (-6) &= 1 \\
3 + (-5) &= -2
\end{align*}
\]
<table>
<thead>
<tr>
<th>Decimal</th>
<th>2's Complement Binary</th>
<th>Decimal</th>
<th>2's Complement Binary</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0000</td>
<td>-1</td>
<td>1111</td>
</tr>
<tr>
<td>1</td>
<td>0001</td>
<td>-2</td>
<td>1110</td>
</tr>
<tr>
<td>2</td>
<td>0010</td>
<td>-3</td>
<td>1101</td>
</tr>
<tr>
<td>3</td>
<td>0011</td>
<td>-4</td>
<td>1100</td>
</tr>
<tr>
<td>4</td>
<td>0100</td>
<td>-5</td>
<td>1011</td>
</tr>
<tr>
<td>5</td>
<td>0101</td>
<td>-6</td>
<td>1010</td>
</tr>
<tr>
<td>6</td>
<td>0110</td>
<td>-7</td>
<td>1001</td>
</tr>
<tr>
<td>7</td>
<td>0111</td>
<td>-8</td>
<td>1000</td>
</tr>
</tbody>
</table>
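The "so cool for adders" point is that a plain unsigned adder handles both signs correctly once the result wraps at the word width. A sketch of a 4-bit add checking the two examples above:

```python
def add4(a, b):
    """Add two 4-bit two's-complement values using a plain unsigned adder:
    the sum simply wraps at 4 bits, then the top bit decides the sign."""
    raw = (a + b) & 0b1111                       # the adder just wraps
    return raw - 16 if raw & 0b1000 else raw     # reinterpret as signed

print(add4(7, -6))   # 1
print(add4(3, -5))   # -2
print(add4(7, 1))    # -8  (overflow: 0111 + 0001 = 1000)
```

The last line shows why overflow detection still matters: 7 + 1 wraps to the most negative value, exactly the exception case the "desirable features" slide asks the number system to make easy to recognize.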
Arithmetic -- The heart of instruction execution
A One Bit ALU
- This 1-bit ALU will perform AND, OR, and ADD
**The Disadvantage of Ripple Carry**
- **Simple Adders are Ripple Carry**
- The carry bit may have to propagate from LSB to MSB
- Worst case delay for an N-bit RC adder:
![Four 1-bit ALUs chained: each stage's CarryOut feeds the next stage's CarryIn, so the carry ripples from bit 0 to bit 3]
**A Partial Carry Lookahead Adder**
- It is very expensive to build a "full" carry lookahead adder
- Just imagine the length of the equation for Cin31
- Common practices:
- Connect several N-bit Lookahead Adders to form a big adder
- Example: connect four 8-bit carry lookahead adders to form a 32-bit partial carry lookahead adder
![Four 8-bit carry-lookahead adders chained to form a 32-bit partial carry-lookahead adder]
*Worst-case delay??*
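A rough delay model answers the "worst-case delay?" question. The stage counts here are illustrative assumptions (about 2 gate delays per 1-bit full adder, a few gate delays per lookahead block), not exact figures for any real circuit:

```python
def ripple_carry_delay(n_bits, gate_delays_per_stage=2):
    """Worst case for an N-bit ripple-carry adder: the carry may have to
    propagate through every stage, ~2 gate delays per full adder."""
    return n_bits * gate_delays_per_stage

def partial_lookahead_delay(n_bits, block_bits=8, block_delay=4):
    """Rough model of a partial carry-lookahead adder: each block settles
    in a few gate delays, and carries still ripple between blocks."""
    return (n_bits // block_bits) * block_delay

print(ripple_carry_delay(32))        # 64 gate delays
print(partial_lookahead_delay(32))   # 16 gate delays
```

Even with these crude numbers the point stands: grouping bits into lookahead blocks cuts the worst-case carry path by a large constant factor, which directly shortens the achievable clock cycle.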
Key Points
- Two’s complement is standard +/- numbers.
- Achieves almost all of our goals.
- CPU clock speed is driven by adder delay (and mult and div)
- Adder is used in loads, stores and branches as well as arithmetic.
- Thus, using a carry-lookahead adder is important!
Chapter 5:
The Processor: Datapath and Control
The Single Cycle Processor
The Multicycle Processor
Note: Some of the material in this lecture are COPYRIGHT 1998 MORGAN KAUFMANN PUBLISHERS, INC. ALL RIGHTS RESERVED.
Figures may be reproduced only for classroom or personal education use in conjunction with our text and only when the above line is included.
The Performance Big Picture
- Execution Time = Instrs * CPI * Cycle Time
- Processor design (datapath and control) will determine:
- Clock cycle time
- Clock cycles per instruction
- Starting today:
- Single cycle processor:
- Advantage: CPI = 1
- Disadvantage: long cycle time
What parts of MIPS?
- We won't implement all of MIPS
- Memory instructions
- Arithmetic/Logical (and just a subset of these, but you should be able to figure out how to add many of them)
- BEQ and J (last)
- Basic load/store architecture with these steps:
- Read PC and Fetch Inst
- Read Registers
- Do Math
- Write memory/registers
- Repeat
- Graphically?
Basics (page 287 minus a few things) of Single Cycle Datapath Design
What’s Datapath? What’s Control?
Processor Design: Logic Components, Time, the Clock
- Review: Section 5.2
- Registers, Memory - these things we need to get values out of and write new values into
- Based on the clock cycle
- Our cycle will be based around the rising clock edge
- Set values to be ready for that edge, and then a read or write will happen at that edge
- Do your work to calculate what to read or write in the “rest” of the cycle
Processor Design
- We're ready to implement the MIPS “core”
- load-store instructions: lw, sw
- reg-reg instructions: add, sub, and, or, slt
- control flow instructions: beq
- First, we need to fetch an instruction into the processor
- the PC supplies the instruction address
- get the instruction from memory
That was too easy
- A problem – how will we do a load or store?
Instruction & Data in same cycle?
Solution: separate data and instruction memory
There will be only one DRAM memory
We want a stored program architecture
How else can you compile and then run a program??
But we can have separate SRAM caches
(We'll study caches later)
Instruction Fetch Unit
Updating the PC for next instruction
- Sequential Code:
- Branch and Jump:
• We'll save branches for later, after adds, subs
The MIPS core subset
- **R-type**
- `add rd, rs, rt`
- `sub, and, or, slt`
- **LOAD and STORE**
- `lw rt, rs, imm`
- `sw rt, rs, imm`
- **BRANCH:**
- `beq rs, rt, imm`
The following table summarizes the fields and their functionalities:
<table>
<thead>
<tr>
<th>Field</th>
<th>Bits</th>
<th>Position</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>op</td>
<td>6 bits</td>
<td>31-26</td>
<td>opcode</td>
</tr>
<tr>
<td>rs</td>
<td>5 bits</td>
<td>25-21</td>
<td>first source register</td>
</tr>
<tr>
<td>rt</td>
<td>5 bits</td>
<td>20-16</td>
<td>second source register</td>
</tr>
<tr>
<td>rd</td>
<td>5 bits</td>
<td>15-11</td>
<td>destination register</td>
</tr>
<tr>
<td>shamt</td>
<td>5 bits</td>
<td>10-6</td>
<td>shift amount</td>
</tr>
<tr>
<td>funct</td>
<td>6 bits</td>
<td>5-0</td>
<td>function code</td>
</tr>
</tbody>
</table>
**Example Instructions**
- Read registers `rs` and `rt` for `add rd, rs, rt`
- Feed `rs` and immediate to ALU for `lw rt, rs, imm`
- Move data between memory and register for `sw rt, rs, imm`
---
Register Transfer Language (RTL)
- **Is a mechanism for describing the movement of data between storage elements**
- **Gives us a precise way to describe various actions of our instructions**
- May be more than 1 RTL statement per instruction
- `PC <= PC + 4`
- `R[rd] <= R[rs] + R[rt]`
Post Fetch Datapath for Reg-Reg Operations
- \( R[rd] \leftarrow R[rs] \text{ op } R[rt] \) Example: \( add \ rd, rs, rt \)
- \( Ra(1), Rb(2), \) and \( Rw \) come from \( rs, rt, \) and \( rd \) fields
- \( ALU \) operation signal depends on \( op \) and \( funct \)
<table>
<thead>
<tr>
<th>Bits</th>
<th>31-26</th>
<th>25-21</th>
<th>20-16</th>
<th>15-11</th>
</tr>
</thead>
<tbody>
<tr>
<td>Field</td>
<td>( op ) (6 bits)</td>
<td>( rs ) (5 bits)</td>
<td>( rt ) (5 bits)</td>
<td>( rd ) (5 bits)</td>
</tr>
</tbody>
</table>
Post Fetch Datapath for Store Operations
\( Mem[R[rs] + \text{SignExt}[imm16]] \leftarrow R[rt] \)
Example: \( sw \ rt, rs, imm16 \)
Putting together Store DP and RR DP
Post Fetch Datapath for Load Operations
\[ R[rt] \leftarrow \text{Mem}[R[rs] + \text{SignExt}[\text{imm16}]] \]
Example: \( lw \ rt, rs, \text{imm16} \)
Putting together Load/Store DP and Reg-Reg DP
Datapath for Branch Operations
`beq rs, rt, imm16` We need to compare Rs and Rt
Computing the Next Address
- PC is a 32-bit byte address into the instruction memory
- Sequential operation: $PC_{31:0} = PC_{31:0} + 4$
- Branch: $PC_{31:0} = PC_{31:0} + 4 + \text{SignExt}[\text{Imm16}] \times 4$
- We don’t need the 2 least-significant bits because:
- The 32-bit PC is a byte address
- And all our instructions are 4 bytes (32 bits) long
- The 2 LSB’s of the 32-bit PC are always zeros
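Putting these rules together, the taken-branch target is PC + 4 plus the sign-extended immediate shifted left by 2 (multiplying by 4 without a multiplier). A sketch, with illustrative addresses:

```python
def branch_target(pc, imm16):
    """Next PC for a taken MIPS beq: PC + 4 plus the sign-extended
    16-bit immediate shifted left by 2 (i.e. multiplied by 4)."""
    if imm16 & 0x8000:          # sign-extend the 16-bit immediate
        imm16 -= 1 << 16
    return pc + 4 + (imm16 << 2)

print(hex(branch_target(0x00400000, 3)))       # forward 3 instructions
print(hex(branch_target(0x00400010, 0xFFFF)))  # offset -1: branch to itself
```

The shift-left-by-2 is why the datapath needs no multiplier here: since every instruction is 4 bytes, wiring the immediate two bit positions higher is the whole multiplication.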
Detour:
Multiply -- That’s expensive!
- Multiply the immediate by 4! Let’s try some possible values
```
0000 0001
0000 0010
0000 0011
0000 0100
1111 1111
```
Datapath for Branch Operations
`beq rs, rt, imm16` - We need to compare Rs and Rt
![Datapath diagram]
All together: the single cycle datapath
![Datapath diagram]
The R-Format (e.g. *add*) Datapath
ALUsrc ALUop Mem Read MemWrite MemToReg RegDst RegWrite PCsrc
The Load Datapath
ALUsrc ALUop Mem Read MemWrite MemToReg RegDst RegWrite PCsrc
The Store Datapath
The beq Datapath
Key Points
- CPU is just a collection of state and combinational logic
- We just designed a very rich processor, at least in terms of functionality
- Know and understand
- Basic flow
- Control lines
- Muxes - where and why needed
- Execution time = Insts * CPI * Cycle Time
- where does the single-cycle machine fit in?
Adding Control Signals
DETOUR: Single Cycle Datapath
Warning! The text is inconsistent: MUX control signals sometimes have "1" on top, sometimes "0". On exercises and tests, look carefully!
Control for instructions
<table>
<thead>
<tr>
<th>Instruction</th>
<th>RegDst</th>
<th>ALUSrc</th>
<th>MemtoReg</th>
<th>Reg Write</th>
<th>Mem Read</th>
<th>Mem Write</th>
<th>Branch</th>
<th>ALUOp1</th>
<th>ALUOp0</th>
</tr>
</thead>
<tbody>
<tr>
<td>R format</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>lw</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>sw</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>beq</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Proof-of-Possession Key Semantics for CBOR Web Tokens (CWTs)
draft-ietf-ace-cwt-proof-of-possession-03
Abstract
This specification describes how to declare in a CBOR Web Token (CWT) that the presenter of the CWT possesses a particular proof-of-possession key. Being able to prove possession of a key is also sometimes described as being the holder-of-key. This specification provides equivalent functionality to "Proof-of-Possession Key Semantics for JSON Web Tokens (JWTs)" (RFC 7800), but using CBOR and CWTs rather than JSON and JWTs.
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 31, 2018.
Copyright Notice
Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.
1. Introduction
This specification describes how a CBOR Web Token (CWT) [RFC8392] can declare that the presenter of the CWT possesses a particular proof-of-possession (PoP) key. Proof of possession of a key is also sometimes described as being the holder-of-key. This specification provides equivalent functionality to "Proof-of-Possession Key Semantics for JSON Web Tokens (JWTs)" [RFC7800], but using CBOR [RFC7049] and CWTs [RFC8392] rather than JSON [RFC7159] and JWTs [JWT].
2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
This specification uses terms defined in the CBOR Web Token (CWT) [RFC8392], CBOR Object Signing and Encryption (COSE) [RFC8152], and Concise Binary Object Representation (CBOR) [RFC7049] specifications.
These terms are defined by this specification:
Issuer
Party that creates the CWT and binds the claims about the subject to the proof-of-possession key.
Presenter
Party that proves possession of a private key (for asymmetric key cryptography) or secret key (for symmetric key cryptography) to a recipient.
In context of OAuth this party is also called OAuth Client.
Recipient
Party that receives the CWT containing the proof-of-possession key information from the presenter.
In context of OAuth this party is also called OAuth Resource Server.
3. Representations for Proof-of-Possession Keys
By including a "cnf" (confirmation) claim in a CWT, the issuer of the CWT declares that the presenter possesses a particular key and that the recipient can cryptographically confirm that the presenter has possession of that key. The value of the "cnf" claim is a CBOR map and the members of that map identify the proof-of-possession key.
The presenter can be identified in one of several ways by the CWT, depending upon the application requirements. For instance, some applications may use the CWT "sub" (subject) claim [RFC8392], to identify the presenter. Other applications may use the "iss" claim to identify the presenter. In some applications, the subject identifier might be relative to the issuer identified by the "iss" (issuer) claim [RFC8392]. The actual mechanism used is dependent upon the application. The case in which the presenter is the subject of the CWT is analogous to Security Assertion Markup Language (SAML) 2.0 [OASIS.saml-core-2.0-os] SubjectConfirmation usage.
3.1. Confirmation Claim
The "cnf" claim in the CWT is used to carry confirmation methods. Some of them use proof-of-possession keys while others do not. This design is analogous to the SAML 2.0 [OASIS.saml-core-2.0-os] SubjectConfirmation element in which a number of different subject confirmation methods can be included (including proof-of-possession key information).
The set of confirmation members that a CWT must contain to be considered valid is context dependent and is outside the scope of this specification. Specific applications of CWTs will require implementations to understand and process some confirmation members in particular ways. However, in the absence of such requirements, all confirmation members that are not understood by implementations MUST be ignored.
This specification establishes the IANA "CWT Confirmation Methods" registry for these members in Section 7.2 and registers the members defined by this specification. Other specifications can register other members used for confirmation, including other members for conveying proof-of-possession keys using different key representations.
The "cnf" claim value MUST represent only a single proof-of-possession key. At most one of the "COSE_Key" and "Encrypted_COSE_Key" confirmation values defined in Figure 1 may be present. Note that if an application needs to represent multiple proof-of-possession keys in the same CWT, one way for it to achieve this is to use other claim names, in addition to "cnf", to hold the additional proof-of-possession key information. These claims could use the same syntax and semantics as the "cnf" claim. Those claims would be defined by applications or other specifications and could be registered in the IANA "CBOR Web Token Claims" registry [IANA.CWT.Claims].
<table>
<thead>
<tr>
<th>Name</th>
<th>Key</th>
<th>Value type</th>
</tr>
</thead>
<tbody>
<tr>
<td>COSE_Key</td>
<td>1</td>
<td>COSE_Key</td>
</tr>
<tr>
<td>Encrypted_COSE_Key</td>
<td>2</td>
<td>COSE_Encrypt or COSE_Encrypt0</td>
</tr>
<tr>
<td>kid</td>
<td>3</td>
<td>binary string</td>
</tr>
</tbody>
</table>
Figure 1: Summary of the cnf names, keys, and value types
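The structural rules above can be checked mechanically when a CWT is decoded. The following Python sketch (the function name and error messages are illustrative, not part of this specification) enforces the single-key rule using the map keys from Figure 1:

```python
# "cnf" member map keys, as summarized in Figure 1.
COSE_KEY = 1
ENCRYPTED_COSE_KEY = 2
KID = 3

def validate_cnf(cnf):
    """Reject decoded "cnf" values that violate Section 3.1."""
    if not isinstance(cnf, dict) or not cnf:
        raise ValueError('"cnf" must be a non-empty CBOR map')
    # At most one of COSE_Key (1) and Encrypted_COSE_Key (2) may be present.
    if COSE_KEY in cnf and ENCRYPTED_COSE_KEY in cnf:
        raise ValueError('"cnf" carries more than one proof-of-possession key')
    # Confirmation members that are not understood are ignored, not rejected.
    return cnf

validate_cnf({KID: bytes.fromhex("dfd1aa97")})  # accepted
```

Unknown members pass through untouched, matching the MUST-ignore rule; applications with stricter confirmation profiles would layer their own checks on top.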
3.2. Representation of an Asymmetric Proof-of-Possession Key
When the key held by the presenter is an asymmetric private key, the "COSE_Key" member is a COSE_Key [RFC8152] representing the corresponding asymmetric public key. The following example (using CBOR diagnostic notation) demonstrates such a declaration in the CWT Claims Set of a CWT:
```
{
/iss/ 1 : "coaps://server.example.com",
/aud/ 3 : "coaps://client.example.org",
/exp/ 4 : 1361398824,
/cnf/ 8 :{
/COSE_Key/ 1 :{
/kty/ 1 : /EC/ 2,
/crv/ -1 : /P-256/ 1,
/x/ -2 : h'd7cc072de2205bdc1537a543d53c60a6acb62eccd890c7fa27c9e354089bbe13',
/y/ -3 : h'f95e1d4b851a2cc80fff87d8e23f22afbe725d535e515d020731e79a3b4e47120'
}
}
}
```
The COSE_Key MUST contain the required key members for a COSE_Key of that key type and MAY contain other COSE_Key members, including the "kid" (Key ID) member.
The "COSE_Key" member MAY also be used for a COSE_Key representing a symmetric key, provided that the CWT is encrypted so that the key is not revealed to unintended parties. The means of encrypting a CWT is explained in [RFC8392]. If the CWT is not encrypted, the symmetric key MUST be encrypted as described in Section 3.3.
3.3. Representation of an Encrypted Symmetric Proof-of-Possession Key
When the key held by the presenter is a symmetric key, the "Encrypted_COSE_Key" member is an encrypted COSE_Key [RFC8152] representing the symmetric key encrypted to a key known to the recipient using COSE_Encrypt or COSE_Encrypt0.
The following example (using CBOR diagnostic notation, with linebreaks for readability) illustrates a symmetric key that could subsequently be encrypted for use in the "Encrypted_COSE_Key" member:
```
```
The COSE_Key representation is used as the plaintext when encrypting the key. The COSE_Key could, for instance, be encrypted using a COSE_Encrypt0 representation using the AES-CCM-16-64-128 algorithm.
The following example CWT Claims Set of a CWT (using CBOR diagnostic notation, with linebreaks for readability) illustrates the use of an encrypted symmetric key as the "Encrypted_COSE_Key" member value:
```
{
/iss/ 1 : "coaps://server.example.com",
/sub/ 2 : "24400320",
/aud/ 3: "s6BhdRkqt3",
/exp/ 4 : 1311281970,
/iat/ 5 : 1311280970,
/cnf/ 8 : {
/COSE_Encrypt0/ 2 : [
/protected header/ h'A1010A' /{ \ alg \ 1:10 \ AES-CCM-16-64-128 \ }/,
/unprotected header/ { / iv / 5: h'636898994FF0EC7BFCF6D3F95B' },
/ciphertext/ h'0573318A3573EB983E55A7C2F06CADD0796C9E584F1D0E3E8C5B052592A8B2694BE9654F0431F38D5BBC8049FA7F13F'
]
}
}
```
The example above was generated with the key:
```
h'6162630405060708090a0b0c0d0e0f10'
```
3.4. Representation of a Key ID for a Proof-of-Possession Key
The proof-of-possession key can also be identified by a Key ID instead of communicating the actual key, provided the recipient is able to obtain the identified key using the Key ID. In this case, the issuer of a CWT declares that the presenter possesses a particular key, and that the recipient can cryptographically confirm the presenter's proof of possession of it, by including in the CWT a "cnf" claim whose value is a CBOR map containing a "kid" member that identifies the key.
The following example (using CBOR diagnostic notation) demonstrates such a declaration in the CWT Claims Set of a CWT:
```
{
/iss/ 1 : "coaps://server.example.com",
/aud/ 3 : "coaps://client.example.org",
/exp/ 4 : 1361398824,
/cnf/ 8 : {
/kid/ 3 : h'dfd1aa976d4575a0fe34b96de2bfad'
}
}
```
The content of the "kid" value is application specific. For instance, some applications may choose to use a cryptographic hash of the public key value as the "kid" value.
3.5. Specifics Intentionally Not Specified
Proof of possession is often demonstrated by having the presenter sign a value determined by the recipient using the key possessed by the presenter. This value is sometimes called a "nonce" or a "challenge".
The means of communicating the nonce and the nature of its contents are intentionally not described in this specification, as different protocols will communicate this information in different ways. Likewise, the means of communicating the signed nonce is also not specified, as this is also protocol specific.
Note that another means of proving possession of the key when it is a symmetric key is to encrypt the key to the recipient. The means of obtaining a key for the recipient is likewise protocol specific.
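As one concrete illustration of the challenge-response pattern sketched above (the specification deliberately leaves the protocol details open), a presenter holding a symmetric proof-of-possession key could answer a recipient's nonce with an HMAC tag. All names here are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical symmetric PoP key shared by presenter and recipient.
pop_key = secrets.token_bytes(32)

# Recipient: choose a fresh challenge ("nonce").
nonce = secrets.token_bytes(16)

# Presenter: demonstrate possession by signing the challenge.
tag = hmac.new(pop_key, nonce, hashlib.sha256).digest()

# Recipient: recompute the tag and compare in constant time.
expected = hmac.new(pop_key, nonce, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```

A fresh nonce per exchange is what prevents a captured tag from being replayed later.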
4. Security Considerations
All of the security considerations that are discussed in [RFC8392] also apply here. In addition, proof of possession introduces its own unique security issues. Possessing a key is only valuable if it is kept secret. Appropriate means must be used to ensure that unintended parties do not learn private key or symmetric key values.
Applications utilizing proof of possession SHOULD also utilize audience restriction, as described in Section 4.1.3 of [JWT], as it provides additional protections. Proof of possession can be used by recipients to reject messages from unauthorized senders. Audience restriction can be used by recipients to reject messages intended for different recipients.
A recipient might not understand the "cnf" claim. Applications that require the proof-of-possession keys communicated with it to be understood and processed MUST ensure that the parts of this specification that they use are implemented.
CBOR Web Tokens with proof-of-possession keys are used in context of an architecture, such as the ACE OAuth Framework [I-D.ietf-ace-oauth-authz], in which protocols are used by a presenter to request these tokens and to subsequently use them with recipients. To avoid replay attacks when the proof-of-possession tokens are sent to presenters, a security protocol, which uses mechanisms such as nonces or timestamps, has to be utilized. Note that a discussion of the architecture or specific protocols that CWT proof-of-possession tokens are used with is beyond the scope of this specification.
As is the case with other information included in a CWT, it is necessary to apply data origin authentication and integrity protection (via a keyed message digest or a digital signature). Data origin authentication ensures that the recipient of the CWT learns about the entity that created the CWT since this will be important for any policy decisions. Integrity protection prevents an adversary from changing any elements conveyed within the CWT payload. Special care has to be applied when carrying symmetric keys inside the CWT since those not only require integrity protection but also confidentiality protection.
As described in Section 6 (Key Identification) and Appendix D (Notes on Key Selection) of [JWS], it is important to make explicit trust decisions about the keys. Proof-of-possession signatures made with keys not meeting the application’s trust criteria MUST NOT be relied upon.
5. Privacy Considerations
A proof-of-possession key can be used as a correlation handle if the same key is used with multiple parties. Thus, for privacy reasons, it is recommended that different proof-of-possession keys be used when interacting with different parties.
6. Operational Considerations
The use of CWTs with proof-of-possession keys requires additional information to be shared between the involved parties in order to ensure correct processing. The recipient needs to be able to use credentials to verify the authenticity, integrity, and potentially the confidentiality of the CWT and its content. This requires the recipient to know information about the issuer. Likewise, there needs to be agreement between the issuer and the recipient about the claims being used (which is also true of CWTs in general).
When an issuer creates a CWT containing a Key ID claim, it needs to make sure that it does not issue another CWT containing the same Key ID with a different content, or for a different subject, within the lifetime of the CWTs, unless intentionally desired. Failure to do so may allow one party to impersonate another party, with the potential to gain additional privileges. Likewise, if PoP keys are used for multiple different kinds of CWTs in an application and the PoP keys are identified by Key IDs, care must be taken to keep the keys for the different kinds of CWTs segregated so that an attacker cannot cause the wrong PoP key to be used by using a valid Key ID for the wrong kind of CWT.
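An issuer can enforce the Key ID uniqueness requirement above with a simple guard over its live tokens; this sketch is illustrative only, and what constitutes the "content" fingerprint is left to the application:

```python
# Hypothetical issuer-side guard: within the lifetime of issued CWTs, a
# Key ID must not be reused with different content or a different subject.
issued = {}  # kid -> (subject, claims_fingerprint)

def check_kid_reuse(kid, subject, fingerprint):
    prior = issued.get(kid)
    if prior is not None and prior != (subject, fingerprint):
        raise ValueError("kid already bound to different content or subject")
    issued[kid] = (subject, fingerprint)

check_kid_reuse(b"k1", "24400320", "fp-a")
check_kid_reuse(b"k1", "24400320", "fp-a")   # same binding: permitted
# check_kid_reuse(b"k1", "other", "fp-b")    # would raise ValueError
```

A real issuer would also expire entries as the corresponding CWTs expire, since the restriction only applies while the tokens are live.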
7. IANA Considerations
The following registration procedure is used for all the registries established by this specification.
Values are registered on a Specification Required [RFC5226] basis after a three-week review period on the cwt-reg-review@ietf.org mailing list, on the advice of one or more Designated Experts. However, to allow for the allocation of values prior to publication, the Designated Experts may approve registration once they are satisfied that such a specification will be published. [[ Note to the RFC Editor: The name of the mailing list should be determined in consultation with the IESG and IANA. Suggested name: cwt-reg-review@ietf.org. ]] Registration requests sent to the mailing list for review should use an appropriate subject (e.g., "Request to Register CWT Confirmation Method: example"). Registration requests that are undetermined for a period longer than 21 days can be brought to the IESG’s attention (using the iesg@ietf.org mailing list) for resolution.
Criteria that should be applied by the Designated Experts include determining whether the proposed registration duplicates existing functionality, determining whether it is likely to be of general applicability or whether it is useful only for a single application, and evaluating the security properties of the item being registered and whether the registration makes sense.
It is suggested that multiple Designated Experts be appointed who are able to represent the perspectives of different applications using this specification in order to enable broadly informed review of registration decisions. In cases where a registration decision could be perceived as creating a conflict of interest for a particular
Expert, that Expert should defer to the judgment of the other Experts.
7.1. CBOR Web Token Claims Registration
This specification registers the "cnf" claim in the IANA "CBOR Web Token Claims" registry [IANA.CWT.Claims] established by [RFC8392].
7.1.1. Registry Contents
- Claim Name: "cnf"
- Claim Description: Confirmation
- JWT Claim Name: "cnf"
- Claim Key: TBD (maybe 8)
- Claim Value Type(s): map
- Change Controller: IESG
- Specification Document(s): Section 3.1 of [[ this document ]]
7.2. CWT Confirmation Methods Registry
This specification establishes the IANA "CWT Confirmation Methods" registry for CWT "cnf" member values. The registry records the confirmation method member and a reference to the specification that defines it.
7.2.1. Registration Template
Confirmation Method Name:
The human-readable name requested (e.g., "kid").
Confirmation Method Description:
Brief description of the confirmation method (e.g., "Key Identifier").
JWT Confirmation Method Name:
Claim Name of the equivalent JWT confirmation method value, as registered in [IANA.JWT.Claims]. CWT claims should normally have a corresponding JWT claim. If a corresponding JWT claim would not make sense, the Designated Experts can choose to accept registrations for which the JWT Claim Name is listed as "N/A".
Confirmation Key:
CBOR map key value for the confirmation method.
Confirmation Value Type(s):
CBOR types that can be used for the confirmation method value.
Change Controller:
For Standards Track RFCs, list "IESG". For others, give the name of the responsible party.
Specification Document(s):
Reference to the document or documents that specify the parameter, preferably including URIs that can be used to retrieve copies of the documents. An indication of the relevant sections may also be included but is not required.
7.2.2. Initial Registry Contents
- Confirmation Method Name: "COSE_Key"
- Confirmation Method Description: COSE_Key Representing Public Key
- JWT Confirmation Method Name: "jwk"
- Confirmation Key: 1
- Confirmation Value Type(s): map
- Change Controller: IESG
- Specification Document(s): Section 3.2 of [[ this document ]]
- Confirmation Method Name: "Encrypted_COSE_Key"
- Confirmation Method Description: Encrypted COSE_Key
- JWT Confirmation Method Name: "jwe"
- Confirmation Key: 2
- Confirmation Value Type(s): array (with an optional COSE_Encrypt or COSE_Encrypt0 tag)
- Change Controller: IESG
- Specification Document(s): Section 3.3 of [[ this document ]]
- Confirmation Method Name: "kid"
- Confirmation Method Description: Key Identifier
- JWT Confirmation Method Name: "kid"
- Confirmation Key: 3
- Confirmation Value Type(s): binary string
- Change Controller: IESG
- Specification Document(s): Section 3.4 of [[ this document ]]
8. References
8.1. Normative References
[IANA.CWT.Claims]
IANA, "CBOR Web Token Claims",
<http://www.iana.org/assignments/cwt>.
[RFC5226]
Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, DOI 10.17487/RFC5226, May 2008, <https://www.rfc-editor.org/info/rfc5226>.
[RFC8152]
Schaad, J., "CBOR Object Signing and Encryption (COSE)", RFC 8152, DOI 10.17487/RFC8152, July 2017, <https://www.rfc-editor.org/info/rfc8152>.
[RFC8392]
Jones, M., Wahlstroem, E., Erdtman, S., and H. Tschofenig, "CBOR Web Token (CWT)", RFC 8392, DOI 10.17487/RFC8392, May 2018, <https://www.rfc-editor.org/info/rfc8392>.
8.2. Informative References
[I-D.ietf-ace-oauth-authz]
Seitz, L., Selander, G., Wahlstroem, E., Erdtman, S., and H. Tschofenig, "Authentication and Authorization for Constrained Environments (ACE) using the OAuth 2.0 Framework (ACE-OAuth)", Work in Progress, draft-ietf-ace-oauth-authz.
[IANA.JWT.Claims]
IANA, "JSON Web Token Claims",
<http://www.iana.org/assignments/jwt>.
[JWS]
Jones, M., Bradley, J., and N. Sakimura, "JSON Web Signature (JWS)", RFC 7515, May 2015,
[JWT]
Jones, M., Bradley, J., and N. Sakimura, "JSON Web Token (JWT)", RFC 7519, DOI 10.17487/RFC7519, May 2015,
[OASIS.saml-core-2.0-os]
Cantor, S., Kemp, J., Philpott, R., and E. Maler, "Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0", OASIS Standard saml-core-2.0-os, March 2005.
Acknowledgements
Thanks to the following people for their reviews of the specification: Roman Danyliw, Michael Richardson, and Jim Schaad.
Ludwig Seitz and Goeran Selander worked on this document as part of the CelticPlus project CyberWI, with funding from Vinnova.
Document History
[ [ to be removed by the RFC Editor before publication as an RFC ] ]
-03
- Addressed review comments by Jim Schaad, see https://www.ietf.org/mail-archive/web/ace/current/msg02798.html
- Removed unnecessary sentence in the introduction regarding the use of any strings that could be case-sensitive.
- Clarified the terms Presenter and Recipient.
- Clarified text about the confirmation claim.
-02
- o Changed "typically" to "often" when describing ways of performing proof of possession.
- o Changed b64 to hex encoding in an example.
Internet-Draft Proof-of-Possession Key for CWTs June 2018
-01
-00
-00
Authors’ Addresses
Michael B. Jones
Microsoft
Email: mbj@microsoft.com
URI: http://self-issued.info/
Ludwig Seitz
RISE SICS
Scheelevaegen 17
Lund 223 70
Sweden
Email: ludwig@ri.se
Goeran Selander
Ericsson AB
Faeroegatan 6
Kista 164 80
Sweden
Email: goran.selander@ericsson.com
Samuel Erdtman
Spotify
Email: erdtman@spotify.com
It’s hard to believe that using technology to record and play back music only dates back to 1878, when Edison patented the phonograph. We’ve come so far since then—with music synthesizers, CDs, sampling and remixing, phones that play music, and even long-distance jamming over the Internet. In this chapter, you’ll take part in this tradition by building a Xylophone app that records and plays music.
What You’ll Build
With the app shown in Figure 9-1 (originally created by Liz Looney of the App Inventor team), you can:
- Play eight different notes by touching colored buttons on the screen.
- Press a Play button to replay the notes you played earlier.
- Press a Reset button to make the app forget what notes you played earlier so you can enter a new song.
What You’ll Learn
This tutorial covers the following concepts:
- Using a single Sound component to play different audio files.
- Using the Clock component to measure and enforce delays between actions.
- Deciding when to create a procedure.
- Creating a procedure that calls itself.
- Advanced use of lists, including adding items, accessing them, and clearing the list.
Getting Started
Connect to the App Inventor website and start a new project. Name it “Xylophone”, and also set the screen's title to “Xylophone”. Open the Blocks Editor and connect to your phone or emulator.
Designing the Components
This app has 13 different components (8 of which compose the keyboard), listed in Table 9-1. Since there are so many, it would get pretty boring to create all of them before starting to write our program, so we'll break down the app into its functional parts and build them sequentially by going back and forth between the Designer and the Blocks Editor, as we did with the Ladybug Chase app in Chapter 5.
Table 9-1. All of the components for the Xylophone app
<table>
<thead>
<tr>
<th>Component type</th>
<th>Palette group</th>
<th>What you’ll name it</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button1</td>
<td>Play Low C key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button2</td>
<td>Play D key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button3</td>
<td>Play E key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button4</td>
<td>Play F key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button5</td>
<td>Play G key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button6</td>
<td>Play A key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button7</td>
<td>Play B key.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>Button8</td>
<td>Play High C key.</td>
</tr>
<tr>
<td>Sound</td>
<td>Media</td>
<td>Sound1</td>
<td>Play the notes.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>PlayButton</td>
<td>Play back the song.</td>
</tr>
<tr>
<td>Button</td>
<td>Basic</td>
<td>ResetButton</td>
<td>Reset the song memory.</td>
</tr>
<tr>
<td>Horizontal Arrangement</td>
<td>Screen Arrangement</td>
<td>Horizontal Arrangement1</td>
<td>Place the Play and Reset buttons next to each other.</td>
</tr>
<tr>
<td>Clock</td>
<td>Basic</td>
<td>Clock1</td>
<td>Keep track of delays between notes.</td>
</tr>
</tbody>
</table>
Creating the Keyboard
Our user interface will include an eight-note keyboard covering a one-octave C major scale, ranging from Low C to High C. We will create this musical keyboard in this section.
Creating the First Note Buttons
Start by creating the first two xylophone keys, which we will implement as buttons.
1. From the Basic category, drag a Button onto the screen. Leave its name as Button1. We want it to be a long magenta bar, like a key on a xylophone, so set its properties as follows:
a. Change its BackgroundColor property to Magenta.
b. Change its Text property to "C".
c. Set its Width property to "Fill parent" so it goes all the way across the screen.
d. Set its Height property to 40 pixels.
2. Repeat for a second Button, named Button2, placing it below Button1. Use the same Width and Height property values, but set its BackgroundColor property to Red and its Text property to "D".
(Later, we will repeat step 2 for six more note buttons.)
The view in the Component Designer should look something like Figure 9-2.
The display on your phone should look similar, although there will not be any empty space between the two colored buttons.
Adding the Sound Component
We can’t have a xylophone without sounds, so create a Sound component, leaving its name as Sound1. Change the MinimumInterval property from its default value of 500 milliseconds to 0. This allows us to play the sound as often as we want, instead of having to wait half a second (500 milliseconds) between plays. Don’t set its Source property, which we will set in the Blocks Editor.
Upload the sound files 1.wav and 2.wav from http://examples.oreilly.com/0636920016632/. Unlike in previous chapters, where it was OK to change the names of media files, it is important to use these exact names for reasons that will soon become clear. You can either upload the remaining six sound files now or wait until directed to later.
Connecting the Sounds to the Buttons
The behavior we need to program is for a sound file to play when the corresponding button is clicked. Specifically, if Button1 is clicked, we’d like to play 1.wav; if Button2 is clicked, we’d like to play 2.wav; and so on. We can set this up in the Blocks Editor as shown in Figure 9-3 by doing the following:
1. From the My Blocks tab and Button1 drawer, drag out the Button1.Click block.
2. From the Sound1 drawer, drag out the set Sound1.Source block, placing it in the Button1.Click block.
3. Type “text” to create a text block. (This is quicker than going to the Built-In tab and then the Text drawer, although that would work too.) Set its text value to “1.wav” and place it in the Sound1.Source block.
We could do the same for Button2, as shown in Figure 9-4 (just changing the text value), but the code would be awfully repetitive.
Repeated code is a good sign that you should create a procedure, which you’ve already done in Chapter 3’s MoleMash game and Chapter 5’s Ladybug Chase game. Specifically, we’ll create a procedure that takes a number as an argument, sets Sound1’s Source to the appropriate file, and plays the sound. This is another example of refactoring: improving a program’s implementation without changing its behavior, a concept introduced in the MoleMash tutorial. We can use the Text drawer’s join block (an alternate version of make text) to combine the number (e.g., 1) and the text “.wav” to create the proper filename (e.g., “1.wav”). Here are the steps for creating the procedure we need:
1. Under the Built-In tab, go to the Definition drawer and drag out the to procedure block.
2. Go back to the Definition drawer and drag a name block into the “arg” socket of to procedure.
3. Click the rightmost “name” and set the name to “number”.
4. Click procedure and set the name to “PlayNote”.
5. Drag the Sound1.Source block from Button1.Click into PlayNote to the right of the word “do”. The Sound1.Play block will move with it.
6. Drag the 1.wav block into the trash can.
7. From the Text drawer, drag the join block into Sound1.Source’s socket.
8. Type “number” and move it to the left socket of the join block (if it is not already there).
9. From the Text drawer, drag the text block into the right socket of the join block.
10. Change the text value to “.wav”. (Remember not to type the quotation marks.)
11. Under the My Blocks tab, go to the My Definitions drawer and drag a call PlayNote block into the empty body of Button1.Click.
12. Type “1” and put it in the “number” socket.
Now, when Button1 is clicked, the procedure PlayNote will be called, with its number argument having the value 1. It should set Sound1.Source to “1.wav” and play the sound.
Create a similar Button2.Click block with a call to PlayNote with an argument of 2. (You can copy the existing PlayNote block and move it into the body of Button2.Click, making sure to change the argument.) Your program should look like Figure 9-5.
Figure 9-5. Creating a procedure to play a note
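Outside of App Inventor, PlayNote amounts to joining the note number with ".wav" and handing the result to the sound player. This minimal Python analogue (the Sound class is a stand-in for the Sound1 component, not App Inventor API) shows the same flow:

```python
class Sound:
    """Stand-in for App Inventor's Sound1 component."""
    def __init__(self):
        self.source = None
        self.played = []   # record of everything played, for inspection

    def play(self):
        self.played.append(self.source)

sound1 = Sound()

def play_note(number):
    # The join block: number + ".wav" -> e.g. "1.wav"
    sound1.source = str(number) + ".wav"
    sound1.play()

play_note(1)   # Button1.Click
play_note(2)   # Button2.Click
# sound1.played is now ["1.wav", "2.wav"]
```

Each button's Click handler reduces to a one-line call with a different argument, which is exactly the duplication the procedure removes.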
Telling Android to Load the Sounds
If you tried out the preceding calls to PlayNote, you may have been disappointed by not hearing the sound you expected or by experiencing an unexpected delay. That’s because Android needs to load sounds at runtime, which takes time, before they can be played. This issue didn’t come up before, because filenames placed in a Sound component’s Source property in the Designer are automatically loaded when the program starts. Since we don't set Sound1.Source until after the program has started, that initialization process does not take place. We have to explicitly load the sounds when the program starts up, as shown in Figure 9-6.
Test your app. Now if you restart the app by clicking on “Connect to Device...” in the Blocks Editor, the notes should play without delay. (If you don’t hear anything, make sure that the media volume on your phone is not set to mute.)
Implementing the Remaining Notes
Now that we have the first two buttons and notes implemented and working, add the remaining six notes by going back to the Designer and uploading the sound files 3.wav, 4.wav, 5.wav, 6.wav, 7.wav, and 8.wav. Then create six new buttons, following the same steps as you did before but setting their Text and BackgroundColor properties as follows:
- Button3 ("E", Pink)
- Button4 ("F", Orange)
- Button5 ("G", Yellow)
- Button6 ("A", Green)
- Button7 ("B", Cyan)
- Button8 ("C", Blue)
You may also want to change Button8’s TextColor property to White, as shown in Figure 9-7, so it is more legible.
Back in the Blocks Editor, create Click blocks for each of the new buttons with appropriate calls to PlayNote. Similarly, add each new sound file to Screen.Initialize, as shown in Figure 9-8.
With your program getting so large, you might find it helpful to click the white minus signs near the bottom of the “container” blocks, such as PlayNote, to minimize them and conserve screen space.
Test your app. You should now have all the buttons, and each one will play a different note when you click it.
Recording and Playing Back Notes
Playing notes by pressing buttons is fun, but being able to record and play back songs is even better. To implement playback, we will need to maintain a record of played notes. In addition to remembering the pitches (sound files) that were played, we must also record the amount of time between notes, or we won’t be able to distinguish between two notes played in quick succession and two played with a 10-second silence between them.
Our app will maintain two lists, each of which will have one entry for each note that has been played:
- notes, which will contain the names of the sound files in the order in which they were played
- times, which will record the points in time at which the notes were played
Note. Before continuing, you may wish to review lists, which we covered in the Presidents Quiz in Chapter 8.
We can get the timing information from a Clock component, which we will also use to properly time the notes for playback.
Adding the Components
In the Designer, you will need to add a Clock component and Play and Reset buttons, which we will put in a HorizontalArrangement:
1. Drag in a Clock component. It will appear in the “Non-visible components” section. Uncheck its TimerEnabled property because we don’t want its timer to go off until we tell it to during playback.
2. Go to the Screen Arrangement category and drag a HorizontalArrangement component beneath the existing button. Set its Width property to “Fill parent.”
3. From the Basic category, drag in a Button. Rename it PlayButton and set its Text property to “Play”.
4. Drag in another Button, placing it to the right of PlayButton. Rename the new Button to ResetButton and set its Text property to “Reset”.
The Designer view should look like Figure 9-9.
Recording Notes and Times
We now need to add the correct behavior in the Blocks Editor. We will need to maintain lists of notes and times and add to the lists whenever the user presses a button.
1. Create a new variable by going to the Built-In tab and dragging out a `def variable` block from the Definition drawer.
2. Click “variable” and change it to “notes”.
3. Open the Lists drawer and drag a `make a list` block out, placing it in the socket of `def notes`.
This defines a new variable named “notes” to be an empty list. Repeat the steps for another variable, which you should name “times”. These new blocks should look like Figure 9-10.
How the blocks work
Whenever a note is played, we need to save both the name of the sound file (to the list notes) and the instant in time at which it was played (to the list times). To record the instant in time, we will use the `Clock1.Now` block, which returns the current instant in time (e.g., March 12, 2011, 8:33:14 AM), to the nearest millisecond. These values, obtained through the `Sound1.Source` and `Clock1.Now` blocks, should be added to the lists notes and times, respectively, as shown in Figure 9-11.
For example, if you play “Row, Row, Row Your Boat” [C C C D E], your lists would end up having five entries, which might be:
- **notes**: `1.wav, 1.wav, 1.wav, 2.wav, 3.wav`
- **times** [dates omitted]: `12:00:01, 12:00:02, 12:00:03, 12:00:03.5, 12:00:04`
When the user presses the Reset button, we want the two lists to go back to their original, empty states. Since the user won’t see any change, it’s nice to add a small **Sound1.Vibrate** block so he knows that the key click was registered. Figure 9-12 shows the blocks for this behavior.
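In conventional code, the record-and-reset logic boils down to two parallel lists. This Python sketch uses time.time() where the blocks use Clock1.Now:

```python
import time

notes = []   # sound file names, in the order played
times = []   # instant at which each note was played

def record(number):
    """Analogue of the blocks run on each key click."""
    notes.append(str(number) + ".wav")
    times.append(time.time())        # Clock1.Now analogue

def reset():
    """Analogue of ResetButton.Click: forget the recorded song."""
    notes.clear()
    times.clear()

record(1); record(1); record(2)
assert notes == ["1.wav", "1.wav", "2.wav"]
reset()
assert notes == [] and times == []
```

Keeping the two lists the same length, one entry per note, is what later lets playback pair each pitch with its moment in time.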

**Playing Back Notes**
As a thought experiment, let’s first look at how to implement note playback without worrying about timing. We could (but won’t) do this by creating these blocks as shown in Figure 9-13:
- A variable `count` to keep track of which note we’re on.
- A new procedure, `PlayBackNote`, which plays that note and moves on to the next one.
- Code to run when `PlayButton` is pressed that sets the count to 1 and calls `PlayBackNote` unless there are no saved notes.
**How the blocks work**
This may be the first time you’ve seen a procedure make a call to itself. While at first glance this might seem bogus, it is in fact an important and powerful computer science concept called *recursion*.
To get a better idea of how recursion works, let’s step through what happens if a user plays three notes (`1.wav, 3.wav, and 6.wav`) and then presses the Play button. First, `PlayButton.Click` starts running. Since the length of the list `notes` is 3, which is greater than 0, `count` gets set to 1, and `PlayBackNote` is called:
1. The first time `PlayBackNote` is called, `count = 1`:
a. `Sound1.Source` is set to the first item in `notes`, which is `1.wav`.
b. `Sound1.Play` is called, playing this note.
c. Since `count (1) < the length of notes (3)`, `count` gets incremented to 2.
`PlayBackNote` gets called again.
2. The second time `PlayBackNote` is called, `count = 2`:
a. `Sound1.Source` is set to the second item in `notes`, which is `3.wav`.
b. `Sound1.Play` is called, playing this note.
c. Since `count (2) < the length of notes (3)`, `count` gets incremented to 3.
`PlayBackNote` gets called again.
3. The third time `PlayBackNote` is called, `count = 3`:
a. `Sound1.Source` is set to the third item in `notes`, which is `6.wav`.
b. `Sound1.Play` is called, playing this note.
c. Since `count (3)` is not less than `the length of notes (3)`, nothing else happens, and playback is complete.
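The same control flow fits in a few lines of ordinary code. In this Python sketch (the names mirror the blocks; actual sound playback is replaced by appending to a `played` list so the trace is visible), the recursion bottoms out when `count` reaches the length of the list:

```python
notes = ["1.wav", "3.wav", "6.wav"]  # recorded earlier
count = 0
played = []                          # stands in for Sound1.Play

def play_back_note():
    global count
    played.append(notes[count - 1])  # Sound1.Source + Sound1.Play
    if count < len(notes):
        count += 1
        play_back_note()             # the recursive call

def play_button_click():
    global count
    if len(notes) > 0:
        count = 1
        play_back_note()

play_button_click()
print(played)  # ['1.wav', '3.wav', '6.wav']
```

If the `count += 1` line were forgotten, `count < len(notes)` would stay true forever and the recursion would never terminate.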
**Note.** Although recursion is powerful, it can also be dangerous. As a thought experiment, ask yourself what would have happened if the programmer forgot to insert the blocks in PlayBackNote that incremented count.
While the recursion is correct, there is a different problem with the preceding example: almost no time passes between one call to Sound1.Play and the next, so each note gets interrupted by the next note, except for the last one. No note (except for the last) is allowed to complete before Sound1’s source is changed and Sound1.Play is called again. To get the correct behavior, we need to implement a delay between calls to PlayBackNote.
**Playing Back Notes with Proper Delays**
We will implement the delay by setting the timer on the clock to the amount of time between the current note and the next note. For example, if the next note is played 3,000 milliseconds (3 seconds) after the current note, we will set Clock1.TimerInterval to 3,000, after which PlayBackNote should be called again. Make the changes shown in Figure 9-14 to the body of the if block in PlayBackNote, and create and fill in the Clock1.Timer event handler, which says what should happen when the timer goes off.

How the blocks work
Let’s assume the following contents for the two lists:
- notes: 1.wav, 3.wav, 6.wav
- times: 12:00:00, 12:00:01, 12:00:04
As Figure 9-14 shows, **PlayButton.Click** sets count to 1 and calls **PlayBackNote**.
1. The first time **PlayBackNote** is called, count = 1:
a. **Sound1.Source** is set to the first item in notes, which is “1.wav”.
b. **Sound1.Play** is called, playing this note.
c. Since count (1) < the length of notes (3),
- **Clock1.TimerInterval** is set to the amount of time between the first (12:00:00) and second items in times (12:00:01): 1 second.
- Count gets incremented to 2.
- **Clock1.Timer** is enabled and starts counting down.
Nothing else happens for 1 second, at which time **Clock1.Timer** runs, temporarily disabling the timer and calling **PlayBackNote**.
2. The second time **PlayBackNote** is called, count = 2:
a. **Sound1.Source** is set to the second item in notes, which is “3.wav”.
b. **Sound1.Play** is called, playing this note.
c. Since count (2) < the length of notes (3),
- **Clock1.TimerInterval** is set to the amount of time between the second (12:00:01) and third items in times (12:00:04): 3 seconds.
- Count gets incremented to 3.
- **Clock1.Timer** is enabled and starts counting down.
Nothing else happens for 3 seconds, at which time **Clock1.Timer** runs, temporarily disabling the timer and calling **PlayBackNote**.
3. The third time **PlayBackNote** is called, count = 3:
a. **Sound1.Source** is set to the third item in notes, which is “6.wav”.
b. **Sound1.Play** is called, playing this note.
c. Since count (3) is not less than the length of notes (3), nothing else happens. Playback is complete.
Variations
Here are some alternative scenarios to explore:
- Currently, there’s nothing to stop a user from clicking ResetButton during playback, which will cause the program to crash. (Can you figure out why?) Modify **PlayButton.Click** so it disables ResetButton. To reenable it when the song is complete, change the if block in **PlayButton.Click** into an **ifelse** block, and reenable ResetButton in the “else” portion.
- Similarly, the user can currently click PlayButton while a song is already playing. (Can you figure out what will happen if she does so?) Make it so **PlayButton.Click** disables PlayButton and changes its text to “Playing...” You can reenable it and reset the text in an **ifelse** block, as described in the previous bullet.
- Add a button with the name of a song, such as “Für Elise”. If the user clicks it, populate the notes and times lists with the corresponding values, set count to 1, and call **PlayBackNote**. To set the appropriate times, you’ll find the **Clock1.MakeInstantFromMillis** block useful.
- If the user presses a note, goes away and does something else, and comes back hours later and presses an additional note, the notes will be part of the same song, which is probably not what the user intended. Improve the program by (1) stopping recording after some reasonable interval of time, such as a minute; or (2) putting a limit on the amount of time used for **Clock1.TimerInterval** using the **max** block from the Math drawer.
- Visually indicate which note is playing by changing the appearance of the button—for example, by changing its Text, BackgroundColor, or ForegroundColor.
Summary
Here are some of the ideas we’ve covered in this tutorial:
- You can play different audio files from a single **Sound** component by changing its **Source** property. This enabled us to have one **Sound** component instead of eight. Just be sure to load the sounds at initialization to prevent delays (Figure 9-6).
- Lists can provide a program with memory, with a record of user actions stored in the list and later retrieved and reprocessed. We used this functionality to record and play back a song.
- The **Clock** component can be used to determine the current time. Subtracting two time values gives us the amount of time between two events.
- The **Clock**’s **TimerInterval** property can be set within the program, such as how we set it to the duration of time between the starts of two notes.
- It is not only possible but sometimes desirable for a procedure to make a call to itself. This is a powerful technique called *recursion*. When writing a recursive procedure, make sure that there is a base case in which the procedure ends, rather than calling itself, or the program will loop infinitely.
Agents and Artefacts for Multiple Models coordination.
Objective and decentralized coordination of simulators.
Julien Siebert, Laurent Ciarletta, Vincent Chevrier
Julien Siebert
INRIA, Centre Nancy Grand Est
julien.siebert@loria.fr
Laurent Ciarletta
Ecole Nationale Supérieure des Mines de Nancy
laurent.ciarletta@loria.fr
Vincent Chevrier
Université Henri Poincaré (Nancy 1)
vincent.chevrier@loria.fr
LORIA - Campus Scientifique - BP 239 - 54506 Vandoeuvre-lès-Nancy Cedex
ABSTRACT
Complex systems simulation implies the interaction of different scientific fields. However, most of the time the people involved in the simulation process do not know intricate distributed simulation tools and only care about their own domain modelling. We propose a framework (called AA4MM) to build a simulation as a society of interacting models. The main goal is to reuse existing models and simulators and to make them interact. The coordination challenges remain to be solved by the AA4MM framework, so that simulation design and implementation stay as simple as possible. In this paper, we present the coordination model, which intends to decentralize the simulator interactions. We propose to use the environment, through the notion of artefact, in order to deal with the coherence, compatibility and coordination issues that appear in parallel simulations.
1. INTRODUCTION
A complex system is composed of a set of interacting parts that, as a whole, exhibits properties that cannot be predicted from the simple sum of the individual parts' properties. Human economies, social structures, climate and ecosystems are good examples of complex systems. Equation-based modelling cannot represent interactions among components and their impact on the global system behaviour. The multiagent approach offers an interesting alternative [12].
Complex systems modelling also involves the interaction of different scientific domains or different abstraction levels. In biology for example, in order to understand and to predict the impact of a molecule on a specific organism, both models from chemistry (chemical reaction) and biology (cellular, tissue, organ) are needed [10]. This way, different specialists work on the same simulation. Each one brings its own models and simulators.
The challenge is then to allow those scientists to build a complex simulation from their own building blocks. Moreover, we should keep in mind that they are probably not familiar with the intricate modelling and simulation tools and theories. One way to facilitate the design and the implementation of such a simulation is to build it as a society of interacting models. Models should be seen as components we can weave together (as in component-based software engineering).
We propose a framework (called AA4MM) to build a simulation as a society of interacting models. The main goal is to reuse existing models and simulators and to make them interact. However, the main constraint is that the people involved in the simulation process do not know intricate distributed simulation tools and only have to care about their own domain modelling. The coordination challenges remain to be solved by the AA4MM framework.
2. CHALLENGES AND RELATED WORKS
Contrary to the work in [5], where all models are integrated into the DEVS formalism and run in a single simulator, we assume that each model has been created independently and implemented in its own simulator. Consequently, each model has its own representation of time and data. In the same way, each simulator manages its own execution. The following sections list issues that appear when coupling different models and simulators.
2.1 Coherence and compatibility issues
2.1.1 Coherence between models
Scales or dimensions in which a piece of data is represented can differ from one model to another. For example, a position \( \text{pos}_1 = \langle x, y, z \rangle \) (with \( x, y \) and \( z \) expressed in \textit{meters}) in a first model can be represented in only two dimensions in a second one: \( \text{pos}_2 = \langle x', y' \rangle \) (with \( x' \) and \( y' \) expressed in \textit{kilometers}). A solution proposed in [1] is to define operations (projection, discretisation, reduction) in order to achieve this coherence.
Moreover, each model can have its own time representation. We need to ensure, for example, that a time value \( t_1 \in \mathbb{R}^+ \) in a first model corresponds to a time value \( t_2 \in \mathbb{N} \) in a second one. A possible solution is to express an operation that makes the correspondence between both time values.
2.1.2 Compatibility between simulators
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
\( \text{SAC’10 March 22-26, 2010, Sierre, Switzerland.} \)
Copyright 2010 ACM 978-1-60558-638-0/10/03 ...$10.00.
Each simulator could implement a single piece of data in its own way (integer, float...) or some simulators may not implement all aspects of a given model. These challenges are discussed in [6]. A solution is to add an entity (a program) between the simulators. Its role is to translate the data in order to respect the compatibility between simulation tools.
2.2 Coordination issues
The goal of time management in distributed simulation is to ensure that simulation events (or steps) are executed in the correct order. Two main approaches have been proposed to coordinate interacting simulators: optimistic and conservative [2]. We assume that the existing simulators we reuse have been developed independently and were not designed for distributed simulation. So they do not have a roll-back capability (see the optimistic approach): they cannot go back in the simulation process in order to take new input events into consideration. As a consequence, we focus on the conservative mode. In the latter, the coordination model has to determine when a simulation event (or step) is safe to process.
**Definition 2.2.1.** For a model $M_i$, a simulation event (or step) associated with the current simulation time $ct_i$ is said to be safe to process if all the input events received after this event's execution are timestamped with a time value $> ct_i$.
Conservative coordination can be done by using a central and global scheduler that synchronizes all the simulators, as in [1, 6, 3]. These solutions hinder the reuse of existing models and simulators, since these need substantial modifications in order to be controlled by the scheduler. Moreover, a global scheduler imposes a bottleneck that makes it hard for the simulation to scale up in terms of either system size or number of abstraction levels. In section 3, we remove this central scheduler and propose a decentralized coordination model.
3. PROPOSAL
3.1 Hypothesis and requirements
We target neither on-line nor real-time simulations, which directly interact with reality. Instead we focus on a model that fits our initial requirements. We try to facilitate the design of a society of interacting models by suppressing the global scheduler and by modifying the existing models and simulators as little as possible.
We propose to use objective coordination. That is, coordination does not rely on a single entity but is provided by the surrounding environment. This method is well known in the field of situated multiagent systems [11, 7] (stigmergy) or in parallel systems [8] (shared memory). This provides a way to loosely couple and to coordinate the interacting processes. In our case, the simulators interact through the set of data they exchange.
3.2 Validity interval and coordination
3.2.1 Description
Each model $M_i$ holds a current simulation time value $ct_i$. The simulator $S_i$ knows the simulation time value of the next event to be processed, $nt_i$. When a model $M_i$ is executed, it produces data $\delta_i$ at time $ct_i$. These data $\delta_i$ will not change until the next time $M_i$ is executed (at time $nt_i$). As a result, we can say that $\delta_i$ are valid for the simulation time interval $\Gamma_i = [ct_i, nt_i]$. A simulator can execute a model if the simulation event to process is safe (see definition 2.2.1). Then, the issue for the simulator is to know when an event is safe. It can be solved if the simulators exchange both the data and the corresponding validity interval: $< \delta_i; \Gamma_i >$. Indeed, a simulation event is safe to process if and only if:
$$\forall j \neq i : ct_i \in \Gamma_j$$
This way, the input data $< \delta_j; \Gamma_j >$ are safe for the simulator $S_i$ and the latter can process the simulation event at time $ct_i$.
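The safety condition can be stated compactly in code. A minimal Python sketch (the function name `is_safe` and the dictionary layout are our own, not part of the AA4MM framework) checks a simulator's current time against the validity intervals received from all other models:

```python
def is_safe(ct_i, intervals):
    """Return True if the event at time ct_i is safe to process.

    intervals maps each other model j to the validity interval
    Gamma_j = (ct_j, nt_j) attached to its last output data.
    """
    return all(lo <= ct_i <= hi for (lo, hi) in intervals.values())

# M_i at time 3, partners' data valid on [2, 5] and [3, 4]: safe.
print(is_safe(3, {1: (2, 5), 2: (3, 4)}))   # True
# At time 5 the second partner's data have expired: not safe yet.
print(is_safe(5, {1: (2, 5), 2: (3, 4)}))   # False
```

In the second call the simulator must wait for fresher data from model 2 before it may proceed, which is exactly the conservative blocking behaviour described above.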
3.2.2 Properties
We have developed a formal specification (in Event-B) of this coordination model. Describing the whole specification is out of the scope of this article; however, it is available in [9]. This formal specification is used to prove that coordination occurs between models and that the system is alive and deadlock-free with $k$ models ($k \in \mathbb{N}$). A sketch of the proof, using *reductio ad absurdum*, is presented hereafter.
Assume, within this coordination model, that a simulator $S_i$ is stopped at time $ct_i$, waiting for input data $< \delta_j; \Gamma_j >$ from another simulator $S_j$ ($i \neq j$). This simulator cannot send data because it is also stopped, at a time $ct_j < ct_i$, likewise waiting for input data. Two cases appear. Either the simulator $S_j$ is waiting for input data from $S_i$; in this case $S_i$ is waiting for itself, which contradicts our assumption. Or the simulator $S_j$ is waiting for input data $< \delta_k; \Gamma_k >$ from another simulator $S_k$ at time $ct_k < ct_j < ct_i$ (with $i \neq j \neq k$); in this case, we come back to the very first case. Since the number of simulators and the number of simulation events are assumed to be finite, and since $\forall i : ct_i \geq 0$, we can show by recursion that the latter case implies the initial conditions were not set correctly, and then that the simulation cannot happen. This contradicts our initial assumption. We address the initial conditions on an example in section 5.2.
4. FRAMEWORK OVERVIEW
In this section, we present how the A&A paradigm is used to take up the coupling challenges and how it implements the coordination model.
4.1 Architecture
In agent-oriented software engineering, agents are autonomous entities that interact with each other and with their environments in order to solve a given task [12]. In the A&A paradigm [4], artefacts are used to design and to implement the interactions. They can be seen as tools used by the agents. In the case of building a simulation as a society of interacting models and simulators, the agents are in charge of the models' execution and they interact through some specific artefacts. The *coupling-artefact* allows the agents to exchange data. It is in charge of coherence and compatibility issues and it implements the coordination model. The *model-artefact* allows the agents to initialize and execute the model, to send input data and to get output data. Figure 1 depicts this architecture. An implementation example is given in section 5, and the corresponding agents and artefacts are described by figure 4.
4.2 Artefacts functions
In the A&A paradigm [4], artefacts hold functions that agents can use. In this section, we present details about the model-artefact and the coupling-artefacts.
4.2.1 Model-artefact
The model-artefact's role is to allow the agents to operate on a given model. We propose six functions. \texttt{Init()} creates a model instance and initializes it. \texttt{Run()} runs the simulation for one step or one event. Then, in order to exchange data between models, the next functions are \texttt{getOutputData()} and \texttt{setInputData()}. Finally, as they are needed for the coordination process, the last functions are \texttt{getCurrentTime()}, which returns \(ct_i\), and \texttt{getNextTime()}, which returns \(nt_i\) (cf. section 3.2).
4.2.2 Coupling-artefact
The coupling-artefact allows the agents to post() and read() data \(\delta_i\). However, this artefact also prepares and filters the data. Thus, the post() function attaches the validity interval \(\Gamma_i\) to the data \(\delta_i\). The read() function includes the guard condition \(ct_i \in \Gamma_j\) to coordinate models; read() only returns valid, last-produced data to the agent. Moreover, it is possible to add operations (as in [1]) in order to deal with coherence and compatibility issues (cf. section 2).
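As an illustration, here is a minimal Python sketch of a coupling-artefact. The class and method names are our own, modelled on post() and read(); a condition variable stands in for the shared-memory substrate the framework actually uses. read() blocks until the stored datum is valid at the reader's time, realizing the guard $ct_i \in \Gamma_j$:

```python
import threading

class CouplingArtefact:
    """Holds one datum <delta_j, Gamma_j> and enforces the read guard."""

    def __init__(self):
        self._cond = threading.Condition()
        self._delta = None
        self._gamma = None            # validity interval (ct_j, nt_j)

    def post(self, delta, ct_j, nt_j):
        # Attach the validity interval to the data and publish them.
        with self._cond:
            self._delta, self._gamma = delta, (ct_j, nt_j)
            self._cond.notify_all()

    def read(self, ct_i):
        # Guard ct_i in Gamma_j: wait until the stored datum is valid
        # at the reader's current simulation time.
        with self._cond:
            while self._gamma is None or not (self._gamma[0] <= ct_i <= self._gamma[1]):
                self._cond.wait()
            return self._delta

ca = CouplingArtefact()
ca.post("sheep positions", 0, 1)
print(ca.read(1))   # sheep positions
```

A reader whose time lies outside the posted interval simply blocks until the producer posts fresher data, which is how the model-agents synchronize without any central scheduler.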
4.3 Agents behaviour
The role of the agent (a model-agent in our case) is to run a specific task. The artefacts are tools it can use. The very first role of a model-agent is to execute the model and to read and post data. We distinguish three major phases. First, the agent has to create and initialize the artefacts it is going to use (at least one model-artefact and one coupling-artefact). Then, the agent manages the simulation process as described in figure 2, looping over the five steps. Finally, once the simulation is over, the agent can retrieve results and save them for analysis.
5. IMPLEMENTATION EXAMPLE
In this section, we present an example of a simulation made of different interacting simulators. We choose to use Netlogo [14] since it is easy to understand and well known. Note that each model is executed independently in its own Netlogo instance, i.e. there are as many simulators running in parallel as interacting models. Implementation details and other use cases are discussed in section 6.
5.1 Coordination of existing Netlogo models
The example is the following: assume we want to build a simulation of a sheepfold. We are interested into the influence between the sheep movements and the dynamic of sheep grazing. Models already exist for each dynamic [13, 15]. The first one \(M_1\), the sheep model, depicts the sheep movements: sheep move randomly, lose some energy and shepherds try to gather them. The second model \(M_2\), the grass model, represents sheep\(^1\) eating grass and gaining energy. We want these two dynamics to influence each other. That is, \(M_1\) must send the sheep positions to \(M_2\) while \(M_2\) gives the sheep energy levels back to \(M_1\) (see figure 3). The architecture with A&A concepts is represented by figure 4.
\(^1\)Originally rabbits, we changed species.
The \texttt{Run()} function, which runs one simulation step of $M_1$, calls the "go" procedure. \texttt{getCurrentTime()} and \texttt{getNextTime()}, which return $ct_1$ and $nt_1$, report the number of "ticks" and "ticks + 1" (as we process the simulation step by step). The \texttt{getData()} function, which returns the sheep positions $\delta_1$, reports all sheep present in the model and gathers their positions. \texttt{setInputData()} sets into $M_1$ the sheep energy $\delta_2$ produced by $M_2$. This is done by invoking the "set energy" Netlogo command on all the sheep in $M_1$.
5.1.2 Coupling-artefacts design and architecture overview
Once the dependency network is done (figure 3), we can design the coupling-artefacts. We propose that each coupling-artefact is in charge of only one set of data $< \delta_i, \Gamma_i >$. In our example, one coupling-artefact, $cA_1$, is in charge of the sheep positions $< \delta_1, \Gamma_1 >$. The other one, $cA_2$, is in charge of the sheep energy levels $< \delta_2, \Gamma_2 >$.
This way, the model-agent in charge of $M_1$ can post sheep positions $< \delta_1, \Gamma_1 >$ to $cA_1$ and read sheep energy levels $< \delta_2, \Gamma_2 >$ from $cA_2$. (Cf section 3.2).
Finally, we link each model-agent with its dedicated artefacts. Thus, the implementation strictly follows the initial dependency network. The whole architecture is represented in figure 4.
5.2 Model and simulators coordination
We saw on figure 3 that each model is waiting for data from the other one. In order to bootstrap the simulation, each model-agent must post initial data to the coupling-artefacts. Initial sheep positions $< \delta_1, \Gamma_1 >$ and initial sheep energy levels $< \delta_2, \Gamma_2 >$ have to be sent to their respective coupling-artefacts before the simulation process begins.
Then, the models execution follows the process described by figure 2. Once the model-agent 1 has read the sheep energy levels from the coupling-artefact 2, it sends them to the sheep model-artefact, executes $M_1$, gets the sheep positions and posts them to the coupling-artefact 1. On the other side, the model-agent 2 reads and sends the sheep positions to the grass model-artefact, executes $M_2$, gets sheep energy levels and posts them. In fact model-agents wait for each other and synchronize themselves thanks to the exchanged data $\delta_i$ present in their environment.
5.3 Scales differences
Until now, we have assumed that time and space scales in both models $M_1$ and $M_2$ were the same. In the next sections, we present how that framework is useful to make models with different scales interact.
5.3.1 Different space scales
It may not be necessary to represent grass as precisely as sheep. That is, one patch of grass may correspond to a square of $2 \times 2$ patches in the sheep model $M_1$ (cf. figure 5). Since it is not possible to change the patch size in the grass model $M_2$, the sheep positions produced by $M_1$ no longer fit the space in $M_2$. As a consequence, we need to add an operation in the coupling-artefact $cA_1$, applied when the grass model-agent reads the sheep positions. This operation consists in dividing each sheep position coordinate by a factor of 2.
Here, we address a challenge due to the coherence of the exchanged data. We only modify the entity in charge of that issue: the coupling-artefact. However, since we have altered the space in the grass model $M_2$, we may also want to change the model behaviour. For example, how the grass on a patch is eaten could now be a function of the number of sheep on that patch. To do that, we only need to change the model itself; there is no need to change either the model-artefact or the model-agent.
5.3.2 Different time scales
The grass model $M_2$ may no longer be executed step by step but two steps at a time, while the sheep model execution remains step by step (cf. figure 6). This is related to the model execution process, so we modify the grass model-agent's \texttt{Run()} and \texttt{getNextTime()} functions. That is, instead of calling the "go" Netlogo procedure only once in the \texttt{Run()} function, the model-agent calls it twice. Then \texttt{getNextTime()} now returns the number of "ticks + 2". These are the only modifications needed, since the coordination occurs only through the time values given by the \texttt{getCurrentTime()} and \texttt{getNextTime()} functions.
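The adaptation amounts to a thin wrapper around the simulator. In this hypothetical Python sketch (class and method names are our own; `go` stands in for invoking the NetLogo "go" procedure), only `run()` and `get_next_time()` differ from the step-by-step version:

```python
class GrassModelAgent:
    """Sketch of a model-agent advancing its model two ticks per execution."""

    def __init__(self, go):
        self._go = go      # callable that runs one NetLogo simulation step
        self.ticks = 0

    def run(self):
        # Call the "go" procedure twice instead of once.
        self._go()
        self._go()
        self.ticks += 2

    def get_current_time(self):
        return self.ticks

    def get_next_time(self):
        # The next data this model produces will be valid from ticks + 2.
        return self.ticks + 2
```

Because the coordination relies only on the two time accessors, the partner simulators need no change at all: they simply see wider validity intervals.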
6. DISCUSSION
The whole framework has been developed in Java, since this makes integration with the Java Messaging Service platform\footnote{Java Messaging Service. http://java.sun.com/products/jms/} and Netlogo easier. The JMS platform is used for shared-memory purposes. In this article, we present an example based upon the Netlogo platform that makes only two simulators (and their models) interact. Due to space constraints, we do not present all the features offered by the framework. Indeed, coupling and synchronizing the model-agents through their environment greatly simplifies the addition of a new model. We have made experiments with three Netlogo models (both $M_1$ and $M_2$ plus a model of wolf predation).
Changing the dependences between the models is also simplified by the use of one coupling-artefact for each kind of dependence. That is, if $M_1$ and $M_2$ both depend on the data $\delta_3$ provided by a third model $M_3$, we just need to build a coupling-artefact in charge of $\delta_3$. $M_1$ and $M_2$ will read that data from this new coupling-artefact.
In this article, we only talk about step-by-step simulation. We are currently working on mobile ad hoc network (MANET) simulations in which an event-driven simulator interacts with a step-by-step multiagent simulator. The coordination model is exactly the same as the one described here. No additional modification is needed in order to integrate an event-driven simulator into the AA4MM framework.
The technical details, the examples and source code are available on the framework webpage.
7. CONCLUSION
We propose a framework (called AA4MM) to build a simulation as a society of interacting models. The main goal is to reuse existing models and simulators and to make them interact, under the constraint that the people involved in the simulation process need not know intricate distributed-simulation tools and only care about modelling their own domain. The coordination challenges are therefore left to the AA4MM framework.
In this paper we present the coordination model that decentralizes the simulator interactions. We propose to use the environment, through the notion of artefact, to deal with the coherence, compatibility and coordination issues that arise in parallel simulations. We have developed a framework that aims to greatly simplify the simulation of complex systems by easily building a society of interacting models and simulators.
We do not target implementation performance. Parallel simulations have the advantage of scaling up, at least in theory: large systems or numerous abstraction levels may be simulated. However, the data exchange between simulators can cause a huge overhead and slow down the whole simulation. These scalability issues are planned as future work.
We have not addressed open systems: we only consider interacting models whose agents do not enter or leave their model. We believe that when an agent enters, leaves, or moves from one model to another, exchanging data is not sufficient, and we plan to extend our framework to deal with this issue. The challenge is to respect our requirement of coordination via the environment.
This work has been motivated by our initial studies on the interactions between human behaviour and dynamic networks. Now that we have the tools to reuse existing models and simulators, we plan to focus on experiments in this domain.
8. ACKNOWLEDGEMENTS
The authors would like to thank the ANR SARAH project and La Région Lorraine for their financial support. The formal specification of the coordination in Event-B has been developed in collaboration with Joris Rehm\(^4\); the JMS implementation has been done in collaboration with Virginie Galtier\(^5\).
9. REFERENCES
\(^3\)http://www.loria.fr/~siebertjas/mm/aa4mm.html
\(^4\)joris.rehm@loria.fr; MOSEL Team, LORIA.
\(^5\)virginie.galtier@supelec.fr; Supelec Metz.
Chapter 7
Propositional Satisfiability Techniques
Dana S. Nau
University of Maryland
Fall 2009
Motivation
- Propositional satisfiability: given a boolean formula
» e.g., \((P \lor Q) \land (\neg Q \lor R \lor S) \land (\neg R \lor \neg P)\),
does there exist a model
» i.e., an assignment of truth values to the propositions
that makes the formula true?
- This was the very first problem shown to be NP-complete
- Lots of research on algorithms for solving it
◆ Algorithms are known for solving all but a small subset in average-case polynomial time
- Therefore,
◆ Try translating classical planning problems into satisfiability problems, and solving them that way
Outline
- Encoding planning problems as satisfiability problems
- Extracting plans from truth values
- Satisfiability algorithms
- Davis-Putnam
- Local search
- GSAT
- Combining satisfiability with planning graphs
- SatPlan
Overall Approach
- A *bounded planning problem* is a pair \((P,n)\):
- \(P\) is a planning problem; \(n\) is a positive integer
- Any solution for \(P\) of length \(n\) is a solution for \((P,n)\)
- Planning algorithm:
- Do iterative deepening like we did with Graphplan:
- for \(n = 0, 1, 2, \ldots\),
- encode \((P,n)\) as a satisfiability problem \(\Phi\)
- if \(\Phi\) is satisfiable, then
- From the set of truth values that satisfies \(\Phi\), a solution plan can be constructed, so return it and exit
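The iterative-deepening loop above can be sketched directly; `encode`, `solve` and `extract_plan` are hypothetical stand-ins for the encoding and SAT-solving machinery described in the rest of the chapter:

```python
# Schematic bounded-planning loop: try plan lengths n = 0, 1, 2, ... until the
# encoded formula becomes satisfiable. encode(P, n) builds the formula Phi,
# solve(Phi) returns a satisfying assignment or None, and extract_plan turns
# that assignment into a plan. These callables are illustrative placeholders.

def plan_by_sat(P, encode, solve, extract_plan, max_n=10):
    for n in range(max_n + 1):
        phi = encode(P, n)
        model = solve(phi)
        if model is not None:
            return extract_plan(model, n)
    return None  # no solution up to the bound
```

The `max_n` cutoff is an added safeguard for the sketch; the slides' loop is unbounded.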
Notation
- For satisfiability problems we need to use propositional logic
- Need to encode ground atoms into propositions
- For set-theoretic planning we encoded atoms into propositions by rewriting them as shown here:
- Atom: $\text{at}(r1,\text{loc}1)$
- Proposition: $\text{at-r1-loc1}$
- For planning as satisfiability we’ll do the same thing
- But we won’t bother to do a syntactic rewrite
- Just use $\text{at}(r1,\text{loc}1)$ itself as the proposition
- Also, we’ll write plans starting at $a_0$ rather than $a_1$
- $\pi = \langle a_0, a_1, \ldots, a_{n-1} \rangle$
Fluents
● If \( \pi = \langle a_0, a_1, \ldots, a_{n-1} \rangle \) is a solution for \((P,n)\), it generates these states:
\[
s_0, \quad s_1 = \gamma(s_0,a_0), \quad s_2 = \gamma(s_1,a_1), \quad \ldots, \quad s_n = \gamma(s_{n-1}, a_{n-1})
\]
● Fluent: proposition saying a particular atom is true in a particular state
◆ at(r1,loc1,i) is a fluent that’s true iff at(r1,loc1) is in \(s_i\)
◆ We’ll use \(l_i\) to denote the fluent for literal \(l\) in state \(s_i\)
» e.g., if \(l = \text{at}(r1,loc1)\)
then \(l_i = \text{at}(r1,loc1,i)\)
◆ \(a_i\) is a fluent saying that \(a\) is the \(i\)’th step of \(\pi\)
» e.g., if \(a = \text{move}(r1,loc2,loc1)\)
then \(a_i = \text{move}(r1,loc2,loc1,i)\)
Encoding Planning Problems
- Encode \((P, n)\) as a formula \(\Phi\) such that
\(\pi = \langle a_0, a_1, \ldots, a_{n-1} \rangle\) is a solution for \((P, n)\) if and only if
\(\Phi\) can be satisfied in a way that makes the fluents \(a_0, \ldots, a_{n-1}\) true
- Let
- \(A = \{\text{all actions in the planning domain}\}\)
- \(S = \{\text{all states in the planning domain}\}\)
- \(L = \{\text{all literals in the language}\}\)
- \(\Phi\) is the conjunct of many other formulas …
Formulas in $\Phi$
- Formula describing the initial state:
\[ \bigwedge \{ l_0 \mid l \in s_0 \} \;\land\; \bigwedge \{ \neg l_0 \mid l \in L - s_0 \} \]
- Formula describing the goal:
\[ \bigwedge \{ l_n \mid l \in g^+ \} \;\land\; \bigwedge \{ \neg l_n \mid l \in g^- \} \]
- For every action $a$ in $A$, formulas describing what changes $a$ would make if it were the $i$’th step of the plan:
\[ a_i \Rightarrow \bigwedge \{ p_i \mid p \in \text{Precond}(a) \} \;\land\; \bigwedge \{ e_{i+1} \mid e \in \text{Effects}(a) \} \]
- Complete exclusion axiom:
- For all actions $a$ and $b$, formulas saying they can’t occur at the same time
\[ \neg a_i \lor \neg b_i \]
- this guarantees there can be only one action at a time
- Is this enough?
Frame Axioms
- **Frame axioms:**
- Formulas describing what *doesn’t* change between steps $i$ and $i+1$
- Several ways to write these
- One way: *explanatory frame axioms*
- One axiom for every literal $l$
- Says that if $l$ changes between $s_i$ and $s_{i+1}$, then the action at step $i$ must be responsible:
$$
\big( \neg l_i \land l_{i+1} \;\Rightarrow\; \bigvee \{ a_i \mid a \in A,\ l \in \text{effects}^+(a) \} \big)
\;\land\;
\big( l_i \land \neg l_{i+1} \;\Rightarrow\; \bigvee \{ a_i \mid a \in A,\ l \in \text{effects}^-(a) \} \big)
$$
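As a sketch, these explanatory frame axioms can be generated mechanically from an action table. The clause representation below, a list of `(sign, symbol, step)` literals, is an assumption of this illustration, not the book's notation:

```python
# Generate explanatory frame axioms as clauses. For every atom l and step i:
#   (not l_i and l_{i+1}) => OR { a_i : l in effects+(a) }
#   (l_i and not l_{i+1}) => OR { a_i : l in effects-(a) }
# In clause form the first becomes  l_i  or  not l_{i+1}  or  a_i or ...

def frame_axioms(atoms, actions, n):
    """actions maps a name to (effects_plus, effects_minus).
    Returns a list of clauses; a clause is a list of (sign, symbol, step)."""
    clauses = []
    for i in range(n):
        for l in atoms:
            adders = [a for a, (eff_p, _) in actions.items() if l in eff_p]
            deleters = [a for a, (_, eff_m) in actions.items() if l in eff_m]
            clauses.append([(True, l, i), (False, l, i + 1)]
                           + [(True, a, i) for a in adders])
            clauses.append([(False, l, i), (True, l, i + 1)]
                           + [(True, a, i) for a in deleters])
    return clauses
```

On the robot example of the following slides this produces exactly the four axioms listed there.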
Example
- Planning domain:
- one robot $r_1$
- two adjacent locations $l_1, l_2$
- one operator (move the robot)
- Encode $(P,n)$ where $n = 1$
- Initial state: $\{at(r_1,l_1)\}$
Encoding: $at(r_1,l_1,0) \land \neg at(r_1,l_2,0)$
- Goal: $\{at(r_1,l_2)\}$
Encoding: $at(r_1,l_2,1) \land \neg at(r_1,l_1,1)$
- Operator: see next slide
Example (continued)
- Operator: move(r,l,l’)
- precond: at(r,l)
- effects: at(r,l’), ¬at(r,l)
Encoding:
\[
\begin{align*}
\text{move}(r1,l1,l2,0) & \Rightarrow \text{at}(r1,l1,0) \land \text{at}(r1,l2,1) \land \neg \text{at}(r1,l1,1) \\
\text{move}(r1,l2,l1,0) & \Rightarrow \text{at}(r1,l2,0) \land \text{at}(r1,l1,1) \land \neg \text{at}(r1,l2,1) \\
\text{move}(r1,l1,l1,0) & \Rightarrow \text{at}(r1,l1,0) \land \text{at}(r1,l1,1) \land \neg \text{at}(r1,l1,1) \\
\text{move}(r1,l2,l2,0) & \Rightarrow \text{at}(r1,l2,0) \land \text{at}(r1,l2,1) \land \neg \text{at}(r1,l2,1) \\
\text{move}(l1,r1,l2,0) & \Rightarrow \ldots \\
\text{move}(l2,r1,l1,0) & \Rightarrow \ldots \\
\text{move}(l1,l2,r1,0) & \Rightarrow \ldots \\
\text{move}(l2,l1,r1,0) & \Rightarrow \ldots
\end{align*}
\]
- How to avoid generating the last four actions?
- Assign data types to the constant symbols like we did for state-variable representation
Example (continued)
- **Locations:** \( l_1, l_2 \)
- **Robots:** \( r_1 \)
- **Operator:** \( \text{move}(r : \text{robot}, l : \text{location}, l' : \text{location}) \)
- **precond:** \( \text{at}(r,l) \)
- **effects:** \( \text{at}(r,l'), \neg \text{at}(r,l) \)
**Encoding:**
\[
\begin{align*}
\text{move}(r_1,l_1,l_2,0) \Rightarrow & \quad \text{at}(r_1,l_1,0) \land \text{at}(r_1,l_2,1) \land \neg \text{at}(r_1,l_1,1) \\
\text{move}(r_1,l_2,l_1,0) \Rightarrow & \quad \text{at}(r_1,l_2,0) \land \text{at}(r_1,l_1,1) \land \neg \text{at}(r_1,l_2,1)
\end{align*}
\]
Example (continued)
- Complete-exclusion axiom:
\[ \neg \text{move}(r1,l1,l2,0) \lor \neg \text{move}(r1,l2,l1,0) \]
- Explanatory frame axioms:
\[ \neg \text{at}(r1,l1,0) \land \text{at}(r1,l1,1) \Rightarrow \text{move}(r1,l2,l1,0) \]
\[ \neg \text{at}(r1,l2,0) \land \text{at}(r1,l2,1) \Rightarrow \text{move}(r1,l1,l2,0) \]
\[ \text{at}(r1,l1,0) \land \neg \text{at}(r1,l1,1) \Rightarrow \text{move}(r1,l1,l2,0) \]
\[ \text{at}(r1,l2,0) \land \neg \text{at}(r1,l2,1) \Rightarrow \text{move}(r1,l2,l1,0) \]
Extracting a Plan
- Suppose we find an assignment of truth values that satisfies \( \Phi \).
- This means \( P \) has a solution of length \( n \)
- For \( i=0,\ldots,n-1 \), there will be exactly one action \( a \) such that \( a_i = true \)
- This is the \( i \)'th action of the plan.
- Example (from the previous slides):
- \( \Phi \) can be satisfied with \( \text{move}(r1,l1,l2,0) = true \)
- Thus \( \langle \text{move}(r1,l1,l2,0) \rangle \) is a solution for \( (P,1) \)
- It’s the only solution - no other way to satisfy \( \Phi \)
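A sketch of this extraction step, assuming the satisfying assignment is given as a dictionary over `(name, step)` fluents (that encoding is an assumption of the sketch):

```python
# Given a satisfying assignment, pick the single true action fluent per step.
# The complete-exclusion axioms guarantee exactly one action per step.

def extract_plan(model, action_names, n):
    plan = []
    for i in range(n):
        true_here = [a for a in action_names if model.get((a, i), False)]
        assert len(true_here) == 1, "exclusion axioms guarantee one action"
        plan.append(true_here[0])
    return plan
```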
Planning
- How to find an assignment of truth values that satisfies $\Phi$?
- Use a satisfiability algorithm
- Example: the *Davis-Putnam* algorithm
- First need to put $\Phi$ into conjunctive normal form
$$
\Phi = D \land (\neg D \lor A \lor \neg B) \land (\neg D \lor \neg A \lor \neg B) \land (\neg D \lor \neg A \lor B) \land A
$$
- Write $\Phi$ as a set of *clauses* (disjuncts of literals)
$$
\Phi = \{\{D\}, \{\neg D, A, \neg B\}, \{\neg D, \neg A, \neg B\}, \{\neg D, \neg A, B\}, \{A\}\}
$$
- Two special cases:
- If $\Phi = \emptyset$ then $\Phi$ is always *true*
- If $\Phi = \{\ldots, \emptyset, \ldots\}$ then $\Phi$ is always *false* (hence unsatisfiable)
The Davis-Putnam Procedure
Backtracking search through alternative assignments of truth values to literals
- $\mu = \{\text{literals to which we have assigned the value TRUE}\}; \text{ initially empty}$
- if $\Phi$ contains $\emptyset$ then
- $\triangleright$ backtrack
- if $\Phi$ is empty then
- $\triangleright$ $\mu$ is a solution
- while $\Phi$ contains a clause that’s a single literal $l$
- $\triangleright$ add $l$ to $\mu$
- $\triangleright$ remove $l$ from $\Phi$
- select a Boolean variable $P$ in $\Phi$
- do recursive calls on
- $\Phi \land P$
- $\Phi \land \neg P$
```
Davis-Putnam(\Phi, \mu)
if \emptyset \in \Phi \text{ then return}
if \Phi = \emptyset \text{ then exit with } \mu
Unit-Propagate(\Phi, \mu)
select a variable $P$ such that $P$ or $\neg P$ occurs in $\Phi$
Davis-Putnam(\Phi \cup \{\{P\}\}, \mu)
Davis-Putnam(\Phi \cup \{\{\neg P\}\}, \mu)
```
```
Unit-Propagate(\Phi, \mu)
while there is a unit clause \{l\} in \Phi do
\mu \leftarrow \mu \cup \{l\}
for every clause $C \in \Phi$
if $l \in C$ then $\Phi \leftarrow \Phi \setminus \{C\}$
else if $\neg l \in C$ then $\Phi \leftarrow \Phi \setminus \{C\} \cup \{C \setminus \{\neg l\}\}$
end
```
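The pseudocode above can be rendered as a compact executable sketch. Clauses are frozensets of nonzero integers (positive for a variable, negative for its negation); branching adds a unit clause, exactly as in the recursive calls above:

```python
# Executable sketch of Davis-Putnam with unit propagation.
# A formula is a set of clauses; a clause is a frozenset of signed ints.

def unit_propagate(phi, mu):
    while True:
        units = [next(iter(c)) for c in phi if len(c) == 1]
        if not units:
            return phi, mu
        l = units[0]
        mu = mu | {l}
        new_phi = set()
        for c in phi:
            if l in c:
                continue                      # clause satisfied: drop it
            if -l in c:
                c = c - {-l}                  # literal falsified: shrink clause
            new_phi.add(frozenset(c))
        phi = new_phi

def davis_putnam(phi, mu=frozenset()):
    phi, mu = unit_propagate(phi, mu)
    if frozenset() in phi:
        return None                           # empty clause: backtrack
    if not phi:
        return mu                             # no clauses left: mu is a solution
    p = abs(next(iter(next(iter(phi)))))      # pick any variable occurring in phi
    for lit in (p, -p):
        result = davis_putnam(phi | {frozenset({lit})}, mu)
        if result is not None:
            return result
    return None
```

Note that the slide's example formula $\Phi$ is in fact unsatisfiable: unit propagation of $D$ and $A$ forces both $B$ and $\neg B$.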
Local Search
- Let $u$ be an assignment of truth values to all of the variables
- $\text{cost}(u, \Phi) =$ number of clauses in $\Phi$ that aren’t satisfied by $u$
- $\text{flip}(P, u) = u$ except that $P$’s truth value is reversed
- Local search:
- Select a random assignment $u$
- while $\text{cost}(u, \Phi) \neq 0$
- if there is a $P$ such that $\text{cost}(\text{flip}(P, u), \Phi) < \text{cost}(u, \Phi)$ then
- randomly choose any such $P$
- $u \leftarrow \text{flip}(P, u)$
- else return failure
- Local search is sound
- If it finds a solution it will find it very quickly
- Local search is not complete: can get trapped in local minima
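A direct executable rendering of this loop, on the same signed-integer clause representation; the `max_flips` bound and the seeded RNG are added safeguards for the sketch, not part of the slide's algorithm:

```python
import random

# Local search for SAT: flip one improving variable at a time.
# u is a dict var -> bool; a clause (set of signed ints) is unsatisfied
# when none of its literals holds under u.

def cost(u, phi):
    return sum(1 for c in phi
               if not any(u[abs(l)] == (l > 0) for l in c))

def local_search(phi, variables, max_flips=1000, seed=0):
    rng = random.Random(seed)
    u = {v: rng.choice([True, False]) for v in variables}
    for _ in range(max_flips):
        if cost(u, phi) == 0:
            return u
        improving = [v for v in variables
                     if cost({**u, v: not u[v]}, phi) < cost(u, phi)]
        if not improving:
            return None              # trapped in a local minimum
        v = rng.choice(improving)    # randomly choose any improving flip
        u[v] = not u[v]
    return None
```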
GSAT
- Basic-GSAT:
- Select a random assignment $u$
- while $\text{cost}(u, \Phi) \neq 0$
- choose a $P$ that minimizes $\text{cost}(\text{flip}(P, u), \Phi)$, and flip it
- Not guaranteed to terminate
- GSAT:
- restart after a max number of flips
- return failure after a max number of restarts
- The book discusses several other stochastic procedures
- One is Walksat
- works better than both local search and GSAT
- I’ll skip the details
Discussion
- Recall the overall approach:
- for $n = 0, 1, 2, \ldots$
- encode $(P,n)$ as a satisfiability problem $\Phi$
- if $\Phi$ is satisfiable, then
- From the set of truth values that satisfies $\Phi$, extract a solution plan and return it
- How well does this work?
- By itself, not very practical (takes too much memory and time)
- But it can be combined with other techniques
- e.g., planning graphs
SatPlan
- SatPlan combines planning-graph expansion and satisfiability checking, roughly as follows:
- for $k = 0, 1, 2, \ldots$
- Create a planning graph that contains $k$ levels
- Encode the planning graph as a satisfiability problem
- Try to solve it using a SAT solver
- If the SAT solver finds a solution within some time limit,
- Remove some unnecessary actions
- Return the solution
- Memory requirement still is combinatorially large
- but less than what’s needed by a direct translation into satisfiability
- BlackBox (predecessor to SatPlan) was one of the best planners in the 1998 planning competition
- SatPlan was one of the best planners in the 2004 and 2006 planning competitions
|
Structured Specification of Model Interpreters
Gabor Karsai
Institute for Software-Integrated Systems
Vanderbilt University
PO-Box 1829
Nashville, TN 37235, USA
gabor@vuse.vanderbilt.edu
Abstract
Model interpreters play an essential role in model-integrated systems: they transform domain-specific models into executable models. The state-of-the-art of model interpreter writing needs to be advanced to enhance the reusability and maintainability of this software. This paper presents an approach which makes this possible through the use of structured specifications. These specifications let the programmer express traversal strategies and visitation actions in very high-level terms. From these specifications efficient traversal code can be automatically generated.
1. Introduction
Model-integrated computing [SZ97] relies on the interpretation and use of domain-specific models in runtime environments. The domain models can be considered as objects that are mapped into run-time objects. The mapping can take many forms, ranging from configuring the attribute values of run-time objects to actual generation of code that defines classes and creates instances of run-time objects. This mapping process is performed by a component called the model interpreter that acts as a transformation engine. While the input of the interpreter is known (the model objects), the output is difficult to define in general: it can be, for instance, a text file, a list of objects created in a running system, or a sequence of messages in a distributed system. The exact nature of the output of the interpreter is specific to the domain and the run-time environment.
Writing a model interpreter is a non-trivial task. One has to understand the structure of the models (i.e., the data model of the model database), the intricate details of the expected output, and the relationship between the two. Next, this understanding has to be translated into software that performs the desired mapping. Additionally, the software has to perform the transformation with reasonable performance.
The model interpretation process is somewhat similar to the back-end of compilers. The models capture information in a structured form, typically in the form of hierarchically organized objects. This graph of objects should be traversed, perhaps transformed, and output generated. While the process is very easy to describe in general, it is highly non-obvious how it can be implemented.
This paper shows a generic technique, which helps in the writing of model interpreters by offering a high-level, concise notation for capturing the relevant steps of an interpreter in a structured form. The technique does not generate the entire model interpreter. This would be a rather impossible task because of the widely different outputs expected from an interpreter. Instead, it focuses on the “mechanistic aspects” of model interpretation and simplifies the task of the interpreter writer by generating a large and uninteresting portion of the interpreter code automatically.
2. Background
Model interpreters are transformation programs that walk a graph (the model objects), and perform actions during this process. This activity is, of course, performed routinely in various software systems. Indeed, probably it is fair to say that it is one of the most frequently occurring tasks in any system that transforms data.
Attribute Grammars
The first and foremost application of graph traversal and actions is the code-generator part, the "back-end", of compilers [AH86]. After building the syntax tree from the input text, compilers perform various analysis steps on the data structure (typically for the purpose of semantic checks), then traverse it and output the generated code. Compiler research literature provides a great source of efficient traversal and transformation algorithms. On the other hand, the area of automatically generated compilers provides some interesting technologies for the structured specification of graph traversals.
One widely and successfully used approach is Attribute Grammars (AGs) [AH86]. AGs, invented by Knuth, tie semantic specifications to the syntactic rules of a programming language. Suppose the syntax of a language is specified in the form of a context-free grammar, with production rules, terminal and non-terminal symbols, and a start symbol. The parser stage of a compiler builds a syntax tree from the input, which represents, in tree form, what production rules have been applied from the grammar, starting from the start symbol. The application of these rules leads to the sequence of terminal symbols that equals the input string. The syntax tree captures the syntactical structure of the input to the compiler, provided the input was syntactically correct. Obviously, one grammatical production rule may appear many times in the tree, showing how a non-terminal (on the left side of the rule and at the local root in the tree) was used to "generate" non-terminals and terminals (on the right side of the rule and at the local leaves). With each symbol in the grammar we can associate a set of *attribute values*, and in the rules we define how these values are to be calculated. Through these calculations attribute values can depend on other attribute values, including attribute values of other symbols. Because attributes are attached to symbols in the production rules, they can be considered as values associated with the nodes in the parse tree. Attributes can be *inherited* or *synthesized*. The value of a synthesized attribute is calculated from the attribute values of the children of a node in the parse tree, while the value of an inherited attribute is calculated from the attribute values of the parent and the siblings of a node. Note that the inherited/synthesized properties of attributes implicitly define a *data dependency* among the attribute values. This dependency implicitly describes a traversal sequence on the nodes of the parse tree.
Thus, attribute specifications determine how the tree must be walked, and imply a visitation sequence for calculating attributes. Circular dependencies lead to infinite loops in the traversal, but these are the result of incorrect specifications. One can also insert into the attribute specifications any code to be executed when the traversal is performed. From the attribute specifications a traversal code can be generated that walks the tree and evaluates the attributes in the necessary order.
To summarize, AGs provide a very high-level specification formalism for the traversal of a tree through the use of dependencies among the attributes. Unfortunately, while intellectually appealing, AGs have serious practical limitations. For a specific traversal sequence, it is highly non-trivial how the attributes should be set up and how they should depend on each other. Sometimes one has to introduce extra attributes just to force a particular kind of traversal. Referring to attributes that are calculated at remote nodes in the parse tree is rather problematic. Thus, while Attribute Grammars offer a very high-level formalism for the structured specification of graph traversals, their usability is limited.
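As a toy illustration of a synthesized attribute, consider evaluating an expression tree bottom-up: the attribute of each node depends only on its children, and that dependency fixes the traversal order. The tuple-based node shapes are illustrative assumptions, not a real AG tool:

```python
# Synthesized attribute "value" on a tiny expression tree. The attribute
# dependencies (a node needs its children's values) dictate a bottom-up
# traversal, which is exactly what the recursion performs.

def value(node):
    kind = node[0]
    if kind == "num":                       # terminal: attribute given directly
        return node[1]
    if kind == "add":                       # synthesized from the children
        return value(node[1]) + value(node[2])
    if kind == "mul":
        return value(node[1]) * value(node[2])
    raise ValueError(kind)
```

An inherited attribute would instead flow top-down, e.g. an environment passed from parent to children.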
Adaptive Programming
When object-oriented languages started to gain acceptance, it was observed that OO programs are structured very differently from "traditional" procedural programs [LIE96]. Specifically, it has been noted that OO programs follow a pattern of *collaborations* where multiple objects of different classes cooperate to achieve a certain goal. Unlike in traditional approaches, complex behaviors are implemented by a set of simpler behaviors distributed over a set of classes of objects. This appears to be an essential property of all object-oriented approaches. This structure is both a benefit and a liability. It is beneficial because very complex behaviors can be built from trivial ones, but it is a liability because OO languages typically lack the syntactical constructs to express them. Adaptive Programming (AP) [LIE96] offers a solution, which also provides relevant techniques for the model interpretation problem. In AP, collaborations are expressed using two specifications: *class graphs* and *traversal strategy graphs*. Class Graphs describe what classes are available in the system and how they are related to each other through inheritance and associations. Traversal Strategy Graphs are subgraphs of Class Graphs that also specify the precise strategy for traversing that subgraph. The strategy is a very compact and high-level specification of the traversal: it simply refers to the classes involved, omitting all implementation details. For example, if class A is associated with class B, which is associated with class C, a strategy can simply specify "from A visit C", without mentioning intermediate classes. In addition to this specification one can also include code in the strategy which gets executed when the traversal happens. From the class graph and the strategy graph specifications, a tool can synthesize all the traversal code, which is distributed across classes as methods.
From each strategy graph a set of methods is created (assigned to the classes), which implement the strategy. The automatic generation of this code removes the mundane tasks from the programmer: iterating over lists, invoking methods on objects in the list, and hand-coding the traversal of a quite complex graph with the help of small, distributed methods. The code also incorporates the user-specified code fragments that are executed during traversal. The technology has been developed by Lieberherr and others, and has been termed as “Adaptive Programming” (AP).
With respect to the specification of model interpreters, we can recognize the relevance of AP as follows. AP solves the task of traversal specification in a compact and efficient way that has many applications in object-oriented programs. The actual traversal code is synthesized, and the user is not burdened with low-level details.
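A much-reduced sketch of the idea: from a class graph, derive which associations to follow so that a strategy like "from A visit C" never names the intermediate classes. Objects are plain dicts here; this is an illustrative reduction, not the actual AP tools:

```python
# Sketch of a "from A visit C" traversal strategy. reachable() answers whether
# an association path in the class graph can lead to the target class; visit()
# then follows only those associations and applies the user action at targets.
# The dict-based object shape ({"class": ..., "children": [(field, obj), ...]})
# is an assumption of this sketch.

def reachable(class_graph, src, dst, seen=None):
    seen = seen or set()
    if src == dst:
        return True
    seen.add(src)
    return any(reachable(class_graph, n, dst, seen)
               for n in class_graph.get(src, []) if n not in seen)

def visit(obj, target_class, class_graph, action):
    """Traverse only along associations that can reach target_class."""
    if obj["class"] == target_class:
        action(obj)
        return
    for _field, child in obj.get("children", []):
        if reachable(class_graph, child["class"], target_class):
            visit(child, target_class, class_graph, action)
```

The programmer writes only the strategy (source, target, action); the pruned traversal is derived, which is the essence of what AP's generated methods do.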
Intentional Programming
Intentional Programming, developed at Microsoft Research [SIM96], offers another paradigm for program development. The central thesis here is that software is written as a collection of intentions, which are then refined into actual implementations. The intentions capture what a programmer wants to “say” in a particular context, in a language independent manner. Once the intentions are expressed, the programmer (or the development environment) should refine those intentions into actual implementation. Technically, intentions are intermediate nodes in a parse tree. The refinement is expressed by specifying how the intentional data structure should be transformed into a structure that can be directly used in a code generator. This refinement is currently expressed in the form of actual code that performs the transformation. IP shows similarities to model interpretation in many respects. If the “models” stand for “intentions”, the transformation of those into implementations is the task of the model interpreter. Unfortunately, the current implementation of IP offers a very low-level interface for implementing the transformation engines, i.e. their model interpreters.
The Visitor Design Pattern
The Visitor design pattern [GOF95] codifies a prototypical solution to a frequently occurring design problem: a graph consists of nodes of heterogeneous types, and we need to traverse this graph, possibly multiple times, and perform operations on the nodes. For example, the graph can be the syntax tree generated in a compiler, and the actions can be “optimize” or “generate code”. The solution is to encapsulate the operations in a set of Visitor classes that can visit nodes of specific types. Once a visitor is created, it can be “handed to” a graph node, which will invoke the proper visitation operation on the visitor. The graph node should also incorporate the actual traversal operation: it should “know” how its neighboring nodes should be visited. As a design pattern, Visitor can be implemented in many ways (none of which is directly supported by a tool, unlike AP). The most obvious implementations have serious shortcomings in terms of scalability, but the pattern is a conceptually powerful technique.
The Visitor design pattern shows what is important: separating structure (the graph) from the traversal of the structure (the Visitor object), and the encapsulation of the operations in the latter. However, it is merely a design pattern, and thus in itself does not offer a way for the structured, high-level specification of model interpreters. From the above four background technologies one can draw the following requirements for model interpreter specification:
- There is a need for the formal, high-level specification of the traversal. This specification should be the input to a code generator that synthesizes the actual traversal code.
- The traversal specification must be explicit (for maintainability), and concise. All intermediate code (for iterations, etc.) should be automatically synthesized.
- There is a need for writing multiple interpreters for the same model. Just like one can have multiple visitors for a syntax tree, one should be able to define multiple interpretations for the same models.
- The traversal and the operations to be taken during that traversal should be encapsulated in classes. This encapsulation offers a context, which can be built dynamically as the traversal proceeds, and provides a way for capturing the “state” of the traversal.
3. The approach
Based on the observations made above, the following approach is proposed. When specifying a model interpreter, the following components should be defined:
- Model structure. The model structure defines what classes of model objects we have, and how they are related to each other. One can use, for example, UML class diagrams to express model structure. In this paper a simple textual language is used.
- Traversals. Traversals capture how the models should be traversed. The specification should address the following question: If we are at a node of type X, which node do we go to next? These traversal specifications can be made very concise (as shown in AP), and the actual traversal code can be generated from them. Traversals are objects that encapsulate the traversal code fragments, and can also encapsulate state information.
- Visitors. Visitors capture the actions to be taken when visiting a node of a particular type. Visitors are also objects that encapsulate the operations to be performed, and they can also provide a context for the traversal.
These three components can be encapsulated as classes, as shown in Figure 1 below.
The Traverser and Visitor objects are also directly linked to each other, and operate in a coroutine-like manner. Suppose the Traverser starts at a specific type of model node. Based on its specification, it determines how to follow the pointers emanating from that type of node and calls the visitor on the objects that the pointers point to. The visitor might take an action, and/or activate the traverser to proceed from the accessed node. So the control flow oscillates between the traverser and the visitor: the traverser determines where to go next, and the visitor “visits” (i.e. takes actions) and can call back the traverser to proceed further.
The *model structure specification* should capture what kinds of model classes are available, how they are composed of simpler entities or other models, and how they are associated with each other (beyond composition). Models, entities, and relations might have attributes: key-value pairs that capture non-structured information. Thus, model structure can easily be specified using a standard specification technique, for instance UML class diagrams [FOW97]. For the sake of simplicity, we will use a simple, entity-relationship-based textual language for specification. In the language, one can define entity types (which are named collections of attributes), relation types (which relate entities and models to each other), and model types (which contain entities, relations, and possibly models). All types can have attributes, and entity and model types can be organized into inheritance hierarchies. Figure 2 below shows a trivial specification. The specification introduces an entity, a model, and a relation, respectively. The relation is specified in terms of the objects it relates (Signal to Signal) and the names of the roles they play in the relation (src and dst).
*Traversal specifications* should answer the following question: “if we are at a node of type X, where do we go next?” The “next” should be an object that is reachable from objects of type X; thus it should be associated with X either directly or indirectly, possibly through inheritance. From “Compound”, for instance, one can access each “Dataflow” object via the “flows” association.
Thus, one traversal specification might be “from Compound to flows”. This specification results in a code fragment in the interpreter, which is invoked when one wants to traverse a graph starting from a Compound node. When specifying a traversal, the visiting action to take is specified only indirectly: it is not explicit what to do, but the traverser “expects” that a corresponding visiting action is available. In a traversal specification one may also want to visit multiple nodes; for example, “from Compound to {locals, blocks, flows}” might be a suitable specification. As mentioned above, model and entity types can be organized in an inheritance hierarchy. In the example, Compound is a Block, inheriting attributes, parts, relations, and associations from the base type. When the traversal of a derived type is specified, it is useful to specify that the derived type objects should first be traversed as base type objects, for instance: “from Compound do Block to {locals, blocks, flows}”. This capability simplifies the specifications, because base-class-related traversals can be specified only once and then invoked from derived-class traversals.
```
entity Signal {
attr string name;
}
model Compound : Block {
part LocalSignal locals;
part Block blocks;
rel Dataflow flows;
}
relation Dataflow {
Signal src * <-> Signal dst *;
-- constraints for connections
}
```
Figure 2: Example Model Structure specification
*Visitor specifications* should capture what should be done when visiting a particular kind of object. There are basically two options: the visitor can either take a “user action” (i.e. execute a piece of user-supplied code), or proceed with the traversal (i.e. call the traverser with the object being visited). These can be intermixed and/or omitted completely. The visitor specification should thus enumerate actions using the form:
```
at Dataflow <<USERCODE-1>> traverse <<USERCODE-2>>
```
Figure 1: Components of a Model Interpreter
Each clause after the type name is optional. The << and >> are special brackets which surround user-defined code. User-defined code can contain any C++ code to be executed at the start of the visit (USERCODE-1), or at the end of the visit (USERCODE-2).
A model interpreter generator can translate each traversal specification into a method of a Traverser class. The methods can take one parameter: a reference to the type of model object where the traversal starts. The visitor specifications can also be translated into methods of a Visitor class, which gets one parameter: a pointer to the object visited. One can even use the same name for the methods (e.g. visit() for Visitor methods, and traverse() for Traverser methods), because the C++ or Java overloading mechanism can correctly resolve the call based on the type of the parameter.
The overloading gives rise to an interesting capability: one can declare extra formal parameters in traversal and visitor specifications at the “origin point” of the action, and supply actual parameters when “calling” a visitor or a traverser. An example is shown in Figure 3.
```
Traverser:
from Compound[int x] to blocks[2*x];
from Block[int j] to locals[j+1];
Visitor:
at Block[int x] traverse[x+3]
```
**Figure 3: Specification with parameters**
The parameters are simply added as extras for the generated method’s parameter list, and, again, the overloading mechanism will be used to select the correct alternative.
User-defined actions can be added to the visitor specifications, as indicated above, but occasionally it is also useful to add them in traversal specifications. The syntax for traversals allows this in the following way:
```
from Compound do Block << USERCODE-1 >>
to << USERCODE-2 >> locals
<< USERCODE-3 >> ;
```
USERCODE-1 and USERCODE-2 are executed in sequence, and USERCODE-3 is executed after visiting all the LocalSignal nodes.
A translator program that generates C++ code processes the model structure, traversal and visitor specifications. The model structure can be translated into C++ class definitions, with attributes translated into class members, and relations into class objects that contain pointers to the related objects. This, of course, is just one possible translation: for example, an OODB schema can also be easily generated. The traverser specifications are translated into a Traverser class definition, with associated methods: one for each traversal specification. The methods contain code that iterates over the associated objects, and calls the corresponding visitor method. The Visitor methods contain the user-supplied code, and the (optional) call back to the Traverser for continuing with the traversal. Interestingly, the translation algorithm, which generates this code, can also be written quite easily using the Visitor/Traverser style.
### 4. Example
In this section a simple example is presented, which shows how to write a model interpreter for a block diagram language. The full specification can be found in the Appendix.
The models are for representing hierarchically organized processing networks (hardware or software). The modeling paradigm includes entity types called Ports, which are sub-classed into InputPorts and OutputPorts. The model type Block represents a “generic” processing module, which has inputs and outputs. This model type is sub-classed into Primitives and Compounds. Primitives define elementary processing operations, identified by a string attribute called type. Compounds are also blocks that contain other blocks (i.e. Primitives and Compounds), and relations of type Connection. Connections relate Ports to each other; thus, they can represent the flow of data among processing blocks. There are no Block instances, only Primitive or Compound instances. Furthermore, a Compound instance cannot contain itself.
The task of the model interpreter is to traverse the network of model objects, starting from a root Primitive or Compound. During traversal it has to print out each primitive instance (called a “node”) encountered with a unique id and the type string, and wiring instructions that connect wires to nodes on the numbered ports of that node. Note that Compounds can contain other Compounds and Primitives, but in the output only the primitives are needed, with the “flat” wiring connecting them.
To show how the specified interpreter works, suppose we start at a Compound that has some input ports and some output ports. It contains one Primitive whose input ports are “wired” to the input ports of the parent Compound, and its output ports to the output ports of its parent. The traversal starts at the Compound, and the first specification here forces a traversal as if the object were a Block (using the do Block clause). Note that the Compound traversal specification expects one extra parameter, of type PortMap, which maps object ids (IDs) into connection ids (Wires). This parameter is passed along to the Block traversal specification. That specification visits the input and output ports of the object. The corresponding Visitor action checks whether the selected port has an entry in the map; if not, it creates a new Wire and assigns it to the object. Thus, the input and output ports will all have a Wire assigned to them. Next, the traversal continues by visiting all the Connections, and then
5. Conclusions and Future Work
In this paper, we have shown a new approach to model interpreter specification. Traversals and Visits should be specified (in addition to the model structure). High-level notations can be used intermixed with user-supplied code. A tool has been developed that understands these specifications, and generates all the low-level traversal and visitation code. The approach described in this paper is a highly practical one: its purpose is to serve the software engineer. This does not mean that the specifications cannot be thoroughly analyzed and important properties of traversals and visits determined. In fact, the tool mentioned above already performs these checks, and code generated by it is always correct. (Naturally, it cannot check the correctness of user-supplied, embedded code.)
The approach can be extended in many different directions. One is to incorporate constraints in the specification that can be used to determine semantic correctness of models. Much of the work in a real model interpreter deals with validating model correctness, and the formal constraint specifications could help in this. Another issue is the sequencing and precise control of traversals. Currently the tool supports phases, which are distinct passes through the model structure. It is the main program’s responsibility to switch between the phases. Instead of using phases, one might use conditional traversals/visits, which are executed only when some conditions are true.
Specifying model interpreters in a structured way is a key component of interpreter writing. While hand-coded actions may be needed for a long time, imposing a framework on the construction of interpreters offers long-term gains, especially in maintainability and code reuse.
Acknowledgment
The DARPA/ITO EDCS program (F30602-96-2-0227), The Boeing Company, Saturn Corporation, and the Arnold Engineering Development Center of USAF have supported the activities described in this paper.
References
Appendix
Model structure specification
paradigm Xdl;
entity Port { }
entity InputPort : Port { }
entity OutputPort : Port { }
class Block {
part InputPort inputs;
part OutputPort outputs;
}
class Primitive : Block {
attr string type;
}
class Compound : Block {
part Block blocks;
}
relation Connection {
Port src * <-> Port dst *;
}
Model Interpreter Specification
interpreter XdlInterpreter;
<< typedef long Wire; typedef long Node;
typedef map<ID, Wire> PortMap;
int newId() { static int count = 0;
return count++;
}
int mkWire() { return newId(); }
int mkNode() { return newId(); } >>
visitor Visitor {
at Port [PortMap& sMap]
<< if(sMap.find(self.Id())==sMap.end())
sMap[self.Id()]=mkWire(); >>;
at Connection [const Block_M* parent, PortMap & sMap]
<< Port_E *src = self.src(), *dst = self.dst();
Block_M* srcBlock = (Block_M*)src->Parent();
Block_M* dstBlock = (Block_M*)dst->Parent();
if((srcBlock != parent) && (dstBlock != parent)) {
int tmp = mkWire();
sMap[src->Id()] = tmp;
sMap[dst->Id()] = tmp;
} else if(srcBlock == parent) {
sMap[dst->Id()] = sMap[src->Id()];
} else if(dstBlock == parent) {
sMap[src->Id()] = sMap[dst->Id()];
} >>;
at Primitive [PortMap& sMap] traverse[sMap];
at Compound [PortMap& sMap] traverse[sMap];
at Port [int& count, Wire wire, Node node]
<< printf("connect wire:%d to node:%d,%d\n", wire, node, count);
count++; >>;
at Port [Node node, Wire wire, int& count]
<< printf("connect node:%d,%d to wire:%d\n", node, count, wire);
count++; >>;
}
traversal Traverser using Visitor {
from Block[PortMap& sMap]
to { inputs[sMap], outputs[sMap] };
from Primitive[PortMap& sMap] do Block[sMap]
<< Node node = mkNode(); int count;
printf("node %d %s\n ",node,self.type()); >>
to { << count = 0; >>
inputs[count,sMap[arg.Id()],node],
outputs[node,sMap[arg.Id()],count] ;
} << count =
Three-Layered Software Architecture and Its Variability for Teleoperated System
Yasuharu Kunii, Yoshiki Matsui and Masaru Furukawa
Human Machine System Laboratory, Chuo University, Bunkyo-ku, Tokyo, Japan
Keywords: System Architecture, Teleoperation.
Abstract: In a teleoperated system, robots are often required to easily change among various modes of operation; further, an efficient development of large-scale teleoperated systems is desired. Thus, we propose a three-layer software architecture implemented using a database node module (DNM). All modules are connected to a DNM, with connections among modules defined as virtual connections. It is possible to change connections during operation via the virtual connection of the DNM, and the DNM can achieve high-speed communication and high-speed connection changes. We examined the evaluation index of our module design using this architecture because module interface and function design influence the architecture. Finally, we confirmed that a robot based on our architecture worked in a real environment.
1 INTRODUCTION
Remote mobile robots often work in extreme environments such as planetary surfaces, disaster sites, and other dangerous zones. In general, they are required to achieve a stable performance during advanced missions in these environments. Several system architectures for robots have been proposed for achieving such capabilities (Ahn et al., 2010; Medvidovic et al., 2011; Volpe et al., 2001).
Teleoperators comprise several functions such as action planning, recognition, and motion control and various subsystems such as moving mechanisms, a communication system, and various sensors. Because these systems are multifunctional, they often become bulky and complex. Conversely, these systems are required to be scalable and efficiently adapt to any situation. Thus, their control and operating software must enable users to freely combine installed elements via a network and modify system components.
To flexibly respond to environmental changes or unpredictable problems, it is necessary to change the system configuration or add new functions from a remote site over a network. Most conventional software architectures for teleoperation cannot operate a robot if a failure occurs at a remote site (Estlin et al., 2008; Baranyi, 2011; Hoshino and Kunii, 2012; Galambos, 2012). This arises because of difficulties in the dynamic modification of robotic functions. From this viewpoint, an architecture with advanced scalability and variability is required for a mission-critical operation.
Teleoperated systems must also address information communication, i.e., the transmission of sensory information from a remote site to a human operator. For the safe operation of a robot, information of system conditions should be known; however, the complexity of the system makes it difficult to understand its various states. To overcome these limitations, we propose a system architecture that emphasizes variability in the structure of functions and data transparency (Ando et al., 2011).
In short, we need a fault-tolerant robot system. In widely used robot middleware such as ROS and RT-Middleware, which target the easy implementation of robot systems, support for teleoperated systems is especially important. Therefore, we propose our architecture for a fault-tolerant system in which ease of implementation is crucial. Accordingly, our architecture was constructed using RT-Middleware.
In this study, we discuss the importance of a module design and the granularity of the module in our three-layer architecture. Moreover, we show the evaluation index of the module design and confirm the validity of our architecture via experimentation.
2 PROPOSED SOFTWARE ARCHITECTURE FOR TELEOPERATED SYSTEMS
In general, teleoperators operate at locations that are not amenable to human activity. Moreover, the environment of these locations may not be well known. Therefore, system failures may occur because of the nonconformity of parameters or algorithms. Further, harsh environmental conditions often cause hardware problems. In such cases, the system should be alterable by software alone without physical restriction, i.e., the system should be flexible and adapt the structure of its functions to suit a situation. Moreover, as mentioned above, the safe operation of the robot requires knowledge of the state of the system during its operation.
Therefore, we designed a system that emphasizes flexibility and variability in its structure of functions and transparency and accessibility of data. In our proposed architecture, which is illustrated in Figure 1, each function is modularized and connected via a network. Advanced variability is achieved by defining real and virtual connections within different layers. Each layer of the architecture is detailed in the subsections as follows.
2.1 Physical Layer
In the bottom layer, all hardware is connected via a network, as shown in Figure 2. Any function can be directly accessed and connections can be changed using software, which imposes no physical restrictions. Thus, our system is accessible and has an advanced variable structure. In addition, it increases fault tolerance by minimizing lost units in the event of system failures.
2.2 Connection Layer
A robot operating in remote locations must be able to switch among multiple tasks, each comprising a module’s behavior logic, in response to a given situation; however, connection switching is very expensive to realize in practice. Thus, the middle layer of our proposed architecture manages the actual modules of the system and virtually realizes the task dependencies defined at the top layer. This is performed by the database node module (DNM), which relays information among the functions of modules. In particular, all modules are connected to the DNM, as shown in Figure 3, and data are exchanged at high speed via shared memory. The DNM transmits to each module the destination addresses that encode the task dependencies defined by a user in the logical layer. In this manner, the DNM realizes a network list. Hence, module connections can be switched by changing reference pointers, while the DNM manages the timing of the switches.
Moreover, because the DNM contains the data of all modules, it realizes high system transparency. To achieve load balancing and reduced traffic, the DNM can be arranged in a hierarchical structure, illustrated in Figure 4. Further, because the hierarchical structure limits the range of failures, this structure enables an easy identification of the causes of failures.
2.3 Logical Layer
The top layer enables users to intuitively compose tasks, thereby improving the efficiency of task development. Users can collect the necessary modules and connect them according to the intended task flow, as shown in Figure 5. Our method allows free swapping, addition, replacement, and deletion of modules. Thus, the system can effectively reconstruct its functions and respond quickly to changing situations or any problems encountered.
3 DATABASE NODE MODULE
The efficiency of development and operation processes is enhanced in our proposed architecture by function modularization. By modularizing every function, the development process can be shared and both development and maintenance can be quick. Moreover, the development process becomes more efficient because a once-developed function is essentially a software resource that can be diverted to other systems.
To execute task flows designed by an operator on the logical layer, task connections of the flows have to be converted to virtual connections of software modules. These virtual connections are controlled and managed by the DNM.
3.1 Task Flow by Virtual Connection
When executing logic constructed on the logical layer, the DNM reads the netlist of the virtual connections, delivers the data to each functional module, and controls its behavior. To construct more flexible logic in the logical layer, we introduce a port mechanism, the same as that adopted in RT-Middleware. To establish a virtual port connection, the DNM generates shared-memory space corresponding to the number of ports in each module and executes data communication.
Moreover, when a task module straddles two or more DNMs, intermodule data communication requires the synchronization of the memory space among the databases. This synchronization should not require any user-designed input at the connection layer. Therefore, the DNM reads the netlist and automatically constructs data routing among databases based on the real connections, as shown in Figure 6. The overall flow is as follows:
1) The position of the addressee module is located.
2) A route from the sending module to the addressee module is established.
3) The database of the addressee generates memory space for the sending modules.
4) The memory space is synchronized among the databases, and data communication is performed.
With virtual connections via the DNM, a task can be switched dynamically and at high speed. In a conventional system in which modules are directly connected, task switching requires each module connection to be disconnected and reconnected individually. Because task switching here merely involves replacing the virtual connection information read into the DNM, modules can be switched at a reduced cost. When a hardware component breaks down, the system can shift to the backup node at high speed by rewriting the virtual connection. This is an important stabilizing feature that is advantageous when manipulating a robot in remote places where direct maintenance is impossible.
3.2 Data Communication among Modules using Shared Memory
The function Shared-Memory (SHM) Server supplied to a database enables data communication among modules. Data communication is executed when the SHM Server creates a shared-memory space according to a demand from the SHM Client (a functional module), which then accesses data in the space, as illustrated in Figure 7. When a functional module wants to access data, it obtains a pointer to the storage address of the data. In this manner, data are exchanged at higher speeds than possible using typical middleware. Moreover, although shared memory is generally implemented using a single CPU, two or more memory spaces are synchronized using the Common Object Request Broker Architecture network, enabling the DNM to share data among two or more CPUs. Therefore, modular data access can be distributed, and high system performance can be maintained.
Further, the SHM Server offers a semaphore that secures data consistency. Using the semaphore, data can be safely exchanged within shared memory by an exclusive control of data access to the shared-memory space.
Figure 7: Shared memory system for task flow.
3.3 Realization of Remote Control and Task Management using Our Three-Layer Architecture
The system controls a multitasking robot from a remote location; this requires the implementation of our Three-Layered Architecture on both operator and robot sides and their connection via a communication module, as shown in Figure 8. Thus, the two separate systems are incorporated into one large system. By constructing a separate tree for each side, the mutual system ensures a more stable communication path and robustness for severe environments. Each communication module is equipped with a data transceiver function among systems, a modular controlling function, and a task controlling function using TCP.
A user selects a required module, creates various tasks, and assigns duties via flexible exchange from an operator side. Examples of user-defined tasks include combined mapping, course planning, driving the wheels of remote investigation vehicles, and operating a camera and a manipulator during remote sampling.
4 MODULE DESIGN
Our architecture is regarded as having high variability. To improve the variability of the system, the connections among modules should be easy to change.
Figure 8: DNM structure in a tele-navigation system.
4.1 Structural Variability
To increase structural variability, module connections must be easy to change. Our architecture cannot connect modules whose interfaces differ. Developers define interfaces when creating a module, and a module may expose multiple interfaces; however, a module with many different interfaces is difficult to connect. Reducing the number of interfaces per module therefore improves structural variability. The interface expresses the relation among modules.
4.2 Module Function
The functions contained in a module should be closely related to one another. Because a module consists of its functions, it should not contain unrelated ones. To support structural variability, a module should be divided into smaller components: many small modules offer more structural choices, and variability improves accordingly.
4.3 Evaluation of Module Design
It is possible to evaluate the architecture itself for high variability (as described later); however, the modules used in actual implementations differ from those created for evaluation. Modules created for evaluation are designed with variability in mind, whereas modules in actual operation may not be. Because module design in actual operation depends on the developer, the design itself must be evaluated. We evaluate modules quantitatively to eliminate differences in variability between creators, and at the same time to judge whether a module is suitable for our architecture.
Module design for large-scale systems has been discussed in the literature (E. Yourdon et al., 1979), and it is clear that the design of each module strongly influences the system. A conventional architecture imposes strict regulations on module design, and a module is evaluated against those regulations. In our three-layer architecture, however, there are no such regulations; only module design guidelines exist. We therefore need a method to estimate whether a given module deviates from these guidelines, so the evaluation of module design differs from that of conventional architectures.
4.4 Evaluation Index of Modules
Module design and module interfaces are important for changing module connections while a robot is operating. Estimating whether a module is well designed is the heart of evaluating module design. We therefore base the evaluation on the number of lines of the module's source code. The subsections below summarize the indexes used in our evaluation.
4.4.1 Degree of Relation
This index shows the strength of the relation between a module and other modules. The number of lines used for communication with other modules is compared with the total number of lines of the module as follows:
\[
\frac{\sum_{s \in S} St(s)}{M(x)}
\]
(1)
Here, S is the set of program fragments used for communication, St(s) is the number of lines of fragment s used for communication in module x, and M(x) is the total number of lines of module x. A module should be designed so that its relation to other modules is low; for a well-designed module, the degree of relation is low.
4.4.2 Degree of Concentration
This index shows the strength of the relation among the functions within a module. A module is evaluated by the line-count ratios of its related functions (Okamoto et al., 2012) as follows:
\[
\sum_{f_1 \in F} \sum_{f_2 \in F} Re(f_1, f_2) \cdot SR(f_1) \cdot SR(f_2)
\]
(2)
Here, F is the set of all functions in the module, Re(f1, f2) expresses whether functions f1 and f2 are related, and SR(f) is the ratio of the lines of function f to the lines of the module. A module should be composed of closely related functions, so the degree of concentration is high for well-designed modules.
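To make Eqs. (1) and (2) concrete, the sketch below computes both indexes from line counts. Treating Re(f1, f2) as a boolean relation (including a function's relation to itself) and SR(f) as a function's share of the module's lines are our assumptions; the paper does not fix these details.

```python
# Sketch of the two module-design indexes, given line counts.
def degree_of_relation(comm_lines, total_lines):
    """Eq. (1): fraction of a module's lines used for inter-module communication."""
    return sum(comm_lines) / total_lines

def degree_of_concentration(func_lines, related):
    """Eq. (2): sum over function pairs of Re(f1,f2) * SR(f1) * SR(f2)."""
    total = sum(func_lines.values())
    sr = {f: n / total for f, n in func_lines.items()}   # line-count ratios
    return sum(sr[f1] * sr[f2]
               for f1 in func_lines for f2 in func_lines
               if related(f1, f2))

# Hypothetical module: 300 lines in total, 90 of them communication code.
rel = degree_of_relation([40, 50], 300)                  # 0.30, i.e. 30%

# Three functions of 100 lines each; f1 and f2 are related, f3 stands alone.
funcs = {"f1": 100, "f2": 100, "f3": 100}
related = lambda a, b: {a, b} <= {"f1", "f2"} or a == b
conc = degree_of_concentration(funcs, related)           # 5/9, about 0.56
print(rel, conc)
```

Under these assumptions, splitting the unrelated f3 into its own module would raise the remaining module's concentration, which mirrors the redesign of module A described later.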
4.5 Module Design Experimentation
We evaluated the modules of a traveling system consisting of three modules. We calculated the degree of relation and the degree of concentration of each module, made design changes on the basis of these calculations, re-evaluated each module after the changes, and discuss the variability of the entire system.
4.5.1 Evaluation using the Suggestion Index
We calculated the proposed indexes for the three modules (results shown in Table 1). The degree of concentration of module A was the lowest (less than 50%), which suggests a poor design, so this module was redesigned. The degree of relation of module B was the highest (more than 70%), so this module was also redesigned.
4.5.2 Redesign of Modules
Module A corrects distortion in the run course. Several different calculation methods were packed into this single module, which is why its degree of concentration was low: the functions implementing the different methods are only weakly related. The module was therefore split by calculation method, as illustrated in Figure 9. Module B generates run orders from the data handed to it; it receives several streams of the same data type at the same time and was designed for exclusive use. Its degree of relation was high because it receives a large amount of data. A data conversion module was therefore created so that the identical data streams connected to module B are merged into one, as shown in Figure 10.
4.5.3 Evaluation after Design Changes
Table 2 shows the index values calculated after the redesign. Both the degree of relation and the degree of concentration improved. Because the indexes are based on the number of lines of the program, a programmer remains conscious of them when changing a module.
These changes benefit the architecture: while a robot operates, connections between the modules can be easily changed. A further benefit is that a module does not depend on the data interface.
5 EXPERIMENTATION
5.1 Simulation Results
5.1.1 Data Communication Time
To evaluate the performance of our system, we compared the data communication time of our virtual inter-module connections with that of conventional RT-Middleware. The results of this comparison are shown in Figure 11. As the figure shows, the communication time of the virtual connections in our architecture was lower than that of conventional RT-Middleware. We therefore conclude that data communication in this system is efficient.
5.1.2 Variability
To compare variability, we measured the task-switching speed of actual connections using RT-Middleware and that of the virtual connections in our proposed architecture. Results of the comparison are shown in Figure 12. The switching speed of our architecture was faster than that obtained from RT-Middleware. Therefore, we confirm that our architecture offers efficient operation and task execution.
5.1.3 Load Distribution
We investigated how the load applied to a system would change when all modules are connected to a single DNM and when a module is distributed through two DNMs, each assigned to a separate PC. The load average as a function of time for the two cases is shown in Figure 13. As shown in the figure, we observe that adopting the multi-CPU configuration reduces system load relative to connecting all modules to a single DNM.
5.2 Implementation Experiment
The robot system adopted in the implementation experiment was Beetle-One, shown in Figure 14. Beetle-One is a test prototype for the planetary exploration rover Micro6. An electric wheelchair, designed in the same manner as the rover, was used as the test system. Joystick and computer control were made compatible with the two-wheel differential-steering system adopted in both systems.
Table 1: Evaluation using the suggestion index.
<table>
<thead>
<tr>
<th></th>
<th>The degree of relation [%]</th>
<th>The degree of concentration [%]</th>
</tr>
</thead>
<tbody>
<tr>
<td>Module A</td>
<td>30</td>
<td>47</td>
</tr>
<tr>
<td>Module B</td>
<td>73</td>
<td>85</td>
</tr>
<tr>
<td>Module C</td>
<td>27</td>
<td>92</td>
</tr>
</tbody>
</table>
Figures 15–18 show the system implemented using our proposed architecture. Three distinct tasks were assigned to the semi-autonomous travel system Beetle-One; each task was governed by the corresponding operator-side task. Figure 15(a) shows the navigation-system task, in which a user specifies a target location and the system automatically generates Beetle-One's run course and directs it safely to the target.
Another task is assigned to the GUI modules mediated by a user on the operator side. Figure 15(b) shows the visual-odometry landmark-tracker task, which acquires geographical feature data for the navigation system and visual odometry (i.e., the run orbit of Beetle-One) using the stereo camera on board Beetle-One. It is run by the module group that acquires the disparity image from the stereo camera, the GUI display, and the computation module group that processes the details and generates the visual odometry.
Figure 11: Comparing communication times in a system with 10 modules for our proposed architecture and RT-Middleware.
The network assigns two PCs and the microcomputers of Beetle-One to the same wired LAN, and each module is assigned to a separate PC. Programmed with the above tasks, the robot was directed to run around the university grounds; the experimental situation is shown in Figure 17. This experiment tests the performance of orbit compensation and the landmark tracking system, as well as whether the system operates normally across the processing chain: picture acquisition from the camera, visual odometry generation, course planning, DEM data access, and output to the GUI. This experiment shows that the DNM of our proposed architecture ensures normal data communication and task flow and demonstrates the capacity to operate a robot.
The final run locus and terrain evaluation map are shown in Figure 18. The system operated successfully for a long time, with proven stability and disaster tolerance.
The DNM was implemented on our test-bed rover and evaluated by operating experiments. Functions and stability of the architecture with the DNM were confirmed by successful long-distance traversal of the rover. As mentioned above, we showed that our proposed architecture can improve the efficiency in the development and operation stages of a teleoperated system.
6 CONCLUSIONS
In this paper, we proposed a system architecture for teleoperators that offers advanced flexibility and variability, efficiency, scalability, and transparency. We realized advanced variability by defining real and virtual connections in different layers. Software modules are managed by the DNM. Further, system transparency is improved because the DNM contains the data of all modules. We validated our architecture characteristics via simulation. Thus, our proposed architecture provides significant contributions to the development and operation of teleoperators. In future work, we plan to further improve the efficiency of our proposed architecture by incorporating a task scheduler into the logical layer.
ACKNOWLEDGEMENTS
This research is supported by a joint research project in the Institute of Science and Engineering of Chuo University, Japan.
REFERENCES
Run-time Support for Controlling Communication-Induced Memory Fluctuation
Yan Shi, Gengbin Zheng and Laxmikant V. Kale
Department of Computer Science
University of Illinois at Urbana-Champaign
{yanshi, gzheng, kale}@cs.uiuc.edu
Abstract
Many parallel applications require a large volume of transient memory to hold data from communication, demonstrating a pattern of communication-induced memory usage fluctuation. Even though such an application's persistent working data may fit in physical memory, the transient peak memory usage can still lead to disk swapping or even out-of-memory errors. In this paper, we present a solution to these problems through runtime support for controlling communication-induced memory fluctuation. The idea is to impose runtime flow control on large data transfers, thereby bounding the peak transient memory consumed by communication. We explore the idea with both send-based and fetch-based low-level communication primitives and develop runtime support based on the Charm++ integrated runtime environment. We test this runtime system with a set of real applications and show considerable performance improvements.
1 Introduction
A large number of parallel applications exhibit fluctuating memory usage at runtime. Many of these patterns arise when parallel objects fetch data from others, compute with the data, and ultimately discard it. Frequently, the amount of transient memory is proportional to, or larger than, the program's static memory consumption. This wavy pattern is undesirable for several reasons. First, large memory fluctuation may drive the program into the disk-swapping zone, where performance becomes miserable due to the severe overhead of swapping. Second, a large memory footprint may hurt cache performance. Further, applications may fail to run as a result of insufficient swap space. An extreme architectural case is the IBM BlueGene/L machine, which has no virtual memory and only 512 MB of physical memory available.
A vital observation is that many of these transient memory variations are associated with data communication between parallel entities. In Section 3, we illustrate the idea with an example of a 7-point stencil with 3D decomposition. The same pattern appears in a broad range of both structured and unstructured mesh applications, and various commonly used parallel libraries, such as matrix multiplication, exhibit similar behavior. Clearly, these applications could benefit from a runtime system that controls transient memory and reduces memory fluctuation. Such a runtime system should require minimal user-code modification and incur negligible overhead in the normal case, while improving performance when memory fluctuation is high.
In search of relevant work, we find that the stated problems have rarely been addressed directly. Many memory related studies focus on optimizing memory hierarchy based on locality. Other works try to solve the memory problem with faster swapping mechanisms.
In this paper, we present an approach where we try to confront the communication-induced memory problem head-on. Given knowledge at the runtime level of the communication-induced memory pattern, we could limit transient memory usage from communication by controlling large data transfers. Various flow control strategies could be applied in the runtime system to facilitate the selection process. We study the applicability of integrating this idea with both send-based and fetch-based communication. A runtime support for this approach is implemented in Charm++ [11] and AMPI [10], an integrated parallel runtime system. Throughout the paper, we demonstrate by drawing examples from a set of scientific applications, where the communication-induced memory fluctuation pattern persists. We believe the generality of our approach is maintained and it could be applied to various parallel systems where a high degree of concurrency is present.
The rest of the paper is organized as follows: Section 2 discusses the background of our work and its related work. Section 3 presents our methodology implemented in a runtime system to handle communication-induced memory fluctuation. Section 4 describes the performance with case studies of several real-world applications. Section 5 concludes with some future plans.
2 Background and Related Work
2.1 Related Work
Explicit Memory Control
Most work in explicit memory control aims at improving application performance by exploiting the memory hierarchy through better memory management using an educated policy for caching data in faster memory [7, 4]. For large data applications, out-of-core [6, 19, 18] methods are designed to overcome the memory capacity limitation. These approaches typically block data sets and use DRAM as a cache for slow bulk media such as hard drive or tape drives. The performance gain largely stems from applying application specific knowledge and replacing the operating system in the role of manipulating data swapping. By keeping the real working data set in-core, thrashing hopefully will be avoided.
Another relevant line of work is resource-constrained sandboxing [17, 3], where irrevocable restrictions exist on resource usage, such as memory. It arises primarily in the context of real-time systems, where fair sharing and no-starvation guarantees are required. Relying largely on kernel support, resource monitoring, code instrumentation, and system-call interception, resource limits are enforced in a qualitative way.
Although some of the works above, such as the out-of-core method, explicitly control the application memory footprint, our work addresses the memory-constrained problem from a different perspective. We focus on controlling transient memory fluctuation caused by communication to reduce the memory footprint to fall within the bounds of system availability. In fact, our work can be used as a complement to out-of-core methods to better solve the memory problem. Our work leverages some of the techniques listed above, such as memory monitoring, code instrumentation and system call interception.
Communication Flow Control
In our work, we use the token-based communication flow control, which by itself is not a new idea. The Myrinet GM communication library [16] provides a simple communication flow control via regulating send and receive tokens, representing space allocated to the client in various internal GM queues. A client program may send or receive a message only when it possesses a send or receive token for a myrinet port. However, this mechanism does not provide an effective flow control for eager messages. In the MPICH-GM implementation, eager messages are received as unexpected messages. A fast unexpected sender can easily flood a slow receiver. MPICH-GM therefore implements a rudimentary but somewhat effective throttling mechanism to choke the sender if the unexpected queue is getting big.
The ChaMPIon/Pro [15] MPI runtime enforced flow control by imposing reasonable resource limits, such as message buffer size, on user processes. When the message buffers for unexpected messages ran out, the runtime simply aborted the program and reported a resource issue with the application. This usually indicates load imbalance, because a process normally receives a large number of unexpected messages only when it falls behind the rest. This implementation, however, was not appreciated by users, because aborting a semantically correct MPI program is not desirable.
Communication flow control is effective in controlling the message buffer space used for unexpected messages between a pair of communicating processors. It cannot, however, solve the memory fluctuation problem caused by communication. In a parallel application, a process tends to communicate with multiple processors, so per-link flow control is not sufficient to bound the total buffer usage of a process. Furthermore, such low-level flow control does not react to memory usage fluctuation caused by the application.
Safe MPI Program
MPI literature [13] calls a program safe if it can be executed to completion regardless of memory limitations. Non-blocking calls relax memory pressure compared to blocking calls. The \( k \)-safe notion [2] relaxes the safety requirement to being safe in an environment with \( k \) system buffers available per processor. Our approach of applying flow control to large data communications also raises the question of safety and deadlock freedom. We discuss these questions in Section 3.4 and argue that, under certain assumptions, the program is guaranteed to avoid deadlocks.
2.2 Parallel Run-time
Controlling communication-induced memory fluctuation often requires flow control of communication. Such a scheme may result in degraded performance due to delays in communication. To alleviate such a performance problem, it is essential for a runtime system to provide dynamic overlapping of computation and communication through a high degree of concurrency to hide the increased communication latency.
The Charm++ and AMPI runtime systems, which our work is based on, provide such techniques. The Charm++ runtime system employs an approach called processor virtualization [11, 12]. An application divides a problem into a large number of parallel entities (\( N \)), each a virtual processor, that will execute on \( P \) physical processors. \( N \) is independent of \( P \), and typically \( N \gg P \) so that there are multiple virtual processors on each physical processor for high concurrency. The user’s view of the program consists of these parallel entities and their interactions; the user need not be concerned with how the components map to processors. The underlying run-time system takes care of this (see Figure 1).
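The \( N \gg P \) idea can be illustrated with a minimal round-robin placement. This is a sketch of the concept only; Charm++'s actual mapping and load balancing are far more sophisticated, and the function name here is hypothetical.

```python
# Minimal round-robin mapping of N virtual processors onto P physical
# processors, illustrating the N >> P decomposition: each physical
# processor hosts many virtual processors, giving the runtime slack
# to overlap computation and communication.
def map_virtual(n_virtual, n_physical):
    placement = {p: [] for p in range(n_physical)}
    for v in range(n_virtual):
        placement[v % n_physical].append(v)
    return placement

placement = map_virtual(n_virtual=64, n_physical=4)
print({p: len(vs) for p, vs in placement.items()})  # 16 virtual processors each
```

With 16 independent virtual processors per physical processor, a message wait by one virtual processor lets another run, which is the latency-hiding property the flow control in Section 3 relies on.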

some static computation/communication overlapping.
We have also demonstrated that virtualization has minimal performance penalty [12], due to the low scheduling overheads of Chares and user-level threads. In fact, Charm++ and AMPI runtime systems promote better cache performance, which leads to improved performance. A virtual processor handles a smaller set of data than a physical processor, so a virtual processor will have better memory locality. This blocking effect is the same method many serial cache optimizations employ, and Charm++ and AMPI programs get this benefit automatically.
In typical Charm++ and AMPI applications with fine-grained computation and a high degree of concurrency, multiple objects or threads run on each processor. These objects or threads tend to act independently, regardless of the memory constraint on a node. As a result, a bursty communication pattern may occur that leads to a significant amount of transient memory use for sending and receiving messages in a short period of time. This may either push the application into the swap zone with dramatically degraded performance, or even cause it to run out of memory. In the next section, we present our effort to make the runtime system memory-aware in order to control such bursty memory fluctuation caused by communication.
3 Design and Implementation
In many scientific applications, cross-processor communication, including collective communication, can lead to significant memory problems; MPI_Alltoall is one example (Section 3.5). Furthermore, as the number of parallel entities participating in the communication increases, memory usage may rise nonlinearly. After communication finishes, memory usage returns to normal. This paper focuses on such transient memory usage problems caused by communication. A concrete example application is given next.
3.1 A Motivating Example
Consider a 7-point Jacobi relaxation program as an illustration. In the 3D Jacobi problem, the data of a regular rectangular grid are partitioned into equal-sized small cubes and distributed evenly over all processors. In every iteration, the data in each small cube are updated from the cube's own data and data from its neighbors. With a 7-point centered scheme, each small cube depends on one adjacent slab of width 1 from each of its 6 neighbors. These data are usually stored locally and are called ghost cells. In our implementation, for memory efficiency, we allocate ghost-cell data on the fly, construct them as they arrive, and free the stale ghost data after computation. In this example, we use two data sets: for Data1, the 3D grid is of size 2048 * 512 * 512; for Data2, 2048 * 512 * 384. Both data sets are partitioned into 8 * 8 * 8 small cubes, and each cube needs six ghost slabs, one from each neighbor. These ghost-cell communications comprise the primary communication cost of the program. Figure 2(a) exhibits the iterative wavy pattern of memory usage over a sample run with the two data sets. As expected, Data1 takes about 40% more time per iteration. Figure 2(b) shows the same run on a different cluster, where Data1 ran into disk swapping and slowed down substantially: while Data2 completes almost 7 iterations in 200 seconds, Data1 hardly finishes its first!
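The transient ghost-cell memory in this example can be estimated with simple arithmetic. The sketch below assumes the 8 * 8 * 8 decomposition means eight cubes per dimension and 8-byte double-precision elements; both interpretations are ours, not stated in the text.

```python
# Estimate per-cube ghost-cell memory for Data1 (2048 x 512 x 512 grid,
# decomposed 8 x 8 x 8), assuming double-precision elements.
grid = (2048, 512, 512)
parts = (8, 8, 8)
bytes_per_elem = 8  # double precision (assumed)

cube = tuple(g // p for g, p in zip(grid, parts))   # local cube: 256 x 64 x 64

# One ghost slab of width 1 per face, two faces per dimension.
ghost_elems = 2 * (cube[1] * cube[2]      # +/- x faces
                   + cube[0] * cube[2]    # +/- y faces
                   + cube[0] * cube[1])   # +/- z faces
ghost_bytes = ghost_elems * bytes_per_elem

cube_bytes = cube[0] * cube[1] * cube[2] * bytes_per_elem
print(cube, ghost_bytes, ghost_bytes / cube_bytes)
```

Under these assumptions each cube's ghost cells add roughly 7% of the cube's own size; with many virtual processors per node arriving in bursts, this transient overhead accumulates into the fluctuation visible in Figure 2.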
The amplitude of this memory fluctuation is multiplied if the ghost-cell region widens or the data decomposition grows finer. The former can result from a particular numerical algorithm [5], the latter from the processor virtualization idea discussed in Section 2.2. In the example of Figure 2(b), if the amplitude of the fluctuation could be reduced and kept within the bounds of physical memory, disk swapping could be effectively reduced or avoided and performance would improve greatly.
Figure 2. Seven-point 3D stencil Jacobi. Total data grid for Data1: 2048 * 512 * 512; for Data2: 2048 * 512 * 384. Both are decomposed into subgrids of 8 * 8 * 8. Four nodes of an x86 cluster are used.
3.2 Memory-aware Control
The problem of interest can be formulated as follows: we assume that an application starts with a memory footprint ($M_A$) within the bounds of physical memory ($M$), and that there is a limit on the memory per processor ($M_C$) that can be used for holding data from communication. $M_C$ is chosen to prevent the application from entering the swap zone, such that:
$$M_A + M_C \leq M \quad (1)$$
For simplicity, assume each message is of size $C$; the runtime can then schedule $\lfloor M_C/C \rfloor$ outstanding messages at any given time. The memory-aware runtime we designed schedules communication under this constraint.
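As a concrete sketch of this budget computation (the helper name and the byte-based units are ours, not part of the runtime), the token count follows directly from the inequality:

```c
#include <assert.h>

/* Hypothetical sketch: compute how many messages of size msg_size
 * can be outstanding without exceeding the transient-memory budget
 * M_C = M - M_A (all sizes in bytes). */
long token_budget(long phys_mem, long app_mem, long msg_size)
{
    long m_c = phys_mem - app_mem;      /* transient budget M_C */
    if (m_c <= 0 || msg_size <= 0)
        return 0;                       /* no room: no tokens */
    return m_c / msg_size;              /* floor(M_C / C) */
}
```

For example, 512MB of physical memory, a 384MB application footprint, and 16MB messages yield a budget of 8 tokens.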
It is clear that a runtime system can only achieve this goal when the message size is less than $M_C$, otherwise having even one message leads to swapping. To enable the runtime system to control memory usage effectively, one important design decision is to allow applications to be decomposed into finer grained computation. Fine grained computation leads to fine grained communication, which gives the runtime more opportunities to schedule communication in a memory efficient manner. We will see later in section 4.2 the advantage of fine grained computation encapsulated in the concept of virtualization.
Note that inequality (1) is not a hard limit: even when it cannot be satisfied, the program can still run as long as swapping is supported. But our study shows that, to obtain undegraded performance, it is desirable to provide a best-effort soft guarantee that the inequality is met. In the next subsection, we present a token-based control strategy that provides this best-effort service at runtime.
3.3 Token-based Scheduling
Similar to Myrinet flow control, we use tokens to represent the memory resource allocated to an application. Data communications are posted only tentatively by the application, and the requests are queued by the runtime. The runtime schedules a transfer only if the application possesses a token. Various interesting questions arise in this scenario, such as to whom and in what order to assign tokens. An ideal allocation scheme should incur the least delay, yet bring extra benefits such as avoiding communication hotspots, balancing work load, and reusing data. In this paper, we are concerned with how to minimize memory usage.
This token-based communication control scheme requires several extensions to the runtime in order to provide memory efficient communication. First, the runtime needs to intercept normal communication phases by injecting token-based control. Second, instead of letting an application pre-allocate a receive buffer, the runtime manages the message buffer as memory resource regulated by tokens. Next, we will examine implementing the token-based scheduling with both fetch-based and send-based schemes.
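The queue-and-token mechanics described above can be sketched as follows. This is a simplified, single-threaded model with a stub standing in for the runtime's actual transfer initiation, not the real Charm++ runtime code:

```c
#include <assert.h>

/* Hypothetical sketch of token-based scheduling: tentative fetch
 * requests are queued, and a transfer is started only when a token
 * (one unit of communication buffer memory) is available. */
#define MAX_REQ 64

static int queue[MAX_REQ];   /* queued request ids (FIFO) */
static int head, tail;       /* monotonic queue indices */
static int tokens;           /* free tokens */
static int in_flight;        /* transfers currently scheduled */

/* Stub for the runtime's transfer initiation. */
static void start_fetch(int req) { (void)req; in_flight++; }

void init_tokens(int n) { head = tail = in_flight = 0; tokens = n; }

/* Application posts a tentative fetch: schedule now if a token is
 * free, otherwise queue it for later. */
void post_fetch(int req)
{
    if (tokens > 0) { tokens--; start_fetch(req); }
    else            { queue[tail++ % MAX_REQ] = req; }
}

/* Called when a transfer's buffer is freed: pass the token directly
 * to the next queued request, or recycle it to the free pool. */
void complete_fetch(void)
{
    in_flight--;
    if (head < tail) start_fetch(queue[head++ % MAX_REQ]);
    else             tokens++;
}

int outstanding(void) { return in_flight; }
```

With 2 tokens, posting three fetches keeps only two in flight; completing one immediately hands its token to the queued third request.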
3.3.1 Fetch-based vs Send-based Scheme
Different communication primitives pose different levels of difficulty when applying token-based runtime control to communication buffer memory. First, to control specific large data transfers, we need to define points of interception. Second, the system needs to know which party the control will affect. With a send-based model, as shown in Figure 3(a), we split the send-receive process into four phases for the purpose of this discussion:
1. application requests to send
2. runtime processes the requests
3. receiving system receives the message
4. receiving system delivers the message
In all four phases, memory is unavoidably consumed. With this model, runtime interception could happen during phase two or phase three. Regardless of where it takes place, in phase one memory must already be allocated by the sender to prepare the send data, and memory is potentially needed to buffer the data at phase two. For the sake of discussion, assume the runtime intercepts at phase two. At this stage, the best it can do is to avoid the memory explosion that sending this data would cause at the receiver side. Achieving that requires knowledge of the receiver's memory usage: either pre-knowledge exists or new knowledge is acquired on demand. Both approaches, however, run the risk of that knowledge being outdated, and the latter brings extra delay for communicating with the receiver in an on-demand fashion.
With a fetch-based model, as shown in Figure 3(b), the protocol proceeds in seven phases:
1. application poses tentative fetch
2. runtime processes the request
3. destination runtime receives request
4. destination application returns data
5. data passed to destination runtime
6. data passed to requesting runtime
7. data delivered to requesting application
Combining the seven-phase model with the idea of runtime allocation of memory, the user does not preallocate memory for receiving data at phase one. Instead, the runtime allocates the memory when the fetched data is received at phase six. By decoupling the fetch request from the memory allocation, consumption of memory is pushed back to the later phases four through seven. Under this model, we intentionally choose runtime interception at phase two, in which the requester side's runtime queues up the tentative fetch requests and selectively schedules those that fit within the limits of its memory. Thus all memory allocation occurring at later phases falls under control. Moreover, the runtime making the scheduling decision only requires knowledge about itself to avoid bursts of memory allocation.
The advantage of the fetch-based model over the send-based model is simplicity of implementation and effectiveness of control. In the former, interception occurs before any memory allocation takes place; in the latter, it happens only after the data has been generated and buffered. The side effect of using fetch, however, is losing the explicit synchronization provided by the send-receive pair. Thus more careful synchronization is needed when fetch is applied in the user program.
3.3.2 Detecting Memory Availability
Having addressed how the runtime regulates communication via a token-based scheduling policy with the fetch-based model, the practical question remaining is how to detect the amount of memory available to the job at each compute node, and hence decide on the number of tokens. Since memory availability varies over time with the transient memory usage pattern, keeping track of it is mandatory in order to adapt the number of tokens during the lifetime of the program.
Specifically, we need to calculate the application memory usage ($M_A$) and the total available memory on a node ($M$)\(^1\). Application memory usage ($M_A$) can easily be instrumented in a memory allocator at each malloc and free; it is the peak memory usage measured over a certain period of the execution. Detecting memory availability on a node, however, is a nontrivial task [14] because most operating systems do not provide accurate free-memory information. Often, even when the operating system reports that the amount of free memory is close to zero, a large memory request from a process can still be accommodated. This is because many operating systems use as much free memory as they can as buffer cache, which can be reclaimed for user memory requests.
In this paper, we focus on dedicated parallel environments with no time sharing among user applications. The total amount of physical memory available on a compute node therefore stays relatively stable during the execution of a compute job, so the application only needs to detect physical memory availability at start time. A simple way to estimate the available physical memory is to take the total amount of physical RAM installed and subtract a certain amount (say 100MB) to leave room for the kernel and daemons. A more reliable way is to allocate and touch as much memory as possible until a page fault occurs, observing how much of the temporarily claimed memory can be maintained in the program's non-swapped physical memory (as reported in the RSS field of the Unix top utility); this amount can be used to define $M$.
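The first, simpler estimate can be sketched with the `_SC_PHYS_PAGES` query, which is widely available on Linux and several other Unixes though not mandated by POSIX; the fixed headroom value here is an assumption, not a measurement:

```c
#include <unistd.h>
#include <assert.h>

/* Estimate available physical memory as total installed RAM minus a
 * fixed headroom for the kernel and daemons.  Returns bytes, 0 if the
 * headroom exceeds installed RAM, or -1 if the query is unsupported. */
long long estimate_avail_mem(long long headroom_bytes)
{
    long long pages = sysconf(_SC_PHYS_PAGES);
    long long page_size = sysconf(_SC_PAGE_SIZE);
    if (pages < 0 || page_size < 0)
        return -1;                    /* not supported on this system */
    long long m = pages * page_size - headroom_bytes;
    return m > 0 ? m : 0;
}
```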
In our current implementation, we assume that an application's base memory usage stays relatively stable, and therefore we calculate $M_C$ only once\(^1\) and use it as the maximum amount of transient memory that the runtime is allowed to use for communication. In the future we plan to extend this scheme with token adaptivity, which is discussed in Section 5.

\(^1\) \( M_C = M - M_A \)
3.4 Guaranteed Progress and Deadlock Freedom
With token-based flow control, parallel threads issue a series of fetch-data requests and later block waiting on them, which can be represented by a fetch set:
\[ R_n = \{F_1, F_2, \ldots, F_n, W\} \quad (2) \]
where \( F_i \) is an issued fetch-data request and \( W \) is the waitall. The blocking wait introduces the possibility of deadlock if there are dependencies between threads. To simplify the task of avoiding deadlocks at the runtime level, without knowledge of application dependencies, we make two assumptions. The first is the atomicity of the fetch requests \( (F_1, \ldots, F_n) \) posted by any single thread. Any thread executes in a pattern of posting fetch-data requests, doing computation, and later waiting on the requests. Atomicity requires that the issuing of the fetch requests \( (F_1, \ldots, F_n) \) be atomic, which guarantees that fetch-data requests from different threads do not interleave in the request queue. Under this assumption, the runtime can view an application simply as a sequence of fetch sets \( (R_s) \):
\[ \{R_1, R_2, \ldots, R_k\} \quad (3) \]
where \( R_i \), defined in (2), is issued by a particular thread of the application. This assumption allows the runtime to execute the fetch requests in the order they are received and fulfill the waits in the same order, thread by thread. It avoids the detection of thread dependencies and significantly reduces unnecessary implementation complexity. The second assumption is that the number of tokens available suffices for the progress of any single parallel thread, that is, any \( R_i \) can be satisfied memory-wise. This simply guarantees enough resources to make at least one thread progress. Under these two assumptions, we call the program an \( f\text{-}live \) program, indicating that it is guaranteed to progress without deadlocking while waiting for fetch-data requests. In the Charm++ and AMPI systems, the execution of any thread is non-preemptive until it finishes and surrenders control to the scheduler, so atomicity is automatically satisfied without extra effort.
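The second assumption can be checked mechanically. The following hypothetical helper (our naming, not part of the runtime) verifies that every fetch set \( R_i \) fits in the token budget:

```c
#include <assert.h>

/* Check the second f-live assumption: every fetch set R_i must fit
 * in the token budget, i.e. no single thread posts more outstanding
 * fetches than there are tokens.  Returns 1 if f-live, 0 otherwise. */
int is_f_live(const int *fetches_per_thread, int nthreads, int ntokens)
{
    for (int i = 0; i < nthreads; i++)
        if (fetches_per_thread[i] > ntokens)
            return 0;   /* this R_i can never be satisfied */
    return 1;
}
```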
3.5 Applications in MPI
The above ideas on runtime control of communication-induced memory allocation can be applied to MPI implementations. We use the MPI_Alltoall as an example in this section to illustrate our implementation in the Adaptive MPI runtime.
In MPICH, the default implementation of MPI_Alltoall uses different algorithms depending on the size of messages and communicators. For small messages (less than 256 bytes), MPICH uses a very efficient algorithm by Bruck et al. [1]. It is a store-and-forward algorithm that takes \( \log p \) steps, where \( p \) is the number of processors. Because the messages are small, this algorithm poses no memory issue. For medium-size messages (less than 32KB), MPICH uses an algorithm that posts all irecvs and isends and then does a waitall, which, however, requires significant transient memory for communication and does not scale to a very large number of processors. For example, sending a 16KB message to 32,000 processors (BlueGene/L, for example) requires about 512MB of transient memory buffer, which barely fits in BlueGene/L's memory. For large messages, MPICH switches to a memory-conservative implementation that uses a pairwise exchange algorithm, which takes \( p - 1 \) steps for \( p \) processors. This pairwise exchange algorithm ensures that the transient memory required between two processors in a step is strictly limited. It may, however, fail to fully utilize the communication bandwidth even when there is enough memory for transient message buffers. Clearly, without memory awareness, it is difficult for a runtime to choose the best algorithm that is both memory and speed efficient: it has to pick either the second algorithm, which communicates aggressively assuming memory is sufficient, or the third, which restricts the communication to a single pair of send/recv between two processors at a time, assuming memory is extremely limited.
Our new implementation of MPI_Alltoall treats medium and large messages in a way that adapts to the available physical memory. The MPI runtime issues communication requests aggressively, while the underlying communication runtime serves the requests using tokens, so communication progresses according to physical memory availability. When physical memory allows, this scheme processes as many communication requests as possible; when physical memory cannot serve all requests, it restricts the outstanding communication. In Section 4.2 we demonstrate that the new implementation achieves better performance than the default MPI implementation.
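The adaptive idea can be illustrated as a token-limited window over the transfers. The stub functions below stand in for the runtime's real isend/irecv and completion handling, and the function name is ours; this is a sketch of the scheduling logic, not the actual AMPI implementation:

```c
#include <assert.h>

/* Sketch of a windowed all-to-all: issue all p-1 transfers, but keep
 * at most `ntokens` in flight at once. */
static int in_flight_now, in_flight_peak;

/* Stubs for transfer start and completion. */
static void do_transfer(int peer) { (void)peer;
    if (++in_flight_now > in_flight_peak) in_flight_peak = in_flight_now; }
static void await_one(void) { in_flight_now--; }

/* Token-limited exchange with p-1 peers; returns peak concurrency. */
int windowed_alltoall(int p, int ntokens)
{
    in_flight_now = in_flight_peak = 0;
    int issued = 0, done = 0;
    while (done < p - 1) {
        while (issued < p - 1 && in_flight_now < ntokens)
            do_transfer(++issued);    /* token available: start one */
        await_one();                  /* wait for one to finish */
        done++;
    }
    return in_flight_peak;
}
```

With plentiful tokens this behaves like the aggressive medium-message algorithm (all transfers in flight); with one token per peer pair it degenerates toward the conservative pairwise exchange.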
4 Performance Case Studies
We evaluated our token-based memory control scheme with several Charm++ and MPI applications on several platforms, and compared it with the normal scheme without memory control. For the rest of the paper, the normal scheme refers to the send-based scheme without any control on communication, while the controlled scheme indicates the fetch-based scheme with token-based flow control; specifically, controlled-8token denotes a controlled scheme with 8 tokens. In our experiments, one token represents one message.
Two clusters are used as testing platforms. The first is an x86 Linux cluster with 8 nodes. Each node has 4 Pentium III processors and 512MB of shared memory; each processor runs at 500MHz and has 512KB of cache. 100Mbps Ethernet is used as the interconnect. The second is an AMD64 Linux cluster, where each node has 2 processors and 1GB of shared memory; its processors are 1.8GHz AMD Opterons with 1MB of cache. Nodes are connected with Gigabit Ethernet. From now on, we refer to them as the x86 cluster and the AMD64 cluster, respectively.
4.1 Jacobi (Charm++)
The first test program is the 3D stencil program described in Section 3.1. Figure 4(a) shows the execution time on the x86 cluster of the normal method vs the controlled method. A problem of size $X \times 1024 \times 128$, where $X$ varies from 6656 to 7680, is partitioned into $128 \times 128 \times 128$ sub-cubes. A 7-point centered stencil is used, which leads to a ghost-cell size of $128 \times 128 \times 1$. As soon as the total data grid exceeds a certain threshold determined by the system memory size, the execution time of the normal method blows up, while the controlled one stays relatively flat and curves up much later. From Table 1 we see that disk swapping picks up at the third data point to 2469 page faults and increases more than tenfold at the fourth point, which corresponds to the nonlinear increase of execution time in Figure 4(a). The controlled scheme also starts to swap at $X = 7552$ and its performance degrades; at this point, the non-transient memory of the program has exceeded the available memory of the system. Figure 4(b) plots the memory usage of the same experiment. The height of each bar represents the total memory allocated during the lifetime of the execution, and the top part is the portion of transient memory used for the ghost-cell data transfers. While the static memory consumption of the two methods is almost identical, the normal method has a much larger transient usage, whereas the controlled one uses so little that it is almost invisible in the graph.
Figure 5 shows a sample run of the program on the AMD64 cluster. The problem being solved is of size $X \times 256 \times 256$, where $X$ varies from 3840 to 7040, and is decomposed into sub-cubes of $64 \times 64 \times 64$. A 13-point centered stencil is applied, and the resulting ghost cells in this case are of size $64 \times 64 \times 2$. With the wider ghost cells and finer decomposition, the performance improvement is even more substantial than in Figure 4. Furthermore, the undegraded execution zone is greatly extended.
In both cases, we see good performance improvement in execution time along with reduced memory usage.
Figure 4. Jacobi running on the x86 cluster. Total data grid is $X \times 1024 \times 128$, with $X$ stepping from 6656 to 7680; run on 8 nodes (1 processor per node).
Table 1. Number of page faults during a 20-iteration period of the Jacobi program, running on the x86 cluster
<table>
<thead>
<tr>
<th>scheme \ X</th>
<th>6656</th>
<th>6784</th>
<th>6912</th>
<th>7040</th>
<th>7168</th>
<th>7296</th>
<th>7424</th>
<th>7552</th>
<th>7680</th>
</tr>
</thead>
<tbody>
<tr>
<td>normal scheme</td>
<td>9</td>
<td>42</td>
<td>2469</td>
<td>45510</td>
<td>32528</td>
<td>54505</td>
<td>42105</td>
<td>73987</td>
<td>90632</td>
</tr>
<tr>
<td>controlled-6token</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>11</td>
<td>20</td>
<td>1043</td>
<td>7521</td>
</tr>
</tbody>
</table>
4.2 NAS Benchmark FT
Here we test the performance of our MPI_Alltoall implementation with the well-known NAS FT benchmark [8]. FT solves a three-dimensional partial differential equation using forward and inverse FFTs, and hence performs several MPI_Alltoall operations per iteration with relatively large data sizes. We run the unmodified FT benchmark with AMPI and compare the performance of the different schemes. Since the data size multiplies from class A to B to C rather than increasing incrementally, instead of controlling the problem data size we control the amount of system memory available to the program. This is achieved by running a small program that uses a specified amount of memory, pinning every memory page of the region periodically. Since each compute node is a 4-way SMP, running the memory-using program on one processor while running the FT program on another introduces no contention for CPU time between them.
We solve the class B problem on 8 nodes of the x86 cluster. Class B consists of a 3D data grid of $512 \times 256 \times 256$. The problem is decomposed for 16, 32, 64, 128 and 256 virtual processors respectively, running on 8 nodes of the cluster, 1 processor per node. Three methods are compared: the normal scheme, the controlled scheme with 4 tokens, and the controlled scheme with 8 tokens. Figure 6(a) shows the execution time of FT.B.128, which is decomposed for 128 virtual processors. As we can see, the controlled
scheme has improved performance when available memory is less than 320MB. Table 3 shows the number of page faults occurring during the same sample run. Figure 6(b) illustrates the execution time of the three methods for different virtual processor counts when system memory is 260MB. In the normal scheme, as the number of virtual processors increases for the same class B problem, the execution time first decreases and then increases, due to the combined effect of cache performance gains and finer-grained message overhead. For the controlled schemes, however, a larger number of virtual processors gives the runtime more opportunities to overlap communication with computation, leading to better performance.
Overall, flow control for large MPI_Alltoall communication improves performance when memory is limited by reducing peak memory usage. With virtualization, this effect is amplified and the improvement is greater.
5 Conclusion
We presented a memory-aware runtime system that controls communication-induced memory fluctuation, helping applications with large memory footprints to stay within the bounds of physical memory and avoid disk swapping. The runtime imposes flow control via communication tokens for large data transfers and thus controls the peak transient memory consumed by communication. This runtime support is implemented in the Charm++ and Adaptive MPI runtime systems.
Figure 6. NAS FT Benchmark, running on x86 cluster, time taken for 10 iterations
<table>
<thead>
<tr>
<th></th>
<th>220MB</th>
<th>240MB</th>
<th>260MB</th>
<th>280MB</th>
<th>300MB</th>
<th>320MB</th>
<th>340MB</th>
</tr>
</thead>
<tbody>
<tr>
<td>normal scheme</td>
<td>65766</td>
<td>55580</td>
<td>32355</td>
<td>24346</td>
<td>8417</td>
<td>15</td>
<td>6</td>
</tr>
<tr>
<td>controlled-4token</td>
<td>16516</td>
<td>1143</td>
<td>3</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>controlled-8token</td>
<td>17053</td>
<td>1185</td>
<td>58</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 3. Number of page faults during a 2-iteration period of the FT.B.128 run, on the x86 cluster
In the future we plan to enhance our token-based memory control scheme to adapt to the availability of physical memory. This would allow our scheme to work efficiently in time-sharing environments, where memory availability is influenced by other applications running on the same node, and to handle dramatic variation in the application's own memory usage. Token adaptivity can be realized by periodically probing both the available physical memory and the current application memory usage, and using these to adapt the number of tokens during the execution of the program. We also plan to combine the runtime we developed with out-of-core methods, which provide an effective way of controlling both the application memory and the transient communication memory to further eliminate disk swapping overhead.
References
Avionics Modernization and the C-130J Software Factory
Richard Conn, Stephen Traub, and Steven Chung
Lockheed Martin Aeronautics Company
The rollout of the first production C-130 aircraft, the C-130A, took place on March 10, 1955. Since then, more than 2,100 C-130s have been built in dozens of variations and are flown by more than 60 nations worldwide. They carry troops, vehicles, and armaments into battle. They drop paratroopers and supplies from the sky. They serve as airborne and ground refuelers. They serve as flying hospitals, hurricane hunters, and provide emergency evacuation and humanitarian relief. They perform airborne early warning and maritime surveillance. They’ve worn skis in Antarctica and have helped recover space capsules. In May 1992, the 2,000th C-130, a C-130H, was delivered. In September 1992, formal development of the C-130J began. Unlike its predecessors, the C-130J is a software intensive system employing modern avionics that have made significant improvements in its performance. By March 2001, the C-130J flew with a complete complement of mission computer software, setting 50 world records. This article presents insight into Lockheed Martin’s modernization of the C-130 airlifter family.
The C-130J looks like the earlier models, but it is really a brand new airplane with improved performance [1]. A key difference is that the C-130J is a software intensive system, where the earlier models were largely mechanical aircraft. Compared to the production C-130E, here are the C-130J improvements:
- Maximum speed is 21 percent greater.
- Climbing time is 50 percent less.
- Cruising altitude is 40 percent higher.
- Range is 40 percent longer.
The introduction of software intensive systems to the aircraft contributed significantly to all of these improvements. By June 1999, the C-130J had set 50 world aeronautical records in two aircraft categories. Twenty-one records were set in the Short Takeoff and Landing, Class N category for speed over a 1,000 and 2,000 kilometer closed course and for altitude with payload. The other 29 records were set in the Short Takeoff and Landing, Class N, Turboprop category for speed over 1,000 and 2,000 kilometer closed courses, altitude with payload, and time-to-climb to 3,000, 6,000, and 9,000 meters.
The C-130J also offers reduced manpower requirements, lower operating costs, lower support costs, and lower life-cycle costs. Here are the three key distinguishing features of the C-130J:
- A new propulsion system featuring four Full-Authority Digital Engine Control Allison AE2100D3 engines that generate 29 percent more thrust while increasing fuel efficiency by 15 percent.
- Advanced avionics technology featuring two holographic heads-up displays and four multifunctional heads-down Liquid Crystal Displays for aircraft flight control, onboard systems monitoring and control, and navigation; the displays are night vision imaging system compatible.
- Two mission computers and two backup bus interface units provide information flow and dual redundancy for the onboard systems, including an extensive integrated diagnostics system.
“The C-130J also offers reduced manpower requirements, lower operating costs, lower support costs, and lower life-cycle costs.”
The C-130J family started with the 382J, a commercial aircraft that was created specifically to achieve Type Certification by the Federal Aviation Administration (FAA). FAA Type Certification was at Level A (the highest level) of the DO-178B standard. This milestone established that the C-130J family has complied with the safety critical requirements of the FAA should we later have a commercial customer. Once FAA Type Certification was achieved, the C-130J was derived from the 382J, establishing the military baseline software for all future variants of the aircraft. Each major version of software for the C-130J is called a block, and more than 96 percent of the 382J software (Block 2) was reused in creating the C-130J military baseline (Block 3). Ninety percent or more of the military baseline software (Block 3) has been reused so far for each variant of the aircraft (Block 4):
- Block 1: basic airworthiness software.
- Block 2: safety-critical 382J aircraft software.
- Block 3: military baseline of the C-130J aircraft software.
- Block 4: custom variants of the C-130J aircraft software.
- Block 5: Block Upgrade Program.
- Beyond Block 5: Hercules Improvement Plan for software/systems will address future C-130J upgrades as a continuous process and product improvement activity and to address new and changed customer needs.
Each block provided a foundation of reusable software for the following blocks. As of March 2000, our level of software reuse typically exceeded 90 percent for most of our products:
- Block 3 military software baseline - 96 percent reused from Block 2.
- Block 4 software for the Royal Air Force - 95 percent reused from Block 3.
- Block 4 software for the Australian Air Force - 95 percent reused from Block 3.
- Block 4 software for the United States Air Force - 97 percent reused from Block 3.
- Block 4 software for the Italian Air Force - 90 percent reused from Block 3.
- Block 4 software for the Tanker variant - 90 percent reused from Block 3.
- Reuse on the C-27J aircraft, the C-5 Aircraft Modernization Program, and proposed for the C-130 Aircraft Modernization Program is yet to be measured but is expected to be equally high.
The first flight of the C-130J was April 1996 with a minimum of onboard software. The C-130J flew with a complete mission computer software suite (Block 5.3) in March 2001. The new software is expected to be installed in the deployed worldwide fleet of C-130J aircraft during a one-year period beginning the summer of 2001 after Air Force qualification testing is completed at the Air Force Flight Test Center at Edwards Air Force Base.
Plans for reuse of C-130J software and technology were laid out during the early days of the software development effort. The C-130J’s advanced avionics technology and mission computer software are already being reused in the C-27J aircraft, the C-5 Aircraft Modernization Program, and Lockheed Martin’s proposed Joint Strike Fighter. C-130J avionics and software reuse has also been proposed for the Lockheed Martin’s C-130 Aircraft Modernization Program that is intended to incorporate newer technology into the older C-130 aircraft in the fleet.
The C-130J Aircraft as a Software Intensive System
The C-130J aircraft is an integrated collection of software systems produced by more than 25 suppliers. These systems, which are developed in compliance with the Lockheed Martin C-130J Tier I Software Development Plan, are integrated with the devices on the aircraft such as the engines, pneumatics, flight station displays, and the radar. A common Tier I Software Development Plan helped to enforce commonality between all the suppliers, making integration of their products into the air vehicle easier.
The Lockheed Martin C-130J Software Integrated Product Team develops the air vehicle and ground-based data system software also in compliance with the Tier I plan. Thus Lockheed put the same commonality requirements on itself as it did its suppliers. All suppliers, including Lockheed itself, produced their own Tier II Software Development Plans per directions in the Tier I Software Development Plan.
The air vehicle software consists of the Mission Computer (MC) Operational Flight Program (OFP) and Bus Interface Unit (BIU) OFP. The MC OFP manages the overall software operations within the C-130J aircraft and executes within a normal or backup mode. Both modes of the MC OFP include the primary roles of maintaining a central database, providing executive control for all software functions, providing interfaces to the MIL-STD-1553 data buses, and performing fault detection/fault isolation.
The BIU OFP operates in conjunction with the MC OFP in performing the integration of the C-130J avionics. The BIU OFP operates within a normal mode or an MC backup mode. The primary roles of the BIU OFP during normal mode operations are monitoring health, storing and validating critical data, and providing interfaces to non-MIL-STD-1553B data sources. The primary roles of the BIU OFP during MC backup mode operations include acquiring the role as bus controller and performing critical functions.
The ground-based data system software includes the Ground Maintenance System (GMS) and the Organizational Maintenance System (OMS). The GMS is a ground-based computer system that provides a central database for maintaining line-replaceable unit (LRU) configuration information and archived aircraft history for each tail number in the C-130J fleet or squadron. The GMS processes the maintenance-related data recorded on on-board removable memory modules on the C-130J aircraft.
The GMS provides an automated or manual flight crew maintenance debrief function and reads data stored on the removable memory module. The GMS validates the downloaded data, runs automatic fault isolation routines, calculates health and usage parameters, and generates maintenance work orders as required. The system processes structural and engine data to monitor component life and supports configuration control and status reporting of the air vehicle. The GMS maintains a variety of printed reports to support aircraft maintenance. The GMS is also hosted on the Portable Maintenance Aid, which is loose equipment for each C-130J. This capability is provided to support the need to forward deploy the aircraft for operations away from its home base.
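The GMS debrief steps above (validate the downloaded records, run fault isolation, generate work orders) can be sketched as a small pipeline. The record fields, fault codes, and LRU names here are invented for illustration and are not the actual GMS data format:

```python
def gms_debrief(records):
    """Sketch of the GMS post-flight pipeline: validate downloaded
    records, isolate faults, and emit maintenance work orders."""
    # Validation: keep only records with the fields we need.
    valid = [r for r in records if "lru" in r and "code" in r]
    # Fault isolation: a nonzero code flags a failed LRU (invented rule).
    faults = [r for r in valid if r["code"] != 0]
    # Work-order generation, one per isolated fault.
    return [f"Inspect {r['lru']} (fault {r['code']})" for r in faults]

orders = gms_debrief([
    {"lru": "radar", "code": 0},
    {"lru": "engine-2", "code": 17},
    {"bad": "record"},               # fails validation, dropped
])
print(orders)  # ['Inspect engine-2 (fault 17)']
```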
The OMS provides the user interface between the maintainer and the C-130J aircraft systems for performing organizational level maintenance on the aircraft. The OMS supports the maintainers by accessing electronic technical orders, troubleshooting aircraft failures, evaluating status of aircraft systems, checking configuration of aircraft systems, and uploading and downloading files to and from the aircraft systems. The GMS interfaces with the OMS for maintenance work order processing, status reporting of maintenance actions performed, and recording of diagnostic data during ground maintenance.
The Software Factory
In the culture of our aircraft manufacturing facility, software is a part on the aircraft, tracked just like the engines, pneumatic systems, and radar systems. The C-130J Software Integrated Product Team operates a software factory that produces the air vehicle and ground-based data system software parts and approves the software parts for all computerized devices on the aircraft. The air vehicle software parts are written in Ada (250,000 lines of code), and the ground-based data system software parts are written in C++ and a fourth generation language (400,000 lines of code total) for each aircraft. Each software part has a part number, a set of associated drawings, and an assembly (such as a removable memory module). The drawings associated with each software part include the following:
- **Software Item Drawings** assign a unique part number to each computer software configuration item that is 1) installed on the aircraft, 2) used to create or prepare a part for aircraft installation, or 3) used to install or transfer a software item into an aircraft part. The notes on each Software Item Drawing describe 1) the host hardware part number, 2) the image file names and software version identities or a reference to the document containing specific software configuration information (i.e. version description document), and 3) the software-to-software compatibility dependencies.
- **Software Assembly Drawings** are produced for each software assembly (integrated collection of software items). A Software Assembly Drawing describes 1) a software assembly used in the production of a deliverable part, or 2) a software assembly delivered to a customer. Software Assembly Drawings assign a unique part number to each release of each software assembly. The parts list in the Software Assembly Drawing describes the software items (by part number and location code) contained on the assembly and the specific media (i.e., 3.5-inch diskette, 4mm tape, etc.) of which the assembly is made. The notes on the Software Assembly Drawing describe 1) the configuration of any vendor-supplied software items (i.e., reference to Vendor’s Version Description Document), 2) the specific software assembly instructions used to create the software assembly, and 3) the contents of the label placed on the completed software assembly.
- **Software Assembly Instruction Drawings** are produced for each deliverable software assembly. The Software Assembly Instruction Drawing describes the required hardware equipment, software environment, personnel, access privileges, and detailed procedures necessary to produce the software assembly.
- **Software Installation Instruction Drawings** are produced for each software item installed into a deliverable part. The Software Installation Instruction Drawing describes the required hardware equipment, software environment, personnel, access privileges, and detailed procedures necessary to install the software item(s) into the host hardware part.
- **Software Index Drawings** facilitate the identification of customer deliverable software on each aircraft model, thus allowing the software design organization to control interim software releases to production aircraft without changing the master index for production software releases that are not delivered to a customer.
- **Software Control Drawings** are produced for each C-130J customer. The Software Control Drawing details the software and hardware combinations delivered to each customer. The body of the Software Control Drawing contains the following information for each deliverable software item: 1) find number, 2) software description, 3) identification of the software manufacturer, 4) software part number, 5) software version identity, 6) the aircraft model, version, serialization usage of the software/hardware combination, 7) note references, 8) hardware description, 9) identification of the hardware manufacturer, and 10) the host hardware part number. Notes in the Software Control Drawing describe: 1) which software items are loadable in the field and 2) any software compatibility/usage limitations.
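The scheme above, in which each software part carries a part number and a set of associated drawings, can be sketched as a small data model. The classes, field names, and part numbers below are hypothetical illustrations, not Lockheed's actual drawing system:

```python
from dataclasses import dataclass, field

@dataclass
class Drawing:
    kind: str          # e.g. "Software Item Drawing"
    number: str        # unique drawing number

@dataclass
class SoftwarePart:
    part_number: str   # the software part is tracked like any LRU
    version: str
    drawings: list = field(default_factory=list)

    def add_drawing(self, kind: str, number: str) -> None:
        self.drawings.append(Drawing(kind, number))

# Hypothetical part and drawing numbers for a mission computer OFP.
mc_ofp = SoftwarePart("PN-0001", "5.3")
mc_ofp.add_drawing("Software Item Drawing", "SID-0001")
mc_ofp.add_drawing("Software Assembly Drawing", "SAD-0001")
print(len(mc_ofp.drawings))  # 2
```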
The people who work in the C-130J Software Factory are collectively called knowledge workers, and they serve in many distinct roles such as software product managers, software requirements engineers, software development engineers, software test engineers, software process engineers, software quality assurance specialists, and documentation specialists. These knowledge workers are tied together through a digital nervous system (DNS), a term coined by Bill Gates of Microsoft [2]:
“A DNS comprises the digital processes that closely link every aspect of a company’s thoughts and actions. Basic operations such as finance and production, plus feedback from customers, are electronically accessible to a company’s knowledge workers, who use digital tools to quickly adapt and respond. The immediate availability of accurate information changes strategic thinking from a separate, stand-alone activity to an ongoing process integrated with regular business activities.”
Reuse
Software reuse has been at the heart of the C-130J Software Factory since development of the C-130J aircraft began in 1992. The program started with domain analysis and engineering, looking at what could be reused from other programs, defining the domain of the C-130J, and creating reusable assets that have been exploited throughout the program. The cost of developing air vehicle and ground-based data system software is the primary reason for Lockheed’s aggressive efforts to achieve real, effective reuse. Reuse has significantly lowered the life-cycle cost and program risk.
Many products of the C-130J Software Factory were designed from the beginning to be reusable:
- **Template-Based Design**: Six domain-specific design patterns were originally created to serve as class definitions for all device interfaces to the MC OFP and the BIU OFP. Since 1992, three more design patterns were created to address new technology transition, bringing the total to nine design patterns. Courseware was prepared to document these design patterns and teach newcomers how to use the patterns. The productivity gains, improved reliability, and reduced testing overhead provided by applying template-based design were observed throughout the development of the software.
- **Source Code**: For many device interfaces, source code used for other device interfaces could be reused with very minor modification. In addition, source code from previous blocks could be reused extensively on later blocks (note the reuse figures between Blocks 2 and 3 and Blocks 3 and 4 listed earlier).
- **Test Scripts**: Due to the definition of the classes of device interfaces, test scripts could also be reused. Requirements-based testing also helped by supporting automated generation of test cases directly from the requirements specifications.
- **Documentation**: Delivered and internal documentation was designed to be reusable, facilitating its production from one software build to the next.
- **Software Development Domain Specific Kits (DSKs)**: Commercially-available DSKs, such as Microsoft Visual Studio .NET and Microsoft Visual Basic for Applications, greatly enhance productivity. We also employ homegrown DSKs, such as our Data Collection System Version 3, which is a DSK designed to build data collection applications.
- **Common Software Development Tools**: Our Environment and Tools Working Group establishes a set of common software development tools, such as Rational APEX and Cadre Teamwork for use on several Lockheed Martin programs. We save cost in terms of both purchase price and training, and we gain by having more readily interchangeable personnel. Reuse is also enhanced in that tool-specific conversions are reduced or eliminated should an asset produced by one program be adopted by another.
- **Domain Knowledge**: Knowledge captured during the early domain analysis and engineering activities was stored in courseware, reusable as a teaching instrument throughout the life of the program.
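The template-based design bullet above describes shared patterns serving as class definitions for all device interfaces. The actual patterns are Ada class definitions; the Python sketch below only illustrates the idea of one common template with device-specific decode and validity checks. The device name, scaling factor, and limits are invented:

```python
from abc import ABC, abstractmethod

class DeviceInterface(ABC):
    """Common template for device interfaces: every device decodes a
    raw bus word and validates the result the same way."""

    def poll(self, raw: int) -> float:
        value = self.decode(raw)
        if not self.valid(value):
            raise ValueError(f"{type(self).__name__}: out-of-range reading")
        return value

    @abstractmethod
    def decode(self, raw: int) -> float: ...

    @abstractmethod
    def valid(self, value: float) -> bool: ...

class FuelQuantity(DeviceInterface):
    """One hypothetical device filling in the template."""
    def decode(self, raw: int) -> float:
        return raw * 0.5                    # invented scaling
    def valid(self, value: float) -> bool:
        return 0.0 <= value <= 40_000.0     # invented limits

print(FuelQuantity().poll(2000))  # 1000.0
```

New device interfaces reuse the template's polling and validation logic and supply only the two device-specific methods, which is the source of the productivity and testing gains the article reports.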
Challenges
The C-130J aircraft denotes a cultural change in a significant part of a major corporation from producing largely mechanical aircraft to producing software intensive aircraft. Such a change takes time for the culture to adapt, and there are many challenges that both the management and technical communities within that culture must face. These are the challenges faced by the C-130J Software Integrated Product Team:
- Building safety critical, high integrity software for an aircraft with corporate funding. Because the development of the C-130J was done without funding from external sources, such as the United States government, the corporate investment and risk were high.
- Reducing risk and life-cycle cost for a software intensive system with a 30-year life span by achieving effective software reuse.
- Designing a software intensive system that is adaptable to changing technology during a 30-year life span.
- Meeting the requirements of FAA Type Certification.
- Controlling changes and software versions in light of thousands of requirements against multiple baselines for multiple customers, and creating different builds for different customers concurrently – satisfying the needs of a diverse group of customers, each with their own unique requirements during a 30-year life span.
- Achieving Capability Maturity Model® Level 3 and ISO 9001 certifications and continuing the investment needed to maintain these certifications.
From a broad perspective, the challenges may be grouped into four areas: software reuse, process, certification (for CMM Level 3, ISO 9001, and the FAA), and culture. Within the domain of our company (aircraft development and manufacturing), these challenges were addressed from the point of view of the pre-software intensive culture that was already in place:
- Software reuse was one of the easier challenges to address. The concept of line replaceable units (LRUs) was already in management’s minds from a hardware perspective, so adding software parts as LRUs was not a significant leap. Neither was viewing those software parts as complex parts containing smaller component parts. Domain engineering was done at the beginning of the program, at a time when the development laboratories were not yet ready and the systems engineers were engaged in design and simulation. Ideas were also picked up from other existing aircraft programs, adding credibility to our domain engineering effort.
- Introducing a software process orientation was also an easier challenge to address. Management was already aware of manufacturing process concepts, so software development process concepts were not a significant leap in the early stages. A common Software Engineering Process Group was readily established to share ideas and infrastructure between the various software development Integrated Product Teams, such as the C-130J, F-22, C-5 AMP, and C-27.
The primary obstacle to our process definition efforts arose when management implemented a lean initiative to reduce waste in both the hardware and software processes. In the efforts to completely document the processes, it became evident how expensive a complete process description would be to produce. In describing our software development processes down to the level of following the trail of paper and electronic data between people’s desks, the C-130J Software Integrated Product Team alone ended up with 114 distinct processes in a hierarchy that was three levels deep.
This collection of process descriptions was a small part of the overall detailed process description for the development and manufacturing of the entire aircraft, which is currently incomplete and estimated to be between 3,000 and 5,000 distinct processes. The effort to create the detailed process description for the hardware side is continuing as we are moving to CMMI adoption.
- Certification activities were more challenging than software reuse and process. Our lean effort described in the previous bullet was a significant aid in our CMM Level 3 certification activities, and applying web technologies to describe our processes allowed us to present this information from the point of view of a CMM assessor, organized by Key Process Area and Key Practice. The introduction of automated data collection during the last three years has made it much easier to produce the evidence demanded by the CMM assessors, but gathering more than 300 artifacts for a CMM assessment is still a daunting task. The challenge of FAA Type Certification was similar to CMM Level 3 certification, and the ISO 9001 certification challenges fell nicely into place as our CMM Level 3 certification challenges were addressed.
- The cultural shift required by management to understand the issues and culture of the software engineers was our greatest challenge. Management expectations were originally high that software engineers could possess the same domain knowledge as systems engineers, and this was simply not the case. The mindset of someone with a master’s degree in mechanical or electrical engineering, especially if that degree was granted more than 10 years ago, is fundamentally different from the mindset of a contemporary software engineer.
Attempts were made to have systems engineers perform software engineering work – the success of these attempts was mixed. Over time, systems engineers and software engineers gradually came to understand each other’s mindsets, but occasional personnel turnover disrupted this understanding; we found a continual need to reeducate engineers on both sides.
Likewise, management’s acceptance of software engineering concepts has been gradual, again requiring reeducation with personnel turnovers. After a decade, the three groups – management, systems engineering, and software engineering – still do not completely accept each other’s mindsets. We expect this cultural difference to continue for some time to come.
The following statistics are noted in the more than 5 million source lines of code delivered to date:
<table>
<thead>
<tr>
<th>Statistic Tracked</th>
<th>1998</th>
<th>1999</th>
<th>2000</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of changes processed</td>
<td>2,430</td>
<td>2,350</td>
<td>2,115</td>
</tr>
<tr>
<td>Number of engineering software builds</td>
<td>240</td>
<td>300</td>
<td>330</td>
</tr>
<tr>
<td>Number of software qualification tests</td>
<td>79</td>
<td>85</td>
<td>81</td>
</tr>
<tr>
<td>Number of pages of documentation produced</td>
<td>472,500</td>
<td>564,200</td>
<td>531,010</td>
</tr>
<tr>
<td>Number of software tests executed</td>
<td>700,450</td>
<td>798,683</td>
<td>751,700</td>
</tr>
<tr>
<td>Test success percentage</td>
<td>98.27%</td>
<td>98.75%</td>
<td>99.00%</td>
</tr>
</tbody>
</table>
Table 1: Modern Avionics in the C-130J has Contributed to its Improved Performance
The C-130J software has been built for a 30-year life span. A lot can change in terms of the demands placed on the C-130J aircraft and its mission during these many years. Incorporation of a Global Air Traffic Management system and a comprehensive software maintenance plan are two of the efforts currently underway, and software production is continuing with a projection of more than 9 million lines of code delivered by the end of 2001. New missions, different requirements from new customers, changing requirements from existing customers, and the introduction of even newer technology to the aircraft are the key factors causing this software growth. Continual process improvement, particularly through the C-130J Digital Nervous System, is underway, and increasing levels of capability maturity, through CMM Level 4 to Level 5, are planned.
Lessons Learned
Many lessons were learned during the last decade of the C-130J software development. Here are some key lessons:
- Objectives and requirements must be nailed down as specifically as possible from the beginning, yet if the problem has any significant degree of complexity, it is never possible to get them entirely right the first time. Requirements traceability and requirements grading are required. Conduct software product evaluations on requirements as intensely as you would review the code.
- You can never have too many simulations or laboratory resources.
- Software engineering capability maturity is not enough by itself to improve the quality of an integrated system like an aircraft. Systems engineering and management capability maturity are also required.
- Driving a product by schedule is unavoidable. Be prepared to deal with it and be prepared to adapt when the schedule slips. Define all your processes and measure their performance. Remember that the last process in the sequence is not necessarily the source of the problem when a schedule slips.
- Automate testing as much as possible. Always plan on running a test again. Always base test cases on requirements, trace test cases to those requirements, and employ automated tools to build your test cases from your requirements specifications when possible.
- Successful reuse requires a significant up-front cost and an effective, compelling producer/consumer model that makes it economically viable. Management must see reuse values and accept the costs as well as the benefits.
- Measurement comes with capability maturity, but no measurements can replace the in-depth, detailed knowledge of the people on the development line. Management must journey to the (software) factory floor before they can really understand the issues.
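The lesson on automated, requirements-based testing can be sketched as test cases generated from, and traced back to, requirement records. The requirement IDs, the specification fields, and the unit under test below are all invented for illustration:

```python
# Each requirement record yields a traceable test case: the case
# carries the requirement ID so failures point back to a requirement.
requirements = {
    "REQ-101": {"desc": "value is doubled", "input": 5, "expected": 10},
    "REQ-205": {"desc": "value is doubled", "input": 7, "expected": 14},
}

def unit_under_test(x):      # stand-in for the real function
    return 2 * x

def generate_cases(reqs):
    """Derive (requirement-id, input, expected) triples from the specs."""
    for req_id, spec in sorted(reqs.items()):
        yield req_id, spec["input"], spec["expected"]

results = {req_id: unit_under_test(x) == want
           for req_id, x, want in generate_cases(requirements)}
print(results)  # {'REQ-101': True, 'REQ-205': True}
```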
References
About the Authors
Richard L. Conn has more than 20 years experience in software engineering and project management. Conn is currently the software process engineer for the C-130J Airlifter at Lockheed Martin Aeronautics Company. He graduated with bachelor’s and master’s degrees in computer science from Rose-Hulman Institute of Technology in 1976 and the University of Illinois in 1978, respectively. Conn was an Army officer from 1978-82 at the Army’s Satellite Communications Agency and the Air Force Institute of Technology, where he taught computer science. Conn was a member of the Federal Advisory Board for Ada and a distinguished reviewer of the Department of Defense’s Software Reuse Technology Road Map.
Stephen M. Traub has more than 20 years experience in software engineering and project management. Traub is currently the software designated engineering representative at Lockheed Martin Aeronautics Company on behalf of the Federal Aviation Administration. Graduating from Elon University in North Carolina in 1984, Traub worked for Unisys from 1980-1984 as the principal software engineer for Weapons Assignment tasks for several Navy shipboard systems. He has been at Lockheed Martin since 1984, first working on the C-5B aircraft, and then working on the C-130J in the roles of Mission Computer Software Development lead, software product manager, and Software Integrated Product Team lead.
Steven J. Chung has 18 years of experience in software engineering and project management. Chung is currently the Software Integrated Product Team lead for the C-130J Airlifter at Lockheed Martin Aeronautics Company. Graduating from the University of South Florida in 1983, he worked for Honeywell Space Systems as a software engineer on the Space Shuttle and the Advanced Space Communications Technology programs and E-Systems on a real-time communications network. Chung came to the C-130J program at Lockheed in 1996 as a staff engineer and was promoted to Software Integrated Product Team lead in 2001.
The MeDoc Distributed Electronic Library: Accounting and Security Aspects
Michael Breu, Anne Brüggemann-Klein, Cornelia Haber, Ricarda Weber
Research Institute for Applied Software Technology (FAST e.V.) and
Technische Universität, München, Germany
e-mail: breu@fast.de, brueggem@informatik.tu-muenchen.de
ABSTRACT
The MeDoc service provides access to a distributed full-text library for computer scientists over the Internet. Since the library provides commercial information products, accounting and security aspects are of considerable importance in this electronic-publishing project.
MeDoc has developed business, cost, and payment models suitable for electronic library services. The partners cooperating in the MeDoc service are users, providers and producers of information products. Their business interaction is based on trade as opposed to systems financed by advertising.
The cost models offered to the users are various forms of subscription and 'pay per view' purchase. As payment models, both credit and debit models are considered suitable for the MeDoc service. Initially only registered users are admitted to the MeDoc library, so the users can be charged via accounts. Currently a clearing agency handles the actual invoice process for the MeDoc service.
To secure the communication over the Internet within the MeDoc library, several existing implementations of cryptographic algorithms have been evaluated against the MeDoc requirements analysis. Communication channels in MeDoc are now secured by transparent encryption mechanisms based on SSL.
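A minimal client-side sketch of the kind of transparently encrypted channel described above, using Python's standard `ssl` module. This API postdates the 1997-era MeDoc implementation and is only meant to show the configuration such a layer establishes: certificate verification and hostname checking on by default:

```python
import ssl

# A default client context verifies the server certificate against
# the system trust store and checks the hostname; payload encryption
# is then transparent to the application, as in the MeDoc design.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A socket wrapped with `context.wrap_socket(sock, server_hostname=...)` would then carry library traffic without the application handling any cryptography itself.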
The mechanisms described are implemented in a prototype that has been evaluated in a first field test from the beginning of 1997.
Introduction
MeDoc [MeD96] (Multimedia electronic Documents) is a German digital library project that brings together 12 German and international publishing houses on the producer side and 24 universities and industrial user institutions on the user side. The project is led by a consortium consisting of the German society of computer professionals (GI), the FIZ Karlsruhe, a database provider for technical and scientific information, and the scientific Springer Publishers. The project sees itself not as a pure research project but intends to collect hands-on experience to start up a professional service after August 1997.
The aim of this research and development project is to initialize the MeDoc service: a distributed digital computer science library that makes available a critical amount of literature at the desktop of computer scientists, students and practitioners all over Germany [BDG96a; BDG96b; DM96].
1 The Project MeDoc is sponsored by the German Ministry for Education, Science, Research and Technology (no. 08 C 7829 6). The project's homepage is http://medoc.informatik.tu-muenchen.de
MeDoc aims at making about 50 books and 25 journals accessible via Internet by the end of August 1997. Some of them are already available. These books and journals are mostly electronic editions of print versions. But also multimedia supplements of print books are provided which cannot be published by traditional means.
With the transition from classical libraries to digital libraries, the distribution of responsibilities between publishing houses, library organizations and end-users is changing. Electronic books and journals not only have new properties that provide added value (e.g., full-text searching, audio-clips, video animations), but electronic editions on the World Wide Web also have potentially a far larger audience. If you put electronic versions of classical paper books or journals on the Internet, the traditional business models have to be reconsidered and new cost models for the usage of electronic documents must be applied.
The MeDoc service provides a distributed electronic full-text library of high quality computer-science literature. This library can only be furnished with commercial products if usage is billable and protected. The involvement of commercial partners that contribute their standard offer in books and journals requires the development and installation of a commonly agreed business model and the implementation and evaluation of a variety of cost models to charge for the usage of the services of the MeDoc library. Therefore flexible business, cost and payment models and user-transparent ways to secure the communication and the privacy of the users have to be provided.
There are several digital library initiatives that deal with aspects of a billable digital library, such as the United States initiative [DL95; CKP+95], the British initiative [UK96] or another German initiative, IBIS [Neu96]. Also several expositions are treating the subject of pricing electronic services, such as [Day94a; Day94b; GSW96; SNFY96] or [Var96], or generally talk about commercial infrastructures for digital libraries, like [Sch96]. Still there is no comprehensive foundation of business and cost models for digital libraries.
A requirements-analysis phase (see [BK96]) has identified the basic concepts and models for accounting and a security policy for MeDoc. We first introduce the basic business model of the MeDoc service. Based on this business model a variety of cost models can be applied, which are discussed next. We sketch which implementation requirements must be met to realize these cost models. Then we analyze the security requirements and evaluate existing solutions. Finally, we describe the configuration of a first prototype of a billable and secure information site.
1 Accounting
The producers, providers and users of the MeDoc library are connected through business transactions. The interaction and business relationships between these participants are described in a business model. The realization of this business model requires the definition of cost models and payment models.
1.1 Participants and Services
The users of the MeDoc library are researchers, students and practitioners of computer science. These individual users will mostly act as members of user groups such as their department, library or company. Providers operate repositories for electronic documents and provide services to users. Providers are libraries, publishing houses, universities or scientific information centres. They receive their contents from producers such as publishing houses, individual authors, university departments or commercial database producers. A specific institution can act both as a producer and as a provider.
The MeDoc library offers several basic digital library services: searching and navigation, document browsing and delivery, mediation for external services (like access to certain commercial databases), information filtering and profiling, and the compilation of statistics for users and producers. Other services, such as annotation facilities, format conversion services, or long-term preservation and archiving, are recognized as important but have not yet been considered for the MeDoc library.
1.2 Business Models
On the Internet there are typically two basic models to finance a service: one can be compared to TV financed by commercials and the other to Pay TV.
Many services on the Internet (e.g. Yahoo, Lycos) finance themselves through posting advertisements. The end-user gets the service for free. For each ad displayed a fee is collected from the advertiser. This is the same model that commercial TV stations use. These Internet services are quite successful and have considerable value for the end-user, but there is no guaranteed quality of results, and searching can get tedious. This business model gets around the problem that there still is no feasible way to collect (typically very small) fees for each usage of such a service from a world-wide and mainly anonymous audience.
MeDoc like a variety of other Internet services has adopted a business model that is more like Pay TV. The user has to pay for the service delivered. For this the user is guaranteed a certain quality of contents and service.
**Figure 1.** The basic MeDoc business model.
Figure 1 shows this business model. On one side we have the producers of electronic documents, typically publishing houses or database providers, but also universities or research institutes with their variety of technical reports, teaching material, theses, etc. The main role of a producer is to provide the documents with a guaranteed level of quality of the contained information. It has to provide the documents in an appropriate electronic format together with the meta description (e.g., title, authors, formats, abstract, table of contents) of the documents.
On the other side are the users. They use the service either to search or to retrieve documents. The main target users for the MeDoc project are (for the time being) computer scientists, students and computer professionals of the pilot user organizations. None of them wishes to spend much time with tedious searching among irrelevant contents and each of them wants to be sure to have retrieved relevant data.
The providers operate as a link between producers and users. Besides providing the basic technical service they add value to the offer of the producers. They bundle the offers of a variety of producers, allow inter-producer searches, and act as a financial clearing house between users and producers. Currently the MeDoc consortium itself acts as the provider, operating multiple provider sites, but in principle there could be several competing providers. In this setting the provider acquires the licence rights from the producer to offer the electronic document as a service to the user. In exchange, the user is charged service fees. The cost model for the royalties on one side, and the cost model for the service usage on the other side, could be chosen completely independently. For the time being, the MeDoc project forwards the service fees directly to the producers.
The information-flow relationships in Figure 1 are many-to-many. In general, a provider receives information products from a number of producers, and a producer delivers products to several provider sites; analogously, a user searches several information repositories that stock them. For this the MeDoc system also has a broker component that handles distributed search and retrieval transparently to the users [BK96]. As the service of this broker is offered without costs, it is outside the scope of this paper.
The business model from Figure 1 has to be refined, because MeDoc would encounter the problem of collecting the fees from potentially some hundreds of thousands of (German) end-users. Contractual partners for the providers are, for the time being, not the individual end-users, but mainly end-user institutions, representing groups of users (Figure 2).
For simplicity they are called 'libraries', because the conventional university libraries could be such user groups representing their members, although they need not necessarily be libraries in the usual sense. Their main responsibility is the administration of local users, the control of copyright restrictions, and perhaps the reimbursement of usage fees from their members. How that reimbursement is done is outside the scope of this paper.
1.3 Cost Models
A difficult task is the choice of an adequate cost model. Until now there has been little experience with cost models for electronic documents. Due to its ease of handling, the most common model is subscription to a service for a fixed fee.
Users have different needs and usage patterns that result in different requirements for cost models. Besides the demand for good value for money, basically two aspects are important from the user's point of view. First, predictability, i.e. the ability to predict the cost of a service, is often essential. This applies not only to the cost of a single user action but, more importantly for budget-tied users or user groups, also to the cost of a complete time period, e.g., a budget year. Second, transparency: the cost model must be clear and simple to be acceptable.
Basic Rate
A basic rate allows general access to the digital library. Some of the service components may be used without further charges, others may be subject to additional fees. With basic rates, services can be financed that cannot be charged for directly, e.g. long-term archiving or statistics. It is not intended, at present, to charge a basic rate for participation in the MeDoc service.
Subscription
The subscription model requires the payment of a flat fee to gain unlimited access to a document or class of documents for a certain period of time. Subscription models are employed if there will be continuous use of a document base, or if the pricing of each individual usage is not adequate or user-friendly.
A service can be subscribed to by a fixed fee for a limited period of time. This is quite often applied to large document bases such as, for example, journals, dictionaries or encyclopedias that contain information that is regularly maintained or extended. The subscription allows access to the complete document base. The subscription to an electronic journal differs significantly in one aspect from a printed journal: if a subscription expires, you still have the old paper copies of that journal; for an electronic journal, the information is no longer accessible at all. This is the reason why publishing houses quite often link the subscription to the electronic edition to a subscription to the paper (or CD-ROM) edition.
In MeDoc we offer fixed and floating types of licences that can be subscribed to. The basic type of fixed licence is the single licence that is assigned to an individual person for the subscription period, just like a personal subscription to a paper journal. A group licence is a fixed type of licence assigned to a group of persons; each member of the group has the same rights to use the service. This is typically connected with discounted rates as opposed to several single licences.
Floating licences are shared by the members of a group. A licence is assigned to a group member for a limited period of time within the subscription period. When all licences are assigned a subsequent user has to wait until one of the licences is released. Since standard World Wide Web technology does not support the session concept, licences cannot be returned automatically, when a user logs out. Instead, a licence must be released on a time-out basis after a certain period of time.
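A floating-licence server of this kind can be sketched in a few lines. The class below only illustrates the timeout-based release described above; it is not MeDoc's actual implementation, and all names are hypothetical:

```python
import time

class FloatingLicenceServer:
    """Assigns a fixed pool of licences to users; since the Web offers no
    session concept, licences are released on a time-out basis."""

    def __init__(self, max_licences, timeout_seconds):
        self.max_licences = max_licences
        self.timeout = timeout_seconds
        self.holders = {}  # user id -> time the licence was granted

    def _expire(self, now):
        # Drop licences whose holding period has elapsed.
        self.holders = {u: t for u, t in self.holders.items()
                        if now - t < self.timeout}

    def request(self, user, now=None):
        """Grant a licence if one is free; reject when all are in use."""
        now = time.time() if now is None else now
        self._expire(now)
        if user in self.holders:           # renew an existing licence
            self.holders[user] = now
            return True
        if len(self.holders) < self.max_licences:
            self.holders[user] = now       # grant a free licence
            return True
        return False                       # pool exhausted: reject

    def release(self, user):
        # Explicit release, e.g. when the user quits the service.
        self.holders.pop(user, None)
```

The dynamic group of current licence holders is simply the key set of `holders`; expiry is checked lazily on each request rather than by a background timer, which keeps the sketch self-contained.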
Purchase (Pay per View)
The purchase model requires payment for each service rendered, e.g. every access to a document (or a set of related documents) of the document base gets assigned an individual price and is charged for. ‘Pay per view’ models can be applied to the delivery or browsing of documents, as well as to information searching. In MeDoc searching is either free or is covered by a flat fee to the producer. Only the retrieval of a document itself entails costs.
This usage-based cost model is offered as an alternative to subscription, if the access of the user to the document base or service is infrequent and the user is generally not interested in subscribing to the complete document base. It is suitable to meter the usage of a document base or service.
As it is not necessarily predictable how frequently a service will be used, the 'pay per view' model does not meet the requirement of predictability. This must be taken into account when dealing with budget-tied users or user groups.
1.4 Payment Models
Besides the cost models MeDoc also defines so-called payment models that specify when payments are collected. There are basically two payment models: debit and credit models.
In the credit model, registered users are sent an invoice after the service has been used. Typically invoices are bundled over a specific period of time. The credit model may allow for unlimited or limited credit.
The debit model requires the user to pay prior to using the service. The debit model requires either the payment of a lump deposit, from which individual payments are deducted, or separate instant payments for each service at the time of usage.
1.5 Implementation Requirements
In this section we show what system components must be developed to implement the cost models we defined above. Figure 3 shows the basic components and a possible data flow between them from the arrival of a chargeable user request until its execution.
**Figure 3.** Basic components and data flow for processing a chargeable user request.
The basic rate cost model is the easiest to implement. It requires only an access control and user administration.
Besides access control, the implementation of any subscription cost model requires a subscription management that administers and controls user access rights to the different document collections and services. This suffices for single licences. To realize the group licence subscription cost model additionally a mapping mechanism of users to groups is necessary.
The floating licence subscription cost model requires not only this mapping mechanism of users to groups, but also a floating licence server. This floating licence server must provide dynamic group management with the following feature: users are entered in the dynamic group of the current floating licence holders when requesting a service and removed again on quitting the service or at a specified timeout. Users are rejected when all floating licences are in use.
To implement the purchase cost model (pay per view) a system component must be available that determines the price for the service requested, i.e. a price manager. When realizing a purchase cost model we recommend the use of a transaction mechanism, firstly to avoid billing users for documents that have been corrupted during transfer and, secondly, so that users cannot deny that they have received a document which they have ordered. Such a transaction protocol typically is based on encryption and handshaking mechanisms [Zw96; Ket95; Kol96]. In order to support the combination of a pay per view cost model and an unlimited credit model, a transaction mechanism provides a logging protocol that is accepted by both parties. Together with the access control and the user management this is sufficient for the implementation of the purchase cost model.
A debit payment model, or a limited credit payment model, requires accounting facilities in addition to the mechanisms described so far: providers have to keep an account per registered user or user group and to deduct the costs of each transaction from the deposit or the credit limit, i.e. set up an account manager.
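Such an account manager can be sketched as follows. The interface is a hypothetical illustration of deducting transaction costs from a deposit (debit model) or against a credit limit (limited credit model), not part of the MeDoc system:

```python
class AccountManager:
    """Keeps one account per registered user or user group and deducts
    the cost of each transaction from the deposit or the credit limit."""

    def __init__(self):
        self.balances = {}       # group id -> current balance (the deposit)
        self.credit_limits = {}  # group id -> allowed negative balance

    def open_account(self, group, deposit=0.0, credit_limit=0.0):
        self.balances[group] = deposit
        self.credit_limits[group] = credit_limit

    def charge(self, group, amount):
        """Accept the charge only while deposit plus credit limit suffice."""
        new_balance = self.balances[group] - amount
        if new_balance < -self.credit_limits[group]:
            return False  # would exceed the credit limit: reject
        self.balances[group] = new_balance
        return True
```

An unlimited credit model would correspond to an infinite `credit_limit`, in which case no balance check is needed and only the transaction log matters.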
If the system provides a mechanism for online charging (electronic cash [JW96; CKP95]) together with the transaction mechanism, this is sufficient. No further access control or user management is necessary. Then spontaneous anonymous users can be catered for.
2 Security
In a networked environment, it is mandatory to safeguard against attacks that target the communication channel. Particular concerns are that the person billed is the person who has used the service and that user’s privacy is protected.
2.1 Threats to communication over a non-secure network
There are several well-known threats that have to be taken into account when communicating over a non-secure network. It is common practice to distinguish between active and passive attacks. Active attacks are removal, modification and replaying of messages. A passive attack is, for example, the unauthorized listening to a communication. Several well-known methods, based on cryptographic algorithms, have been devised to make the communication over non-secure networks more secure.
2.2 Steps to secure the communication
A number of security services safeguard different aspects of the communication. Authenticity guarantees that the sender of a message has not been faked. Integrity guarantees that the message has not been modified during transmission. Confidentiality guarantees that no one except the sender and the receiver can understand the message. Non-repudiation guarantees that the sender cannot deny having sent the message. Access control guarantees that only authorized users can access services. Availability guarantees that network services are available at any time and with the required capacity. Availability has to be ensured by the network administrator. Access control is managed by the system administrator.
There are a number of cryptographic algorithms to ensure the first four services mentioned above. Hash functions are used to control the integrity of the message, private-key algorithms to ensure the confidentiality, and public-key functions for key exchange and digital signatures to realize non-repudiation.
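The division of labour among these primitives can be illustrated with Python's standard library. Here a keyed hash (HMAC) stands in for the integrity check between two parties who share a secret; symmetric encryption and public-key signatures are omitted because the standard library provides no ciphers. All keys and messages are made-up examples:

```python
import hashlib
import hmac

shared_key = b"provider-and-user-agent-secret"   # hypothetical shared key
message = b"ORDER document=4711 user=u42"

# Sender: attach a keyed hash so any modification becomes detectable.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver: recompute the tag and compare in constant time.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)   # message arrived unmodified

# A modified message (an active attack) fails the check.
tampered = b"ORDER document=4712 user=u42"
bad = hmac.new(shared_key, tampered, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, bad)
```

Note that a plain keyed hash provides integrity and (between the two key holders) authenticity, but not non-repudiation: either party could have produced the tag, which is why digital signatures are needed for that service.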
2.3 Existing protocols
We have evaluated the following implementations of cryptographic algorithms to determine their usability within the MeDoc system: Kerberos [BSK89], SHTTP [RS95] and SSLeay [FKK96; HY95].
The goal of Kerberos is to make services available to authorized users and processes (principals) only. It is based on a private key crypto system. It has been developed at MIT as part of project Athena. Kerberos is a system with one authentication server and one ticket granting server. This central server is a bottleneck of the service. Without a ticket from the authentication server no services can be used. Kerberos can be used for authentication and encrypted message transfer. A spontaneous communication is not possible as the authentication server and the principal have to have a mutual password. Encryption with Kerberos requires its installation on the local system. As this is a rather complicated process and because of the central authentication server, Kerberos is not an alternative for the MeDoc system.
SHTTP (Secure HTTP) is an application layer protocol. SHTTP marks individual documents as private or signed. The user has to decide on the security of the communication (which documents are to be sent encrypted, which documents are to be signed, which encryption algorithm is to be used, etc.). A reference implementation was released by EIT to members of CommerceNet. The toolkits to integrate SHTTP in existing server and clients as well as the reference implementation are subject to US export restrictions.
One requirement for security in MeDoc is its transparency to the user. This means that the user does not have to bother about encryption, decryption, and the other processes which are needed to ensure communication security. Therefore, SHTTP cannot be used in this project. Furthermore, there is the problem with the export restrictions.
The Secure Socket Layer Protocol (SSL) is a protocol layering above TCP and below the application protocol; i.e. it provides the functionality of TCP to the application protocol and looks like an application protocol to TCP. SSLeay is an Australian implementation of SSL, used because the US version is subject to export restrictions and Netscape's export version uses 40-bit keys only. While SHTTP secures individual documents, SSL secures a communication channel. SSL supports public-key, various private-key and hashing algorithms. Which algorithms are to be used is determined during the handshake protocol.
The use of SSL is transparent to the user. Moreover SSL can be used with different application protocols. These are the advantages. One disadvantage is due to the fact that SSL is transparent to users. Thus, they cannot decide which cryptographic algorithm is really used out of the ones offered by the system. As SSL is integrated into the Netscape Browser, users do not need to install it before using the MeDoc system. SSL is used to secure communication channels in the prototype described below.
3 A Billable Secure MeDoc Library Prototype
For the time being, the MeDoc library admits only registered users or user groups to chargeable services and does not charge a basic rate. All services offered, except document browsing and delivery, are free; only the access to the documents themselves is charged for. The users are charged via an unlimited credit model. For document delivery and browsing, subscription models as well as 'pay per view' models are implemented. Non-registered users may use only the services that are free of charge. When electronic payment modes (see e.g. [JW96; Ric96]) become commercially safe, we plan to introduce the instant-payment debit model using electronic cash and credit card systems. Then anonymous users can be given access to chargeable services as well.
Currently, a clearing centre, FIZ Karlsruhe, handles invoicing and collection of bills for the MeDoc service. The billing data are collected in a distributed way at the individual service provider sites of the MeDoc digital library and are regularly forwarded to the clearing centre. Invoices are generated there and executed and statistics are compiled for the producers.
The first pilot installation of the MeDoc system has been set up. It supports billable and secure access to a number of electronic libraries [BW96]. An extended prototype is scheduled for release in late spring 1997. The overall architecture is described in detail in [BDG96b] and [BDG96a].
Figure 4 shows the communication structure of the initial prototype. As indicated in the MeDoc architecture, the user connects to the MeDoc System with a Web client to place requests and to browse the results. The requests are passed through a user agent to one or more provider agents. The user agent stores user profiles, received results, cost-control data and user data. Usually such a user agent is installed at each user site, e.g. a university campus.

**Figure 4.** Architecture of the Billable Secure MeDoc Library Prototype.
In the MeDoc architecture a provider agent transforms incoming requests into the specific language of its provider system and gives results back to the user agent. The provider agent is extended by functions that handle the pricing information, the document metadata and suitable mechanisms for gathering logging data. The user administration holds the data describing the user (e.g. last name, first name, e-mail, institution, account number, etc.). The first prototype supports single licences and pay per view purchases, where the price of a document can depend on the user or be user-independent. Data relevant to accounting are collected and forwarded regularly to the clearing centre.
The prototype is built from standard software components, as, for example, standard Web browsers, Web servers, a Postgres95 database, and a Fulcrum full-text search engine. The components are connected via custom-made Java programs.
Security issues arise at three interfaces. The first is the communication interface between the Web client and the user agent. It is secured through a session password. After login and transmission of the user password, the session is assigned a session password that can be used for all further transactions. This is a rather weak security mechanism, but at least the user password only has to be transmitted once, and the validity of the session password is limited to 16 hours. In the extended prototype, this communication path will be secured by using SSL-capable Web client and server technology.
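As a sketch, such a session-password mechanism (one login, a random session password, a fixed 16-hour lifetime) might look like the following. The function names and token format are illustrative assumptions, not the prototype's actual code:

```python
import secrets
import time

SESSION_LIFETIME = 16 * 3600  # seconds: the 16-hour validity from the text

sessions = {}  # session password -> (user, time of issue)

def login(user, now=None):
    """After the user password has been checked once, hand out a
    random session password for all further transactions."""
    now = time.time() if now is None else now
    token = secrets.token_hex(16)
    sessions[token] = (user, now)
    return token

def validate(token, now=None):
    """Accept a session password only within its 16-hour lifetime."""
    now = time.time() if now is None else now
    entry = sessions.get(token)
    if entry is None:
        return None
    user, issued = entry
    if now - issued > SESSION_LIFETIME:
        del sessions[token]   # expired: force a fresh login
        return None
    return user
```

The weakness noted in the text is visible here: the session password travels with every request, so anyone who can read the channel can replay it until it expires, which is why the extended prototype moves to SSL.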
The second communication interface is the one between user agent and provider agent which is the most sensitive, because the information is traversing external networks outside the control of both the provider and the user authority. This communication is secured by coupling Java code with an SSL-based protocol. The user agent and the provider agent identify each other by certificates that ensure their mutual authenticity. The certificates are signed by the MeDoc office. As information about the user agent is insufficient for billing, the ID of the user causing the costs has to be transmitted and the user has to be authenticated. For this authentication a user access password is requested that is administered by the user agent. Therefore the entire encryption and security issues are transparent to the user. In the first prototype, the document order message, the user access password and the document itself are transmitted encrypted.
The third communication interface is that between provider agent and provider service. It is considered to be outside the scope of MeDoc, as both components usually are located within the same local area network and the provider agent acts as a firewall for the provider service. It lies within the responsibility of the provider to ensure security between those two.
**Conclusion**
The MeDoc service is still an experiment, but the project aims to go beyond standard research approaches for digital libraries. This means the MeDoc service must be tested under (nearly) real conditions in order to obtain forecasts for running the real service. The release of the first 20 books and four journals on a subscription basis has already started. First feedback from pilot users shows that they accept that a digital library can be subject to charges, provided the quality of service is acceptable.
For our pilot users the most important issue is the transition from investment-based thinking ('buying a book') to consumer-like thinking ('using a service'). But this transition already started before digital libraries became possible: some libraries ceased to subscribe to expensive but rarely used journals and recommended that their users order copies via a fax copy service. From there it is no big step to using electronic services.
Acknowledgments
This paper is based on the work carried out by the whole MeDoc team. Through their enthusiasm and their initiative they made the MeDoc service and this paper possible.
References
[Ket95] Steven Ketchpel. Transaction protection for information buyers and sellers. In
Michael Breu studied Computer Science and obtained his PhD at the Technical University of Munich. After that he worked on a joint project of Siemens-Nixdorf, Bull and Olivetti as a software engineer in the development of methods for business re-engineering and distributed business applications, and was seconded to the European Software Institute in Bilbao for one year. He currently works as a research engineer at the research institute for applied software technology and is project manager in the German digital library initiative MeDoc.
Anne Brüggemann-Klein studied Mathematics and Latin at Münster (Germany). She received a PhD in Mathematics (Mathematical Logic) from the same university. Afterwards, she did research in the areas of document processing, electronic publishing, information systems, and digital libraries at the universities of Karlsruhe, Freiburg, Paderborn (all Germany) and Waterloo (Canada). She is currently a professor of computer science at the Technische Universität München.
Cornelia Haber studied Computer Science at the Technical University at Munich and is currently a member of their research staff. She participates in the German digital library initiative MeDoc and her main research areas are information systems, electronic publishing and digital libraries.
Ricarda Weber studied Computer Science at the Technical University at Munich. Afterwards she worked for Softlab as a software engineer in computer communication and information retrieval projects. She then rejoined the Technical University as a member of the research staff to participate in the German digital library initiative MeDoc. Currently she is doing research work in the field of electronic payment systems for digital libraries.
Mastering Python for Finance
Built initially for scientific computing, Python quickly found its place in finance. Its flexibility and robustness can be easily incorporated into applications for mathematical studies, research, and software development.
With this book, you will learn about all the tools you need to successfully perform research studies and modeling, improve your trading strategies, and effectively manage risks. You will explore the various tools and techniques used in solving complex problems commonly faced in finance. You will learn how to price financial instruments such as stocks, options, interest rate derivatives, and futures using computational methods. Also, you will learn how you can perform data analytics on market indexes and use NoSQL to store tick data.
Who this book is written for
If you are an undergraduate or graduate student, a beginner to algorithmic development and research, or a software developer in the financial industry who is interested in using Python for quantitative methods in finance, this is the book for you. It would be helpful to have a bit of familiarity with basic Python usage, but no prior experience is required.
What you will learn from this book
- Perform interactive computing with IPython Notebook
- Solve linear equations of financial models and perform ordinary least squares regression
- Explore nonlinear modeling and solutions for optimum points using root-finding algorithms and solvers
- Discover different types of numerical procedures used in pricing options
- Model fixed-income instruments with bonds and interest rates
- Manage big data with NoSQL and perform analytics with Hadoop
- Build a high-frequency algorithmic trading platform with Python
- Create an event-driven backtesting tool and measure your strategies
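As a taste of the material, ordinary least squares for a single regressor (mentioned in the list above) has a closed-form solution, which can be sketched without any external library. This is a generic illustration, not code from the book:

```python
def ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares (single regressor).

    b is the sample covariance of x and y divided by the variance of x;
    a follows from forcing the line through the means.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Exactly linear data recovers the coefficients of y = 1 + 2x.
a, b = ols([1, 2, 3, 4], [3, 5, 7, 9])
```

In practice one would use a numerical library for the multivariate case, but the two-line formula above is the core of the method.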
In this package, you will find:
- The author biography
- A preview chapter from the book, Chapter 1 'Python for Financial Applications'
- A synopsis of the book’s content
- More information on Mastering Python for Finance
About the Author
James Ma Weiming works with high-frequency, low-latency trading systems, writing his own programs and tools, most of which are open sourced. He is currently supporting veteran traders in the trading pits of the Chicago Board of Trade, devising strategies to game the market. He graduated from the Stuart School of Business at Illinois Institute of Technology with a master of science degree in finance.
He started his career in Singapore after receiving his bachelor's degree in computer engineering from Nanyang Technological University and diploma in information technology from Nanyang Polytechnic. During his career, he has worked in treasury operations handling foreign exchange and fixed income products. He also developed mobile applications with a company operating a funds and investments distribution platform.
Mastering Python for Finance
Python is widely practiced in various sectors of finance, such as banking, investment management, insurance, and even real estate, for building tools that help in financial modeling, risk management, and trading. Even big financial corporations embrace Python to build their infrastructure for position management, pricing, risk management, and trading systems.
Throughout this book, theories from academic financial studies will be introduced, accompanied by their mathematical concepts to help you understand their uses in practical situations. You will see how Python is applied to classical pricing models, linearity, and nonlinearity of finance, numerical procedures, and interest rate models, that form the foundations of complex financial models. You will learn about the root-finding methods and finite difference pricing for developing an implied volatility curve with options.
With the advent of advanced computing technologies, methods for the storing and handling of massive amounts of data have to be considered. Hadoop is a popular tool in big data. You will be introduced to the inner workings of Hadoop and its integration with Python to derive analytical insights on financial data. You will also understand how Python supports the use of NoSQL for storing non-structured data.
Many brokerage firms are beginning to offer APIs to customers to trade using their own customized trading software. Using Python, you will learn how to connect to a broker API, retrieve market data, generate trading signals, and send orders to the exchange. The implementation of the mean-reverting and trend-following trading strategies will be covered. Risk management, position tracking, and backtesting techniques will be discussed to help you manage the performance of your trading strategies.
The use of Microsoft Excel is pervasive in the financial industry, from bond trading to back-office operations. You will be taught how to create numerical pricing Component Object Model (COM) servers in Python that will enable your spreadsheets to compute and update model values on the fly.
What This Book Covers
Chapter 1, *Python for Financial Applications*, explores the aspects of Python in judging its suitability as a programming language in finance. The IPython Notebook is introduced as a beneficial tool to visualize data and to perform scientific computing.
Chapter 2, *The Importance of Linearity in Finance*, uses Python to solve systems of linear equations, perform integer programming, and apply matrix algebra to linear optimization of portfolio allocation.
Chapter 4, *Numerical Procedures*, explores trees, lattices, and finite differencing schemes for valuation of options.
Chapter 5, *Interest Rates and Derivatives*, discusses the bootstrapping process of the yield curve and covers some short rate models for pricing the interest rate derivatives with Python.
Chapter 6, *Interactive Financial Analytics with Python and VSTOXX*, discusses the volatility indexes. We will perform analytics on EURO STOXX 50 Index and VSTOXX data, and replicate the main index using options prices of the sub-indexes.
Chapter 7, *Big Data with Python*, walks you through the uses of Hadoop for big data and covers how to use Python to perform MapReduce operations. Data storage with NoSQL will also be covered.
Chapter 8, *Algorithmic Trading*, discusses a step-by-step approach to develop a mean-reverting and trend-following live trading infrastructure using Python and the API of a broker. Value-at-risk (VaR) for risk management will also be covered.
Chapter 9, *Backtesting*, discusses how to design and implement an event-driven backtesting system and helps you visualize the performance of our simulated trading strategy.
Chapter 10, *Excel with Python*, discusses how to build a Component Object Model (COM) server and client interface to communicate with Excel and to perform numerical pricing on the call and put options on the fly.
Chapter 1: Python for Financial Applications
In this introductory chapter, we will explore the aspects of Python in order to judge its suitability as a programming language in finance. Notably, Python is widely practiced in various financial sectors, such as banking, investment management, insurance, and even in real estate for building tools that help in financial modeling, risk management, and trading. To help you get the most from the multitude of features that Python has to offer, we will introduce the IPython Notebook as a beneficial tool to help you visualize data and to perform scientific computing for presentation to end users.
In this chapter, we will cover the following topics:
- Benefits of Python over other programming languages for financial studies
- Features of Python for financial applications
- Implementing object-oriented design and functional design in Python
- Overview of IPython
- Getting IPython and IPython Notebook started
- Creating and saving notebook documents
- Various formats to export a notebook document
- Notebook document user interface
- Inserting Markdown language into a notebook document
- Performing calculations in Python in a notebook document
- Creating plots in a notebook document
- Various ways of displaying mathematical equations in a notebook document
- Inserting images and videos into a notebook document
- Working with HTML and pandas DataFrame in a notebook document
Is Python for me?
Today's financial programmers have a diverse choice of programming languages for implementing robust software solutions, including C, Java, R, and MATLAB. However, each programming language was designed to accomplish specific tasks. Their inner workings, behavior, syntax, and performance affect every user's results differently.
In this book, we will focus exclusively on the use of Python for analytical and quantitative finance. Originally intended for scientific computations, the Python programming language has seen increasingly widespread use in financial operations. In particular, pandas, a software library written for the Python programming language, was open sourced by an employee of AQR Capital Management to offer high-performance financial data management and quantitative analysis.
Even big financial corporations embrace Python to architect their infrastructure. Bank of America's Quartz platform uses Python for position management, pricing, and risk management. JP Morgan's Athena platform, a cross-market risk management and trading system, uses Python for flexibility in combination with C++ and Java.
The application of Python in finance is vast, and in this book, we will cover the fundamental topics in creating financial applications, such as portfolio optimization, numerical pricing, interactive analytics, big data with Hadoop, and more.
Here are some considerations on why you might use Python for your next financial application.
Free and open source
Python is free in terms of license. Documentation is widely available, and many Python online community groups are available, where one can turn in times of doubt. Because it is free and open source, anyone can easily view or modify the algorithms in order to adapt to customized solutions.
Being accessible to the public opens a whole new level of opportunities. Anyone can contribute existing enhancements or create new modules. For advanced users, interoperability between different programming languages is supported. A Python interpreter may be embedded in C and C++ programs. Likewise, with the appropriate libraries, Python may be integrated with other languages not limited to Fortran, Lisp, PHP, Lua, and more.
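Interoperability works in both directions. As a small sketch of the reverse direction, the standard `ctypes` module can call a function in a shared C library directly from Python; the library lookup below is platform-dependent and is an assumption of this sketch, not something specific to any one project:

```python
import ctypes
import ctypes.util

# Locate the C math library; the exact filename is platform-dependent.
libm_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_name)

# Declare the C signature of sqrt: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # calls the C function, not Python's math.sqrt
```

Declaring `restype` and `argtypes` is essential: without them, `ctypes` would pass and interpret the arguments as C `int`s and return garbage.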
Python is available on all major operating systems, such as Windows, Unix, OS/2, Mac, among others.
High-level, powerful, and flexible
Python as a general-purpose, high-level programming language allows the user to focus on problem solving and leave low-level mechanical constructs such as memory management out of the picture.
The expressiveness of the Python programming language syntax helps quantitative developers in implementing prototypes quickly.
Python allows the use of object-oriented, procedural, as well as functional programming styles. Because of this flexibility, it is especially useful in implementing complex mathematical models containing multiple changeable parameters.
A wealth of standard libraries
By now, you should be familiar with the NumPy, SciPy, matplotlib, statsmodels, and pandas modules, as indispensable tools in quantitative analysis and data management.
Other libraries extend the functionalities of Python. For example, one may turn Python into an interactive data visualization tool with the gnuplot package for plotting mathematical functions and data. With Tk-based GUI tools such as Tkinter, it is possible to turn Python scripts into GUI programs.
A widely popular shell for Python is IPython, which provides interactive computing and high-performance tools for parallel and distributed computing. With IPython Notebook, the rich text web interface of IPython, you can share code, text, mathematical expressions, plots, and other rich media with your target audience. IPython was originally intended for scientists to work with Python and data.
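As a brief taste of two of these libraries, the sketch below computes daily returns from a hypothetical price series (illustrative values only, not real market data) with pandas and NumPy:

```python
import numpy as np
import pandas as pd

# Hypothetical closing prices for illustration only.
prices = pd.Series([100.0, 101.5, 99.8, 102.3, 103.1],
                   index=pd.date_range("2014-01-02", periods=5, freq="B"))

# Daily simple returns: (P_t - P_{t-1}) / P_{t-1}
returns = prices.pct_change().dropna()

print(returns.mean())  # average daily return
print(returns.std())   # daily volatility
```

Operations such as `pct_change` work element-wise over the whole series at once, which is what makes this style both concise and fast.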
Object-oriented versus functional programming
If you are working as a programmer in the finance industry, chances are that your program will be built for handling thousands or millions of dollars' worth in transactions. It is crucial that your programs are absolutely free of errors. More often than not, bugs arise due to unforeseen circumstances. As financial software systems and models become larger and more complex, practicing good software design is crucial. While writing the Python code, you may want to consider the object-oriented approach or the functional approach to structure your code for better readability.
The object-oriented approach
As the demand for clarity, speed, and flexibility in your program increases, it is important to keep your code readable, manageable, and lean. One popular technical approach to building software systems is by applying the object-oriented paradigm. Consider the following example of displaying a greeting message as a class:
```python
class Greeting(object):
def __init__(self, my_greeting):
self.my_greeting = my_greeting
def say_hello(self, name):
print "%s %s" % (self.my_greeting, name)
```
We created a class called `Greeting` that is capable of accepting an input argument in its constructor. For this example, we will define our greeting as "Hello". The `say_hello` function is invoked with an input name and prints our greeting messages as follows:
```python
>>> greeting = Greeting("Hello")
>>> greeting.say_hello("World")
>>> greeting.say_hello("Dog")
>>> greeting.say_hello("Cat")
Hello World
Hello Dog
Hello Cat
```
The functional approach
We can achieve the same `Greeting` functionality using the functional approach. Functional programming is a programming paradigm where computer programs are structured and styled so that they can be evaluated as mathematical functions. These functions avoid changing state, which increases reusability and brevity.
In Python, a function object can be assigned to a variable and, like any other variables, can be passed into functions as an argument as well as return its value.
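This first-class status also means one function can build and return another, which is the idea that `functools.partial` packages up for us. A minimal sketch of a hand-rolled version (using Python 3's `print()` for portability):

```python
def make_greeter(my_greeting):
    # Return a new function that remembers my_greeting (a closure).
    def greeter(name):
        message = "%s %s" % (my_greeting, name)
        print(message)
        return message
    return greeter

say_hello_to = make_greeter("Hello")
say_hello_to("World")  # prints: Hello World
```

Here `greeter` carries `my_greeting` with it after `make_greeter` returns, exactly the behavior `partial` provides below.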
Let's take a look at the following code that gives us the same output:
```python
from functools import partial
def greeting(my_greeting, name):
print "%s %s" % (my_greeting, name)
```
Here, we defined a function named `greeting` that takes in two arguments. Using the `partial` function of the `functools` module, we fixed the first argument of `greeting` to our greeting message, `"Hello"`, which produces a new function:
```python
>>> say_hello_to = partial(greeting, "Hello")
>>> say_hello_to("World")
>>> say_hello_to("Dog")
>>> say_hello_to("Cat")
```
We assigned the function returned by `partial` to the `say_hello_to` variable, and reused it to print our greetings to three different names by calling it with the remaining `name` argument.
**Which approach should I use?**
There is no clear answer to this question. We have just demonstrated that Python supports both the object-oriented approach and the functional approach. We can see that in certain circumstances the functional approach offers considerable brevity: using the `say_hello_to` function provides better readability than `greeting.say_hello()`. It boils down to the programmer's decision as to what works best in making the code more readable and easier to maintain during the software life cycle while collaborating with fellow developers.
As a general rule of thumb, in large and complex software systems representing objects as classes helps in code management between team members. By working with classes, the scope of work can be more easily defined, and system requirements can be easily scaled using object-oriented design. When working with financial mathematical models, using functional programing helps to keep the code working in the same fashion as its accompanying mathematical concepts.
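To sketch that last point, a discounting formula written as a pure function stays visibly close to the mathematics, and `partial` can fix a parameter for a whole valuation run. The 5% rate below is an arbitrary illustration, not a recommendation:

```python
from functools import partial

def present_value(rate, cashflow, periods):
    # PV = CF / (1 + r)^n, a direct transcription of the formula.
    return cashflow / (1.0 + rate) ** periods

# Fix the discount rate at 5% for repeated use.
pv_at_5pct = partial(present_value, 0.05)

print(pv_at_5pct(100.0, 1))  # 100 received in one period
print(pv_at_5pct(100.0, 2))  # 100 received in two periods
```

The function mirrors the formula term for term, which makes it easy to check against the accompanying mathematics.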
**Which Python version should I use?**
The code examples in this book have been tested in Python 2.7 but are optimized to run on Python 3. Many of the third-party Python modules mentioned in this book require at least Python 2.7, and some do not have support for Python 3 as yet. To achieve the best compatibility, it is recommended that you install Python 2.7 on your workstation.
If you have not installed Python on your workstation, you can find out more about Python from the official source at https://www.python.org. However, in order to build financial applications from the examples, you are required to use a number of additional third-party Python modules such as NumPy, SciPy, and pandas. It is recommended that you obtain an all-in-one installer to ease the installation procedures. The following are some popular installers that include hundreds of supported packages:
- Anaconda by Continuum Analytics at https://store.continuum.io/cshop/anaconda
- Canopy by Enthought at https://store.enthought.com
Introducing IPython
IPython is an interactive shell with high-performance tools used for parallel and distributed computing. With the IPython Notebook, you can share code, text, mathematical expressions, plots, and other rich media with your target audience.
In this section, we will learn how to get started and run a simple IPython Notebook.
Getting IPython
Depending on how you have installed Python on your machine, IPython might have been included in your Python environment. Please consult the IPython official documentation for the various installation methods most comfortable for you. The official page is available at http://ipython.org.
IPython can be downloaded from https://github.com/ipython. To install IPython, unpack the packages to a folder. From the terminal, navigate to the top-level source directory and run the following command:
$ python setup.py install
Using pip
The pip tool is a great way to install Python packages automatically. Think of it as a package manager for Python. For example, to install IPython without having to download all the source files, just run the following command in the terminal:
$ pip install ipython
To get pip to work in the terminal, it has to be installed as a Python module. Instructions for downloading and installing pip can be found at https://pypi.python.org/pypi/pip.
The IPython Notebook
The IPython Notebook is the web-based interactive computing interface of IPython used for the whole computation process of developing, documenting, and executing the code. This section covers some of the common features in IPython Notebook that you may consider using for building financial applications.
Here is a screenshot of the IPython Notebook in Windows OS:
![IPython Notebook Screenshot]
Notebooks allow in-browser editing and executing of the code with the outputs attached to the code that generated them. It has the capability of displaying rich media, including images, videos, and HTML components.
Its in-browser editor allows the Markdown language that can provide rich text and commentary for the code.
Mathematical notations can be included with the use of LaTeX, rendered natively by MathJax. With the ability to import Python modules, publication-quality figures can be included inline and rendered using the matplotlib library.
Notebook documents
Notebook documents are saved with the .ipynb extension. Each document contains everything related to an interactive session, stored internally in the JSON format. Since JSON files are represented in plain text, this allows notebooks to be version controlled and easily sharable.
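Because the format is plain JSON, a notebook can be inspected with nothing but the standard library. The sketch below builds a minimal notebook dictionary in memory (a simplified subset of the real `.ipynb` schema, for illustration only) and reads back its cell types:

```python
import json

# A minimal, simplified notebook structure for illustration.
notebook = {
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Welcome to Hello World"]},
        {"cell_type": "code", "source": ["answer = 3 + 5"], "outputs": []},
    ],
}

# Round-trip through JSON text, as the .ipynb file on disk would be.
text = json.dumps(notebook)
loaded = json.loads(text)

for cell in loaded["cells"]:
    print(cell["cell_type"])
```

This plain-text representation is what makes notebooks diff-friendly under version control.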
Notebooks can be exported to a range of static formats, including HTML, LaTeX, PDF, and slideshows.
Notebooks can also be made available as static web pages and served to the public via URLs using nbviewer (IPython Notebook Viewer) without requiring users to install Python. The conversion is handled by the nbconvert tool.
Running the IPython Notebook
Once you have IPython successfully installed in the terminal, type the following command:
```
$ ipython notebook
```
This will start the IPython Notebook server service, which runs in the terminal. By default, the service will automatically open your default web browser and navigate to the landing page. To access the notebook program manually, enter the http://localhost:8888 URL address in your web browser.
Note: By default, a notebook runs on port 8888. To infer the correct notebook address, check the log output from the terminal.
The landing page of the notebook web application is called the dashboard, which shows all notebooks currently available in the notebook directory. By default, this directory is the one from which the notebook server was started.
Creating a new notebook
Click on New Notebook from the dashboard to create a new notebook. Alternatively, from within an active notebook, navigate to the File | New menu option:
Here, you will be presented with the notebook name, a menu bar, a toolbar, and an empty code cell.
The menu bar presents different options that may be used to manipulate the way the notebook functions.
The toolbar provides shortcuts to frequently used notebook operations in the form of icons.
**Notebook cells**
Each logical section in a notebook is known as a cell. A cell is a multi-line text input field that accepts plain text. A single notebook document contains at least one cell and can have multiple cells.
To execute the contents of a cell, from the menu bar, go to Cell | Run, or click on the Play button from the toolbar, or use the keyboard shortcut Shift + Enter.
Each cell can be formatted as a **Code**, **Markdown**, **Raw NBConvert**, or heading cell:
**Code cell**
By default, each cell starts off as a code cell, which executes the Python code when you click on the **Run** button. Cells with a rounded rectangular box and gray background accept text input. The output of an executed cell is displayed in the white space immediately below the text input.
**Markdown cell**
Markdown cells accept the Markdown language that provides a simple way to format plain text into rich text. It allows arbitrary HTML code for formatting.
Mathematical notations can be displayed with standard LaTeX and AMS-LaTeX (the `amsmath` package). Surround a LaTeX expression with single `$` delimiters to display inline mathematics, and with `$$` to display an equation in a separate block. When the cell is executed, MathJax renders the LaTeX equations with high-quality typography.
**Raw NBConvert cell**
Raw cells provide the ability to write the output directly and are not evaluated by the notebook.
**Heading cells**
Cells may be formatted as a heading cell, from level 1 (top level) to level 6 (paragraph). These are useful for the conceptual structure of your document or to construct a table of contents.
Simple exercises with IPython Notebook
Let's get started by creating a new notebook and populating it with some content. We will insert the various types of objects to demonstrate the various tasks.
Creating a notebook with heading and Markdown cells
We will begin by creating a new notebook by performing the following steps:
1. Click on New Notebook from the dashboard to create a new notebook. If from within an active notebook, navigate to the File | New menu option.
2. In the input field of the first cell, enter a page title for this notebook. In this example, type in Welcome to Hello World.
3. From the options toolbar menu, go to Cell | Cell Type and select Heading 1. This will format the text we have entered as the page title. The changes, however, will not be immediate at this time.
4. From the options toolbar menu, go to Insert | Insert Cell Below. This will create another input cell below our current cell.
5. In this example, we will insert the following piece of text that contains the Markdown code:
```
Text Examples
This is an example of an *italic* text.
This is an example of a **bold** text.
This is an example of a list item:
- Item #1
- Item #2
- Item #3
---
#heading 1
##heading 2
###heading 3
####heading 4
#####heading 5
######heading 6
```
6. From the toolbar, select **Markdown** instead of **Code**.
7. To run your code, go to **Cell | Run All**. This option will run all the Python commands and format your cells as required.
When the current cell is executed successfully, the notebook will focus on the next cell below, ready for your next input. If no cell is available, one will be automatically created and will receive the input focus.
This will give us the following output:
```
Welcome to Hello World
Text Examples
This is an example of an italic text.
This is an example of a bold text.
This is an example of a list item:
- Item #1
- Item #2
- Item #3
heading 1
heading 2
heading 3
heading 4
heading 5
heading 6
```
**Saving notebooks**
Go to **File** and click on **Save and Checkpoint**. Our notebook will be saved as an `.ipynb` file.
Mathematical operations in cells
Let’s perform a simple mathematical calculation in the notebook; let’s add the numbers 3 and 5 and assign the result to the `answer` variable by typing in the code cell:
```python
answer = 3 + 5
```
From the options menu, go to Insert | Insert Cell Below to add a new code cell at the bottom. We want to output the result by typing in the following code in the next cell:
```python
print answer
```
Next, go to Cell | Run All. Our answer is printed right below the current cell.
Displaying graphs
The `matplotlib` module provides a MATLAB-like plotting framework in Python. With the `matplotlib.pyplot` function, charts can be plotted and rendered as graphic images for display in a web browser.
Let’s demonstrate a simple plotting functionality of the IPython Notebook. In a new cell, paste the following code:
```python
import numpy as np
import math
import matplotlib.pyplot as plt
x = np.linspace(0, 2*math.pi)
plt.plot(x, np.sin(x), label=r'$\sin(x)$')
plt.plot(x, np.cos(x), 'ro', label=r'$\cos(x)$')
plt.title(r'Two plots in a graph')
plt.legend()
```
The first three lines of the code contain the required import statements. Note that the NumPy, math, and matplotlib packages are required for the code to work in the IPython Notebook.
In the next statement, the variable \( x \) holds our \( x \) axis values, evenly spaced real numbers from 0 to \( 2\pi \). The following statement plots the \( \sin \) function for every value of \( x \). The next `plot` command plots the \( \cos \) function for every value of \( x \) as red circle markers (the `'ro'` format string). The last two lines of the code print the title and legend respectively.
Running this cell gives us the following output:
![Graph showing \( \sin(x) \) and \( \cos(x) \)]
---
**Inserting equations**
What are TeX and LaTeX? **TeX** is a typesetting system that serves as the industry standard for mathematical markup. **LaTeX** is a document preparation system built on top of TeX that separates the document's structure from its content.
Mathematical equations can be displayed using LaTeX in the Markdown parser. The IPython Notebook uses MathJax to render LaTeX expressions surrounded with `$$` inside Markdown.
For this example, we will display a standard normal cumulative distribution function by typing in the following command in the cell:
```
$$N(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{z^2}{2}} \, dz$$
```
Select **Markdown** from the toolbar and run the current cell. This will transform the current cell into its respective equation output:
\[
N(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{z^2}{2}} \, dz
\]
Besides using the MathJax typesetting, another way of displaying the same equation is using the `math` function of the IPython display module, as follows:
```python
from IPython.display import Math
Math(r'N(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{z^2}{2}}\, dz')
```
The preceding code will display the same equation, as shown in the following screenshot:
Notice that, since this cell is run as a normal code cell, the output equation is displayed immediately below the code cell.
We can also display equations inline with text. For example, we will use the following code with a single `$` wrapping around the LaTeX expression:
```
This expression $\sqrt{3x-1}+(1+x)^2$ is an example of a TeX inline equation
```
Run this cell as a Markdown cell. This will transform the current cell into rendered text, with the expression \( \sqrt{3x - 1} + (1 + x)^2 \) typeset inline in the sentence.
### Displaying images
To work with images, such as JPEG and PNG, use the `Image` class of the IPython display module. Run the following code to display a sample image:
```python
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
```
On running the code cell, it will display the following output:
```
In [15]: from IPython.display import Image
    ...: Image(url='http://python.org/images/python-logo.gif')
Out[15]:
```

### Inserting YouTube videos
The `IPython.lib.display` module contains a `YouTubeVideo` class, with which you can embed videos hosted externally on YouTube into your notebook. For example, run the following code:
```python
from IPython.lib.display import YouTubeVideo

# An introduction to Python by Google.
YouTubeVideo('tKTzoB2Vjuk')
```
The video will be displayed below the code, as shown in the following screenshot:
Working with HTML
Notebook allows HTML representations to be displayed. One common use of HTML is the ability to display data with tables. The following code outputs a table with two columns and three rows, including a header row:
```python
from IPython.display import HTML
table = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
HTML(table)
```
The `HTML` function renders the HTML string passed as its input argument. We can see the final output as follows:
<table>
<thead>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</tbody>
</table>
The pandas DataFrame object as an HTML table
In a notebook, pandas allows DataFrame objects to be represented as HTML tables.
In this example, we will retrieve stock market data from Yahoo! Finance and store it in a pandas DataFrame object with the help of the `DataReader` function of the `pandas.io.data` module. We pass the `AAPL` ticker symbol as the first argument, `yahoo` as the second, the start date of the market data as the third, and the end date as the last argument:
```python
import pandas.io.data as web
import datetime

start = datetime.datetime(2014, 1, 1)
end = datetime.datetime(2014, 12, 31)
df = web.DataReader("AAPL", 'yahoo', start, end)
df.head()
```
With the `df.head()` command, the first five rows of the DataFrame object that contains the market data are displayed as an HTML table in the notebook:
```
In [78]: import pandas.io.data as web
In [79]: start = datetime.datetime(2014, 1, 1)
...: end = datetime.datetime(2014, 12, 31)
In [80]: df = web.DataReader("AAPL", 'yahoo', start, end)
...: df.head()
Out[80]:
Open High Low Close Volume Adj Close
Date
2014-01-02 555.68 557.03 552.02 553.13 58671200 77.39
2014-01-03 552.86 553.70 540.43 540.98 98116900 75.69
2014-01-06 537.45 546.80 533.60 543.93 103152700 76.10
2014-01-07 544.32 545.96 537.92 540.04 79302300 75.56
2014-01-08 538.81 545.56 538.69 543.46 64632400 76.04
```
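The `DataReader` call above needs network access to Yahoo! Finance. The HTML rendering itself is a property of the DataFrame, so it can be seen offline with a small hand-built frame and `to_html()` (the numbers below are illustrative, not real quotes):

```python
import pandas as pd

# Illustrative values only; not actual market data.
df = pd.DataFrame({"Open": [555.68, 552.86],
                   "Close": [553.13, 540.98]},
                  index=pd.to_datetime(["2014-01-02", "2014-01-03"]))

# The same HTML representation the notebook displays automatically.
html = df.to_html()
print(html[:60])
```

The notebook simply picks up this HTML representation and renders it in place of the plain-text one.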
**Notebook for finance**
You are now ready to place your code in a chronological order and present the key financial information, such as plots and data, to your audience. Many industry practitioners use the IPython Notebook as their preferred editor for financial model development because it helps them visualize data better.
You are strongly encouraged to explore the powerful features the IPython Notebook has to offer that best suit your modeling needs. A gallery of interesting notebook projects used in scientific computing can be found at https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks.
Summary
In this chapter, we discussed how Python might be suitable for certain areas of finance and also discussed its advantages for our software applications. We also considered the functional programming paradigm and the object-oriented programming paradigm that are supported in Python, and saw how we can achieve brevity in our applications. There is no clear rule as to how one approach may be favored over the other. Ultimately, Python gives programmers the flexibility to structure their code to the best interests of the project at hand.
We were introduced to IPython, the interactive computing shell for Python, and explored its usefulness in scientific computing and rich media presentation. Working in the web browser with the IPython Notebook, we learned how to create a new notebook document, insert text with the Markdown language, perform simple calculations, plot graphs, display mathematical equations, insert images and videos, render HTML, and use pandas to fetch stock market data from Yahoo! Finance as a DataFrame object before presenting its content as an HTML table. This will help us visualize data and deliver rich media presentations to our audience.
Python is just one of many powerful programming languages that can be considered for quantitative finance studies; others include Julia, R, MATLAB, and Java. You should be able to present key concepts more effectively in the Python language. These concepts, once mastered, can easily be applied to any language you choose when creating your next financial application.
In the next chapter, we will explore linear models in finance and techniques used in portfolio management.
Where to buy this book
You can buy Mastering Python for Finance from the Packt Publishing website.
Alternatively, you can buy the book from Amazon, BN.com, Computer Manuals and most internet book retailers.
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
9e639114c429d2907c7fe5d25d76175a42d9446b
|
Pragmatic software testing education
Aniche, Mauricio; Hermans, Félienne; van Deursen, Arie
DOI: 10.1145/3287324.3287461
Publication date: 2019
Document version: Accepted author manuscript
Published in: SIGCSE 2019 - Proceedings of the 50th ACM Technical Symposium on Computer Science Education
Important note: To cite this publication, please use the final published version (if applicable). Please check the document version above.
We also survey 84 students and seven of our teaching assistants on the challenges students face when taking software testing courses.
Keywords: software testing education, software engineering education, computer science education.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
1 INTRODUCTION
Every software developer should be aware of the (high) impact that malfunctioning software can have on our society. We have seen huge losses in the financial market [30], and even researchers withdrawing their papers [33]; all of them caused by software bugs. Making sure software works is perhaps the greatest responsibility of a software developer. Luckily, over the years, software testing has moved away from being considered an activity for 'less skilled' software engineers to being one of the most important skills an engineer should have.
The act of inspecting large and complex code bases to find bugs is not a trivial task in the real world: engineers need to have a broad understanding of different practices that vary from simple manual exploratory testing, where a human tries to find bugs manually by interacting with the system, to advanced bleeding-edge testing techniques, such as automated testing and automated test generation, where engineers program machines to test their system.
Companies such as Facebook [12], Google [41], and Microsoft [35] take testing seriously and require their engineers to master such techniques. Surveys have shown that developers understand the importance of testing-related training [15] and yet many of them still lack formal testing education [6, 34].
Indeed, educating a student in the art of software testing is challenging, for both students and educators. From the educator’s perspective, it is hard to keep a testing course up-to-date with the novelties of the field as well as to come up with exercises that are realistic [14]. Due to the importance of the topic, educators have been experimenting with the introduction of testing earlier in Computer Science programs [17, 19–21, 23, 27], introducing a test-first approach in CS courses [9, 10, 22], developing tools focused on software testing education [11, 38], and proposing more complete postgraduate courses focused on testing [39]. Educators also face the fact that some testing topics are not conceptually straightforward, not easy to demonstrate and generalize, and are not all available in a single textbook [40].
This paper has a twofold goal. First, to present how we have been teaching pragmatic software testing to first-year CS students at Delft University of Technology. Second, to explore students' common mistakes, the topics they find hard to learn, their favourite learning activities, and the challenges they face when learning pragmatic software testing.
To this aim, we analyzed the 1,993 quotes from the feedback reports that we, as teachers and teaching assistants, gave to each of the 230 students of the 2017 edition of the Software Quality and Testing course, taught in the first year of our Computer Science bachelor. In addition, we performed a survey with 84 students, which we augmented by also surveying seven of our TAs.
The main contributions of this paper are:
- A proposal for a pragmatic software testing course based on nine key principles that can be taught to computer science students, including building a test mindset and interaction with practitioners (Section 3).
- An empirical analysis of the students’ most common mistakes (Section 6.1), their perceptions on the most difficult topics in software testing (Section 6.2), and the importance of different teaching activities (Section 6.3) when learning pragmatic software testing.
2 RELATED WORK
Software Testing is an important part of any Software Engineering program [2, 8, 26, 42], and by itself poses several challenges to educators. Unfortunately, the topic still does not receive its deserved attention in several CS programs. Wong [42] argues that many engineers are not well trained in software testing because most CS programs offer it as an elective course. Clarke et al. [8] also point to the fact that, due to the large number of topics to be covered in a Software Engineering program, little attention is given to Software Testing. Astigarraga et al. [2] show that most CS programs tend to emphasize development at the expense of testing as a formal engineering discipline. Lemos et al. [26] show that software testing education can improve code reliability in terms of correctness; however, the authors also argue that university instructors tend to lack the knowledge that would help students increase their programming skills toward more reliable code.
Educators have been suggesting different approaches on how to introduce testing in a CS curriculum: from students submitting their assignments together with test plans or sets [16, 17, 21], performing black-box testing on software seeded with errors [21, 24, 31], students testing each other's programs [36], to suggesting that students use a test-first approach at the very beginning of the program [9, 10, 22, 27]. Many of these authors even suggest that testing should be incorporated into the Computer Science and Software Engineering curricula, not only as an elective discipline, but throughout the curriculum. More specifically, Jones [23] suggests that students need to see the practice of software testing as part of the educational experience and that each core course in the curriculum should impart one or more testing experiences.
In addition, educators have proposed tools that are solely focused on software testing education. Elbaum et al. [11] propose BugHunt, a tool that contains four different lessons on software testing (terminology, black box, white box, efficiency in testing). 79% of the students in their experiment agreed that BugHunt added significant value to the material presented in the lecture(s) on software testing, and 61% agreed that BugHunt could replace the classes on testing. Spacco and Pugh propose Marmoset [38], a tool that helps incentivize students to test their software. Marmoset's innovative element is that if a submission passes all of the public test cases, students are given the opportunity to test their code against a test suite that is not publicly disclosed.
3 PRAGMATIC SOFTWARE TESTING EDUCATION
The Software Testing and Quality Engineering course at Delft University of Technology covers several different aspects of software testing, ranging from topics in the ISTQB industry certification [5] to software testing automation, as well as the future of testing by means of selected research papers.
The course is currently a compulsory part of the 4th quarter of the first year in the Computer Science bachelor. The course corresponds to 5 ECTS (140 hours). Students have two lectures of 1.5 hours plus 4 hours of labwork a week. As a prerequisite, students should have at least basic knowledge of the Java programming language.
The teaching team is currently composed of two teachers and a group of teaching assistants (TAs). The number of TAs varies, as our university has a policy of 1 TA per 30 students. Teachers are responsible for the course design, the lectures, creating and assessing the multiple-choice exams, and have overall responsibility for the course. TAs are responsible for helping students, grading all labwork deliverables, and giving concrete and specific feedback on what students can improve.
Learning goals. At the end of the course, students (1) are able to create unit, integration, and system tests using current tools (e.g., JUnit, Mockito) that successfully test complex software systems, (2) are able to derive test cases that deal with exceptional, corner, and bad weather cases by performing several different techniques (i.e., boundary analysis, state-based testing, decision tables), (3) are able to measure and reflect on the effectiveness of the developed test suites by means of different test adequacy metrics (e.g., line and branch code coverage, MC/DC), (4) are able to reflect on the limitations of current testing techniques, when and when not to apply them in a given context, and to design testable software systems, and (5) are able to write maintainable test code by avoiding well-known test code smells (e.g., Assertion Roulette, Slow or Obscure Tests).
Program. The course covers software quality attributes, maintainability and testability, manual and exploratory testing, automated testing, devops, test adequacy, model-based testing, state-based testing, decision tables, reviews and inspections, design-by-contract, embedded system testing, test-driven development, integration testing, mocks and stubs. More specifically:
- Week 1: Introduction to software testing, fault vs failure, principles of testing, (un)decidability, introduction to JUnit, introduction to labwork.
- Week 2: Life cycle, validation vs verification, V-model, code reviews. Functional testing, partition testing, boundary testing, and domain testing.
- Week 3: Structural testing, adequacy criteria, code coverage. Unit vs integration vs system testing, mock objects, and test-driven development.
- Week 4: State-based testing, model-based testing, and decision tables.
- Week 6: Security testing. Search-based software testing.
- Week 7: Guest lectures from industry.
Key elements. To achieve a pragmatic software testing course, we have devised and currently follow some key elements:
Theory applied in the lecture. We put our efforts into developing lectures where students can see theory being applied to practice. Our lectures often have the following structure: we present a (buggy) code implementation (initially on slides, and later in the IDE); we discuss where the bug is; we explore, at a conceptual level, a systematic approach to detect the bug; and we apply the approach to a set of concrete examples. In other words, we do not only focus on explaining abstract ideas, but on concretely showing how to apply them to different real-world problems, using real-world tools like JUnit, Mockito, and Cucumber.
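To make this lecture format concrete, a minimal sketch of the idea (a hypothetical example, written in Python for brevity; the course itself uses Java and JUnit):

```python
# Hypothetical lecture example: hunt the bug in a leap-year check.
def is_leap_buggy(year):
    # Bug discussed in class: forgets the 400-year exception,
    # so the year 2000 is misclassified.
    return year % 4 == 0 and year % 100 != 0

def is_leap(year):
    # Correct version derived systematically: divisible by 4,
    # except centuries, unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A systematic set of representative inputs exposes the bug.
cases = {2004: True, 1900: False, 2000: True, 2019: False}
for year, expected in cases.items():
    assert is_leap(year) == expected
assert is_leap_buggy(2000) != is_leap(2000)  # the bug the lecture hunts for
```

The point of the exercise is not the leap-year rule itself but the habit of choosing inputs systematically rather than at random.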
Real-world pragmatic discussions. Software testing is a challenging activity to be done in practice. This means that developers often make trade-offs in deciding what and how much to test. Engineering questions that arise when complex software systems are being tested, such as "how much should I test?", "how should I test a mobile application that communicates with a web server?", and "should I use mocks to test this application?" are often discussed in classroom so that students see how to extrapolate from our often small exercises to their future real lives as developers.
Build a testing mindset. Software testing is not seen as an important task by many students. A software testing course should inspire students to think about testing whenever they implement any piece of code. In our testing course, we aim to achieve such a testing mindset by (1) showing how testing can be a creative activity, requiring strong developers, by means of several live coding sessions and rich pragmatic discussions, (2) demonstrating not only the usefulness of any testing technique we teach, but also how they are applied, as well as what trade-offs such techniques have in the real-world, (3) bringing guest lecturers who talk about the importance of software testing for their companies.
Software testing automation. The software engineering industry has long been advocating the automation of any software testing activity [12, 35, 41]. However, some software testing courses still focus on writing test case specifications solely as documents, and do not discuss how to automate them. In our course, for all the theoretical and systematic test design techniques we present, from functional to structural testing, and from unit- to system-level tests, students later write the resulting test cases in the form of automated tests. Mastering tools such as JUnit and Mockito, standard tools for test automation in Java, is a clear learning goal of our course. The importance of automation also appears strongly in our labwork, which we discuss next.
A hands-on labwork. We see the labwork as an important learning method. In our course, by means of a practical labwork assignment, students apply a selection of techniques to a game of roughly 3,000 lines of Java code, namely J'PacMan. The labwork contains a set of 50 exercises in which students exercise all the techniques we teach. It is important to note that students not only design test cases on paper, but also automate them: a large part of their work consists of producing automated JUnit test cases.
In the following, we present the main deliverables of our labwork. The complete assignment can be found in our online appendix [1].
- **Part 0 (Pre-requisites).** Clone the project from Github, configure the project in your IDE, write your first JUnit test, run coverage analysis.
- **Part 1.** Write a smoke test, functional black-box testing, boundary tests, reflect on test understandability and best practices.
- **Part 2.** White-box testing, mock objects, calculate code coverage and apply structural testing, use decision tables for complex scenarios, reflect on how to reduce test complexity and how to avoid flaky tests.
- **Part 3.** Apply state-based testing, test reusability, refactor and reflect on test smells.
Test code quality matters. Due to the importance of automated testing activities, software testers will deal with large test codebases. Empirical research has indeed shown that test code smells are frequent in software systems, and that their presence has a strong negative impact on the maintainability of the affected classes [3]. We often reinforce the importance of refactoring test code and keeping it free of smells; any test code we write during live coding sessions, we keep as free of smells as possible. Test smell catalogues, such as the one proposed by Meszaros [32], are deeply discussed in a dedicated lecture.
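One smell from Meszaros' catalogue, Assertion Roulette, can be sketched as follows (a hypothetical Python example; the course material itself uses Java and JUnit):

```python
# Hypothetical code under test.
def parse_price(text):
    currency, amount = text.split()
    return currency, float(amount)

# Smelly: many unexplained assertions in one test. When one fails,
# it is unclear which behaviour broke ("Assertion Roulette").
def test_parse_price_roulette():
    c, a = parse_price("EUR 9.99")
    assert c == "EUR"
    assert a == 9.99
    c, a = parse_price("USD 0")
    assert c == "USD"
    assert a == 0.0

# Refactored: one behaviour per test, with intention-revealing
# names and assertion messages.
def test_parses_currency_code():
    assert parse_price("EUR 9.99")[0] == "EUR", "currency code"

def test_parses_amount_as_float():
    assert parse_price("EUR 9.99")[1] == 9.99, "numeric amount"

for test in (test_parse_price_roulette,
             test_parses_currency_code,
             test_parses_amount_as_float):
    test()
```

The refactored tests are longer in total, but a failure now points directly at the broken behaviour.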
Design systems for testability. Designing software in such a way that it eases testability is a common practice among practitioners [13, 18, 29]. This requires us to discuss not only software testing in our course, but also software architecture and design principles for testable software systems, such as dependency inversion [28], observability, and controllability, in an entire lecture dedicated to the topic. Questions like "Do I need to test this behavior via a unit or a system test?" and "How can I test my mobile application?" are extensively discussed not only through the eyes of software testing, but also through the eyes of software design.
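The dependency-inversion idea behind testable design can be sketched as follows (a hypothetical Python example; in the course, Mockito-generated mocks play the role of the hand-rolled stub):

```python
# Business logic depends on an abstract gateway rather than a
# concrete payment service, so a test can substitute a stub.
class PaymentGateway:
    def charge(self, cents):
        raise NotImplementedError

class Checkout:
    def __init__(self, gateway):
        self.gateway = gateway  # injected dependency, not hard-wired

    def pay(self, cents):
        if cents <= 0:
            return "rejected"
        self.gateway.charge(cents)
        return "paid"

class StubGateway(PaymentGateway):
    def __init__(self):
        self.charged = []

    def charge(self, cents):
        self.charged.append(cents)  # record the call instead of paying

stub = StubGateway()
checkout = Checkout(stub)
assert checkout.pay(500) == "paid" and stub.charged == [500]
assert checkout.pay(0) == "rejected" and stub.charged == [500]
```

Because `Checkout` never names a concrete gateway, the same code is observable and controllable from a unit test without any network access.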
Mixture of pragmatic and theoretical books. The two books we use as textbooks in the course are "Foundations of software testing: ISTQB certification" [5], which gives students a solid foundation in testing theory, and "Pragmatic Unit Testing in Java 8 with JUnit" [25], which gives students concrete and practical examples of how to use testing tools like JUnit. We believe the two complement each other and both are important for students who will soon become software testers.
Interaction with practitioners. We strongly encourage students' interaction with practitioners throughout our course. Having guest lectures from industry practitioners helps us to show the pragmatic side of software testing. Guests focus their lectures on how they apply software testing at their companies, the tools they use with their pros and cons, and the mistakes and challenges they face. In the 2017 edition, we also experimented with Ask-Me-Anything (AMA) sessions, where we called experts from all over the world via Skype and students had 15 minutes to ask any software-testing-related questions.
Grading. We currently use the following formula to grade our students: $0.25 \times \text{labwork} + 0.75 \times \text{exam}$. The labwork (as explained above) is composed of 4 deliverables, each graded by our TAs in a range of $[0..10]$. We average the grades of the four deliverables, which together compose the labwork component of the grade. At the end of the course, we give a 40-question multiple-choice exam. Students may take a resit 6 weeks later if they did not pass the first time. We also offer an optional midterm exam for students who want to practice beforehand.
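The grading formula can be checked with a small calculation (the grades below are hypothetical):

```python
def final_grade(deliverables, exam):
    # Labwork is the average of the four deliverable grades (0..10);
    # the final grade weighs labwork 25% and the exam 75%.
    labwork = sum(deliverables) / len(deliverables)
    return 0.25 * labwork + 0.75 * exam

# Uniform grades pass through unchanged.
assert final_grade([8, 8, 8, 8], 8) == 8.0
# Mixed deliverables: labwork average 7.5, exam 7.0
# -> 0.25 * 7.5 + 0.75 * 7.0 = 7.125
assert abs(final_grade([6, 7, 8, 9], 7.0) - 7.125) < 1e-9
```

The 75% exam weight means a strong exam dominates the final grade, which matches the course's emphasis on the multiple-choice exam.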
4 RESEARCH METHODOLOGY
The goal of this study is to provide a better understanding of the difficulties and challenges that students face when learning pragmatic software testing.
To that aim, we analyze the data from 230 students of the 2016-2017 edition of our software testing course. We propose three research questions:
**RQ1:** What common mistakes do students make when learning software testing?
RQ2: Which software testing topics do students find hardest to learn?
RQ3: Which teaching methods do students find most helpful?
To answer our research questions, we collect and analyze data from three different sources: the feedback reports that TAs give to students throughout the course, a survey with students, and a survey with the TAs, both performed after the course. We characterize the participants in Section 5. In the following, we detail the three parts of our methodology.
**Manual content analysis on the feedback.** As we explain in Section 3, students work on and produce four deliverables during the course. After each deliverable, our team of TAs manually reads students’ reports, source code, and tests, and with the help of a rubric, provides them with rich qualitative feedback.
This feedback usually contains several quotes that touch on a mix of different topics, such as mistakes students made in the exercises, tips on how to improve their existing work, issues in the written report, and even compliments for good work. The language of such feedback reports is usually informal, as we do not constrain how TAs write their feedback.
We analyze the content of all feedback reports. To that aim, we first filter out any feedback that is not directly related to software testing (e.g., comments on exercises that were not done, or compliments). We then follow an iterative process, derived from standard qualitative data analysis procedures [37]: (1) we assign a code for each quote in the feedback; the code summarizes the essence of the quote, (2) if a quote does not belong to any existing codes, we introduce a new code, (3) each quote has just a single code; if a quote tackles two different problems, we split the original quote into two quotes, (4) to assign the correct code to a quote, we used our knowledge of the testing course, labwork, and the existing rubrics. We assigned 40 different codes to a total of 1,993 quotes. As a next step, we started an iterative merging process to derive the final themes, by grouping similar codes into higher-level themes, e.g., the theme “maintainability of test code” contains quotes from the “test quality”, and “test duplication” codes. We ended up with eight themes that we present in the Results (Section 6).
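The coding-and-merging procedure above can be sketched as follows (the quotes, codes, and mappings here are hypothetical illustrations; the real code book is in the online appendix [1]):

```python
from collections import Counter

# Step 1: each feedback quote receives exactly one code.
code_of_quote = {
    "test name does not say what is tested": "test quality",
    "same setup copied in three tests": "test duplication",
    "decision table misses a column": "decision table errors",
}

# Step 2: iterative merging groups similar codes into themes,
# e.g. "test quality" and "test duplication" both become
# "maintainability of test code".
theme_of_code = {
    "test quality": "maintainability of test code",
    "test duplication": "maintainability of test code",
    "decision table errors": "boundary testing",
}

theme_counts = Counter(theme_of_code[c] for c in code_of_quote.values())
assert theme_counts["maintainability of test code"] == 2
assert theme_counts["boundary testing"] == 1
```

Counting quotes per theme is what produces the frequencies reported in Section 6.1.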
**Survey with students.** With the goal of capturing their perceptions on learning software testing, we asked students to answer a questionnaire that contained both open and closed questions at the end of the course.
The survey contains a total of 18 questions, none of which are required. The two closed questions of the survey asked students about the difficulty of learning and putting into practice the concepts and techniques we taught, and about the importance of the different activities we used throughout the course. In these questions, students had to choose from a five-point Likert scale, ranging from strongly disagree to strongly agree (see Figures 2 and 3). The open questions were mostly focused on understanding the students' main challenges, difficulties, and suggestions for improving our testing course. We apply qualitative techniques to analyze the results of each open question individually, similarly to our analysis of the feedback reports. The full survey as well as the full code book can be found in our online appendix [1].
We did not make answering the survey compulsory for the students. We received 84 complete answers out of the 230 students.

**Survey with Teaching Assistants.** Our TAs support students throughout the course, by answering their questions, supporting their work during the lab, and by grading their assignments. As a consequence of such intense contact with students, TAs obtain a good perspective on the challenges of teaching software testing.
We also performed a similar survey with TAs, focusing on what they perceive as challenges for students. The survey contained the same two closed questions from the students’ survey (challenges when applying software testing, and the importance of the different activities). In the open questions, we focused on asking about the common mistakes students do during the lab, as well as their perceptions on the challenges that students face.
We shared the survey internally at the end of our course. Answering the survey was also not compulsory for TAs. In the end, we received seven complete answers out of the ten TAs.
5 CHARACTERIZATION OF THE PARTICIPANTS
**Students.** 66 students identify themselves as male, 8 as female, and 10 preferred not to answer. 89.3% of the students are between 18 and 24 years old, five are between 25 and 34, and four are 17 or younger. Only three students were international students. In terms of Java knowledge, on a scale from 1 to 10, 9.5% of students rate their knowledge between 9 and 10, and 72% of them between 7 and 8. Only 4 students rate themselves 5 or below.
Thanks to the introduction to JUnit that students receive during their very first programming course, most of them already had some knowledge of software testing prior to our course. In fact, as we show in Figure 1, before the course starts, on a scale from 1 to 10, 39% of them rate themselves between 6 and 8, 44% between 4 and 5, and only 16% between 1 and 3. No student rated herself a 9 or 10. Students considered that their knowledge increased after the course: all of them rated their knowledge after the course as 6 or greater; 39% of them ranked themselves with an 8, and 14.6% with a 9. Two students ranked themselves with a 10.
Teaching Assistants. All TAs are between 18 and 24 years old, one of them being female. They all ranked their Java knowledge between 8 and 10, and their software testing knowledge between 7 and 8. Four of them are TAs for the first time in our course; the other three TAs are performing this role for the third year in a row.
6 RESULTS
6.1 RQ1: What common mistakes do students make when learning software testing?
We characterize the labwork feedback in eight different themes (ordered by their frequency): test coverage, maintainability of test code, understanding testing concepts, boundary testing, state-based testing, assertions, mock objects, and tools.
Test coverage (416 times, 20.87%). Students commonly either miss tests, i.e., they do not provide all the expected tests for a given piece of code, or they write tests that are not totally correct, e.g., the test does not actually test the piece of code, or the test exercises the wrong class. In addition, we also observed cases (14) where the student actually “overtested” (i.e., wrote tests for more cases than required).
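A typical instance of a test that "does not actually test the piece of code" is one without assertions (a hypothetical Python sketch; the students write such tests in JUnit):

```python
# Hypothetical code under test.
def discount(price, percent):
    return price * (1 - percent / 100)

# Common student mistake: the test runs the code but never checks
# the result, so it passes even if discount() is broken.
def test_discount_incomplete():
    discount(100, 20)  # no assertion: a vacuous test

# Fixed: the test pins down the expected behaviour, including the
# no-discount boundary.
def test_discount_applies_percentage():
    assert discount(100, 20) == 80.0
    assert discount(50, 0) == 50.0

test_discount_incomplete()
test_discount_applies_percentage()
```

The vacuous version even contributes code coverage, which is one reason coverage alone is a weak signal of test quality.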
Maintainability of test code (407 times, 20.42%). Students often need advice on how to write maintainable test code: general test quality advice, such as better naming and reducing excessive complexity (247); code duplication and lack of reusability (69); tests that could be split in two (31); and better usage of test setup and cleanup features, such as JUnit's Before and After (47).
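The setup advice corresponds to JUnit's Before/After hooks; the refactoring it asks for can be sketched as follows (a hypothetical Python example, with a shared fixture function playing the role of @BeforeEach):

```python
# Hypothetical class under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def make_cart():
    # Shared fixture: every test starts from the same populated cart,
    # instead of duplicating this setup in each test body.
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    return cart

def test_total_sums_prices():
    assert make_cart().total() == 12

def test_add_appends_item():
    cart = make_cart()
    cart.add("mug", 5)
    assert cart.total() == 17

test_total_sums_prices()
test_add_appends_item()
```

Hoisting the setup removes the duplication the TAs keep flagging, and each test gets a fresh cart, so the tests stay independent.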
Understanding testing concepts (306 times, 15.35%). Students provide incomplete answers or have difficulties when it comes to questions that involve testing concepts and ideas, such as what flaky tests are about, advantages and disadvantages of unit and system tests, and the importance of removing test smells.
Boundary testing (258 times, 12.95%). Students often miss tests required to cover a boundary (142). As we also ask them to first build a decision table and then derive the tests from it, we see that they often miss elements in the table (50) and produce tables that are not fully correct (46).
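This kind of mistake is easy to picture with a small, hypothetical decision-table exercise (not taken from the course material): each row of the table becomes one test, so a missing row is a missing test, and the rows around a threshold are where boundary mistakes happen.

```python
# Hypothetical rule: free shipping iff the order total is >= 50
# or the customer is a member (and the cart is not empty).
def free_shipping(total, member):
    return total > 0 and (total >= 50 or member)

# Decision table plus boundary rows: (total, member) -> expected.
table = [
    (60, False, True),
    (60, True,  True),
    (10, True,  True),
    (10, False, False),
    (0,  True,  False),   # boundary: empty cart
    (50, False, True),    # boundary: exactly on the threshold
    (49, False, False),   # boundary: just below the threshold
]
for total, member, expected in table:
    assert free_shipping(total, member) == expected
```

Dropping, say, the `(49, False, False)` row is precisely the "missing element in the table" feedback the TAs give most often.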
6.2 RQ2: Which software testing topics do students find hardest to learn?
In Figure 2, we show, based on the survey data, how students and TAs perceive the difficulty of each of the topics we teach.
Most students consider using the JUnit framework (Q1), as well as thinking in terms of the Arrange-Act-Assert (AAA) pattern that structures any unit test (Q2), easy to learn. In fact, 76% and 73% of students consider it easy or very easy to learn JUnit and to use the AAA pattern, respectively. These perceptions are also shared by TAs, and match the RQ1 results, as the amount of feedback related to bad tool usage is small (4.21%).
Interestingly, applying the MC/DC (Modified Condition/Decision Coverage) [7] criterion to test complicated conditions (Q7) was considered hard or very hard by 49% of the students, making it the hardest topic of all. However, other coverage criteria seem easier to learn, as only 16% of students considered structural testing hard (Q6).
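MC/DC requires, for every atomic condition, a pair of tests that differ only in that condition and flip the decision's outcome; this needs only n+1 tests for n conditions, rather than all 2^n combinations. A sketch for a hypothetical decision `a and (b or c)` (our own illustration, not an exercise from the course):

```python
# Hypothetical decision with three atomic conditions.
def decision(a, b, c):
    return a and (b or c)

# An MC/DC-adequate set: 4 tests for 3 conditions.
# Independence pairs: 'a' rows 1 & 2, 'b' rows 1 & 4, 'c' rows 3 & 4.
tests = {
    (True,  True,  False): True,
    (False, True,  False): False,  # flip only 'a' vs row 1 -> outcome flips
    (True,  False, True):  True,
    (True,  False, False): False,  # flip only 'c' vs row 3 -> outcome flips
}
for (a, b, c), expected in tests.items():
    assert decision(a, b, c) == expected
```

The difficulty students report is usually in finding these independence pairs for each condition, not in computing the decision's truth table.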
Applying software testing in a pragmatic way, as expected, was considered hard by students. Deciding how much testing is enough (Q14) is considered a hard topic by 42% of students (the second hardest topic). TAs agree, and even perceive this topic as harder than the students do. This result also matches our findings in RQ1, where test coverage is the most prominent topic in the feedback. In addition, writing the minimum set of tests that gives confidence (Q18) is considered hard by 25% of students and neutral by 40%. Choosing the right level of testing (e.g., unit, integration, or system tests) is not considered easy by all of them: 29% consider it easy, 50% are neutral, and 21% perceive it as a hard topic (Q3). Not a single TA perceived this topic as easy. We believe these findings further highlight the importance of discussing the pragmatic side of software testing.
When it comes to test code best practices, students had contradicting perceptions. The usage of mocks to simulate a dependency (Q4) and writing fast, reproducible, and non-flaky tests (Q17) were considered easy topics to learn by 42% and 56% of students, respectively. TAs agree that students learn these topics with less difficulty. However, when it comes to following testing best practices (Q9), 46% of students perceive it as an easy topic, while 71% of TAs perceive it as a hard topic for students. The students' perception also contradicts the results of RQ1, where we observed a large amount of feedback focused on best practices in their assignments.
Finally, testability seems less challenging for students than for TAs. While students perceive optimizing code for testability (Q10) as just somewhat challenging (35% find it easy, 41% are neutral, and 25% find it hard), 67% of TAs believe that testability is a hard topic for students. As we conjecture that TAs have a better understanding of testability than the students, these findings suggest that the students are not sufficiently aware of the difficulty of testability.
6.3 RQ3: Which teaching methods do students find most helpful?
In Figure 3, we show how students perceive the importance of each learning activity we have in our software testing course.
Students perceive activities that involve practitioners as highly important. More specifically, guest lectures from industry (Q2) were considered important by 72% of participants. The Ask-me-Anything sessions (Q10), on the other hand, were considered important by only 32% of participants; 38% are neutral, and 30% do not consider them important.
Moreover, different interactions during the lecture are also considered important by students. Teachers performing live coding (Q3) and discussions and interactions during the lecture (Q4) are considered important by 75% and 65% of students, respectively. We conjecture that discussions and live coding are moments in which students have the opportunity to discuss the topics they consider hard, such as how much testing is enough, which test level to use, and test code best practices (as seen in RQ1 and RQ2).
On the other hand, the two books we use as textbooks in the course are not considered fundamental for students. More specifically, 31% of students find the ISTQB [5] not important and 36% are neutral (Q6), whereas 29% of them find the PragProg [25] not important and 51% are neutral (Q5). Reading related papers (Q9) is also considered not important for 35% of them.
6.4 Limitations of our study
The qualitative analysis of the open questions in the survey was manually conducted by the first author of this paper. The analysis, therefore, could be biased towards the views of the authors. To mitigate the threat, we make all the data available for inspection in our online appendix [1].
TAs were responsible for giving feedback to students throughout the study. Although we instruct all TAs on how to grade and what kind of feedback to give (they all follow the same rubrics), different TAs have different personalities. In practice, we observed that some TAs provided more feedback than others. While we believe this could have some impact on the percentages of each theme in RQ1, we do not expect any new themes to emerge.
In terms of generalizability, although we analyzed the behavior of 230 students, we do not claim that our results are complete and/or generalizable. Furthermore, most students were Dutch (we only had 3 international students answering our survey), which may introduce cultural bias to our results. We urge researchers to perform replications of this study in different countries and universities.
7 CONCLUSIONS
Software testing is a vital discipline in any Software Engineering curriculum. However, the topic poses several challenges to educators and to students. In this paper, we proposed a pragmatic software testing curriculum and explored students’ common mistakes, hard topics to learn, favourite learning activities, important learning outcomes, and challenges they face when studying software testing.
Researchers and educators agree that software testing education is fundamental not only to industry, but also to research. We hope this paper helps the community to improve even more the quality of their software testing courses. As Bertolino [4] states in her paper on the achievements, challenges, and dreams on software testing research: “While it is research that can advance the state of the art, it is only by awareness and adoption of those results by the next-coming generation of testers that we can also advance the state of practice. Education must be continuing, to keep the pace with the advances in testing technology”.
ACKNOWLEDGMENTS
We thank all the students and teaching assistants that followed our course in the last years.
REFERENCES
Project Hoover: Auto-Scaling Streaming Map-Reduce Applications
Rajalakshmi Ramesh
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
rrajalakshmi@gatech.edu
Liting Hu
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
foxting@gatech.edu
Karsten Schwan
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
schwan@cc.gatech.edu
ABSTRACT
Real-time data processing frameworks like S4 and Flume have become scalable and reliable solutions for acquiring, moving, and processing voluminous amounts of data continuously produced by large numbers of online sources. Yet these frameworks lack the elasticity to horizontally scale up or scale down their processing capacity based on current rates of input events and desired event processing latencies. The Project Hoover middleware provides distributed methods for measuring, aggregating, and analyzing the performance of distributed Flume components, thereby enabling online configuration changes to meet varying processing demands. Experimental evaluations with a sample Flume data processing code show Hoover’s approach to be capable of dynamically and continuously monitoring Flume performance, demonstrating that such data can be used to right-size the number of Flume collectors according to different log production rates.
Categories and Subject Descriptors
D.4.7 [Operating Systems]: Organization and Design – distributed systems.
General Terms
Management, Performance, Design, Experimentation.
Keywords
Flume OG, Pastry, Scribe, Queuing Model.
1. INTRODUCTION
Web 2.0 companies like Facebook, Yahoo, LinkedIn, and Twitter generate large amounts of log data, including (1) user activity events like clicks, comments, or sharing, (2) operational metrics like call latency and errors, and (3) system metrics like CPU, memory, and network utilization. Such data is invaluable for debugging, performance management, and for commercial reasons. Consider for instance, an e-commerce website that collects logs to monitor the number of users who are currently viewing a particular product. Using this data, the company can increase sales by running micro-promotions that offer, say “20% off”, if more than 5000 users are currently viewing the same product.
Recent years have seen the development of distributed log aggregators specialized for collecting and processing online log data, such as Facebook’s Scribe [3], Yahoo’s Chukwa [10], and Cloudera’s Flume [2]. These systems convert log entries into events, which are then aggregated and processed by distributed sets of agents in multi-tier frameworks backed by key-value stores like HBase [5] and distributed file systems like HDFS [6].
Online log aggregators face challenges. First, their processing capabilities should be horizontally scalable -- up or down -- based on current volumes of input logs. Such elasticity is important because events will be delayed or even lost if the aggregate consumption rate of intermediate processing nodes, called ‘collectors’ in Flume, cannot keep pace with the rate at which log events are produced. Second, there is a need for load balancing across different sets of collectors when log input rates are not evenly distributed across the system’s many sources. Third, such elasticity must function at volumes of up to a few billion messages a day, as companies like Facebook collect everything from access logs, to performance statistics, to actions going to its News Feed. At these scales, however, the online monitoring required for elasticity is challenging, particularly for current commercial approaches that expose metrics through JMX MBeans [8], where each MBean’s attributes and operations are externally accessed through RMI [9]. RMI does not perform well at large scale due to the overheads of its registration logic, serialization, and its slow failure detection and cleanup. Recent solutions like Jolokia [7] address this by exposing MBeans over HTTP. Cloudera’s Flume allows users to gather node metrics via HTTP or by injecting them as a separate data flow alongside the data being processed. However, those additional data flows add unmanaged overhead to the streaming data processing subsystem. Finally, research approaches like those described in [15] constitute potential solutions, but have not yet been deployed in commercial settings.
To enable auto-scaling online web log processing systems, we present Project Hoover, which is middleware that addresses the above monitoring challenges by integrating the Pastry/Scribe multicast framework [14] [11] with Cloudera’s Flume log processing system.
Novel approach to online metrics gathering. Rather than using an additional internal data flow to gather metrics, Hoover creates a separable external channel to export metrics to an ‘aggregator’ that operates alongside Flume’s central management master used for dynamic reconfiguration of Flume components. This is implemented via a light-weight group communication and event notification system (Scribe) built on a peer-to-peer overlay (Pastry).
Online assessment of Flume component ‘health’ enables auto-scaling. The aggregator uses dynamically collected metrics to assess operational characteristics of the Flume application’s execution. Specifically, this paper demonstrates the use of such metric data to model Flume components as a network of queues and measuring variables like average input rates, output rates, queue lengths, etc. This information is then used to auto-scale Flume to match its aggregate processing capacity to dynamically changing log input rates, to maintain desired end-to-end processing latencies.
Performance measurements demonstrate the efficiency of Hoover’s online monitoring and analysis, as well as its utility for
adjusting Flume performance to current conditions and needs. Specifically, they show only a small increase in anycast round-trip times as the number of nodes in the Pastry-based monitoring overlay increases. Further, because the average aggregation time per node is only on the order of tens of milliseconds, it is possible to aggregate statistics from a large number of nodes within time intervals of only a few minutes. The scale-up/down thereby enabled is shown to be useful via a simple auto-scaler implemented with a Secant root-finding method. It predicts the number of collectors required to maintain the health of the Flume system, providing a good approximation for Flume configurations with high average queue lengths and low health scores.
The remainder of this paper is organized as follows. Section 2 discusses background and related work. Section 3 describes Hoover’s design. Section 4 evaluates Hoover with experiments. We conclude with directions for future work in Section 5.
2. BACKGROUND AND RELATED WORK
We first explain the design pattern for today’s multi-tier log processing systems like Flume. We then discuss related literature.
2.1 BACKGROUND
Flume is a distributed service for collecting and processing large amounts of log data generated by clients, ultimately placed into some persistent store for later use. Figure 1 shows a typical deployment of Flume comprised of three tiers: (1) the agent tier generates events from client logs; (2) collectors aggregate events from separate data logs and forward them to (3) the storage tier comprised of HBASE and the Hadoop Distributed File System (HDFS).
Every node in Flume has a source and a sink. The source tells it where to collect data, while the sink tells it where to send the data. A separate process, called the Flume master, is the central management point; it directs data flows by assigning source/sink configurations to all nodes, and it communicates dynamic configuration updates. Sinks can additionally be configured with ‘decorators’ that perform simple processing on data. For example, network throughput can be increased by batching events and then compressing them before moving them to the sink.
Figure 1. A typical three-tier deployment of Flume.
2.2 RELATED WORK
Facebook’s Scribe [3] and Yahoo’s Chukwa [10] gather logs based on the “push” model. Scribe uses local servers running on each node, in order to aggregate online logs and send them to a central collector (or to multiple collectors). Yahoo’s Chukwa is built on top of the Hadoop distributed file system and the MapReduce framework. It uses a push model in which each frontend node sends logs to a set of collectors over sockets. The collectors write log entries to HDFS. LinkedIn’s Kafka [13] gathers logs based on a “pull” model. A stream of messages of a particular type is defined by a topic, and a producer can publish messages to a topic. The published messages are then stored at a set of servers called brokers, which periodically write data into HDFS.
The systems outlined above facilitate reliable, scalable, efficient, and time critical aggregation and storage of live data. However, we are not aware of their ability to auto-scale their numbers of collectors based on log volume changes or workload imbalance. This will result in variable latencies in moving log data and raises the possibility of data loss and failure of collectors when they are overwhelmed.
Existing commercial log monitoring systems expose a limited set of metrics through JMX MBeans [8], where metric collection typically involves injecting JMX metrics into systems like Ganglia [4], Amazon CloudWatch [1], etc. Ganglia has been used at large scale to collect summary operational statistics in grids and clusters. However, its membership management uses native IP Multicast to communicate with its peer nodes, which is not appropriate for scale-out datacenter systems in which components join and leave frequently. Preferable would be lighter weight solutions with on-the-fly deployment and simple membership management.
Amazon’s CloudWatch provides a generic and comprehensive solution for monitoring resources, applications, and services. It monitors metrics generated by a customer’s applications, and it provides system-wide visibility into resource utilization, application performance, and operational health. However, its closed source nature restricts its use to Amazon web services and its cloud resources.
Project Hoover provides a simple way to collect metrics about the operational behavior of stream processing systems like those constructed with Flume. Specifically, we use Scribe and Pastry, which jointly provide efficient request reply routing and fault-recovery and can quickly adapt to the arrival and departure of nodes. Statistics from different flow paths are aggregated by publishing them in dedicated topics supported by Scribe, thus using its “push” model to gather statistics from nodes in each tier of the system. It then uses a “pull” model to aggregate the gathered statistics to form a global snapshot of the overall system. An external ‘aggregator’ uses aggregate information to run models like those that evaluate and/or predict log traffic to adjust the number of collector machines in the system.
3. HOOVER’S DESIGN
As shown in Figure 2, the Project Hoover middleware has four main elements. They are (1) monitoring of local statistics for each agent and collector, (2) separately summarizing/publishing statistics within the agent group and the collector group, (3) merging the summarized statistics of two groups and sending it to an external aggregator using a Scribe anycast message, and (4) the ability to run online models that use summary statistics to dynamically tune the number of collectors in the system.
3.1 Local Node Metric Collection
Hoover models every node in the Flume system as an individual queue, and then, agents and collectors belonging to a particular data flow form a network of queues. Flume nodes expose three queue variables: input rate, output rate, and queue length. However, Flume’s implementations of source and sink elements do not have built-in queues, thus requiring Hoover metric collection to emulate them, as explained next.
Flume’s ‘tail’ source reads a single line from a file and converts it into an event. The events are then pushed out of a sink element. We interpret the number of lines read per unit time as the input rate of the queue; the number of lines that exit the sink per unit time is the output rate of the queue; and the average difference between the number of lines read and the number of events that exit the sink per unit time is the length of the queue.
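A minimal sketch of this queue emulation might look as follows (hypothetical helper code, not Flume’s actual reporting API; the per-interval counter reset is our assumption):

```python
class QueueStats:
    """Emulates queue variables for a Flume source/sink pair
    that has no built-in queue."""

    def __init__(self):
        self.lines_in = 0    # lines read by the 'tail' source
        self.events_out = 0  # events that exited the sink

    def record_input(self, n=1):
        self.lines_in += n

    def record_output(self, n=1):
        self.events_out += n

    def snapshot(self, interval_secs):
        """Return (input_rate, output_rate, queue_length) for the
        interval, then reset the counters for the next interval."""
        input_rate = self.lines_in / interval_secs
        output_rate = self.events_out / interval_secs
        queue_length = self.lines_in - self.events_out
        self.lines_in = self.events_out = 0
        return input_rate, output_rate, queue_length
```

For example, 100 lines read and 90 events emitted over a 10-second interval yield an input rate of 10 events/s, an output rate of 9 events/s, and a queue length of 10.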
Hoover uses Scribe multicast trees for Flume agents and collectors to aggregate and summarize their local snapshots. Every logical node spawns an instance of a Pastry node and a custom Scribe client application [12]. Local snapshots from agents and collectors are collected to calculate a group snapshot that contains the group’s average input rate, average output rate, and average queue length. We create two aggregation trees, rooted at two rendezvous points, to disseminate local snapshots to other members. These two trees are the “AGENT” tree and the “COLLECTOR” tree. Agents subscribe to the “AGENT” tree, while collectors subscribe to the “COLLECTOR” tree.
As shown in Figure 3, each node stores local statistics in its local snapshot as a set of (attributeName, value) pairs, such as (InputRate, 10). Periodically, each agent node triggers a multicast message and passes its local snapshot to every other node in the “AGENT” group. Other nodes save these values in their respective caches. The same actions are taken by collectors. Over some slightly longer period of time, once one collector receives results from the “AGENT” group, it combines them with the average of its caches and sends the final global snapshot to the external global aggregator.
If \( t_n \) is the interval of time after which agents and collectors publish their local snapshots to other members of their respective Scribe groups, then the local snapshots contain the EMA of the topic variable values between times \( t_n \) and \( t_{n-1} \). Each logical node resets its local snapshot and multicasts it to other members of the same Scribe group. Once all nodes have finished multicasting their respective local snapshots, every node now contains an in-memory cache of local snapshots of all members in its group. From this snapshot, it can then compute the average input rate, average output rate, and average queue length for the group, to form a group snapshot. By simply replicating that snapshot to all other group members, the same snapshot data is now available to all group members, regardless of node failures or removals.
**Figure 2. Architecture of Hoover.**
Source and sink elements in logical nodes expose a Reporting object, which provides generic methods to add queue variables and other metrics as key-value pairs. The Reporting objects from the source and sink are refreshed during every heartbeat interval. Logical nodes then compute the Exponential Moving Average (EMA) of reporting objects to form a local snapshot. For other log processing systems, there may be alternative methods for defining queue variables.
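The per-heartbeat smoothing can be illustrated as below (the smoothing factor `alpha` is an assumed parameter; the paper does not state the value Flume uses):

```python
def ema(prev, sample, alpha=0.5):
    """Exponential moving average of successive metric samples;
    alpha in (0, 1] weighs the newest sample."""
    return alpha * sample + (1 - alpha) * prev
```

Each heartbeat folds the fresh Reporting values into the running average, so a local snapshot reflects recent behaviour without keeping the full sample history.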
3.2 Local Metrics Aggregation
Hoover obtains scalability and fault tolerance by using Scribe as the communication substrate to exchange Flume metrics. Scribe is an application-level group communication system built upon Pastry, a DHT-based P2P overlay. Each Pastry node is assigned a nodeId based on its IP address. Pastry routes messages to a given node by forwarding them to another node with nodeId numerically closest to the destination node. Every node maintains a small routing table with \( O(\log_2 n) \) entries, which implies that Pastry can route messages in \( O(\log_2 n) \) hops. Scribe nodes create and subscribe to multicast groups, called Topics. Subscribed members can publish messages to a topic, which will then be distributed in a multicast tree to all subscribed nodes. The root node of the multicast tree is the Pastry node with nodeId closest to the topic name. A new node can subscribe to the topic by computing the key from the topic name and then using Pastry to route a subscribe message towards the root node. When a Pastry node receives a subscribe message from another node, it adds the node ID to its list of children and begins acting as a forwarder of the topic. If the Pastry node is already a member of the same group, it stops forwarding the subscribe message. Fault tolerance is achieved via timeouts and keep-alive messages. Specifically, if a child does not hear from its parent for some timeout period, it sends a new subscribe message to the root and is spliced into the multicast tree. When the parent does not hear from its child, it removes that node from its list of children. Scribe overcomes root-node failure by moving the root to the Pastry node with the next numerically closest nodeId to the key computed from the topic name.
3.3 Global Metrics Exchange
Once the group snapshots have been computed by the agent and collector groups, they should be analyzed globally to determine the overall health of Flume components. Toward this end, the group snapshots from both the agent and collector groups are merged to give a single update vector of queue variables for the entire network of queues, called the global snapshot. The global snapshot is then passed to a regression model, which in turn uses them to predict the number of collectors required to handle the current aggregate volume of data.
We employ an independent node, called the aggregator, which is separate from the Flume system, to combine metrics and perform prediction. For example, let \( m \) and \( n \) be the number of agents and collectors in a data flow when it is polled at time \( t_n \). Let the local queue statistics of every node be \( < \lambda, \mu, \sigma > \), where \( \lambda, \mu, \sigma \) denote the average input rate, average output rate, and average queue length of that node between two successive pollings, \( t_n - t_{n-1} \). The group snapshot of agent nodes is \( < \lambda_m, \mu_m, \sigma_m > \), where \( \lambda_m = \sum_{i=1}^{m} \lambda_i \), \( \mu_m = \sum_{i=1}^{m} \mu_i \) and \( \sigma_m = \sum_{i=1}^{m} \sigma_i/m \). Similarly, the group snapshot of collector nodes is \( < \lambda_c, \mu_c, \sigma_c > \), where \( \lambda_c = \sum_{i=1}^{n} \lambda_i \), \( \mu_c = \sum_{i=1}^{n} \mu_i \) and \( \sigma_c = \sum_{i=1}^{n} \sigma_i/n \). The merged global snapshot sent to the aggregator is \( < \lambda_m, \mu_c, \sigma_m + \sigma_c > \). The snapshot of every logical node is adjusted to the configured polling period \( p \) by linear interpolation or extrapolation if the gap between two successive polling times, \( t_n - t_{n-1} \), is not equal to \( p \).
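Under our reading of the formulas above (rates are summed, queue lengths averaged, and the merged snapshot carries the agent input rate, the collector output rate, and the combined queue length), the aggregator-side merge can be sketched as:

```python
def group_snapshot(nodes):
    """nodes: list of (lam, mu, sigma) local statistics.
    Returns (sum of input rates, sum of output rates,
    average queue length) for the group."""
    lam = sum(n[0] for n in nodes)
    mu = sum(n[1] for n in nodes)
    sigma = sum(n[2] for n in nodes) / len(nodes)
    return lam, mu, sigma

def global_snapshot(agents, collectors):
    """Merge the agent and collector group snapshots into the
    global snapshot sent to the aggregator."""
    lam_a, _, sigma_a = group_snapshot(agents)
    _, mu_c, sigma_c = group_snapshot(collectors)
    return lam_a, mu_c, sigma_a + sigma_c
```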
A global metrics exchange is initiated using a single pull operation, which sends an anycast message to the agent group. One of the agents receives the anycast message and computes the group snapshot from its in-memory cache. The agent group snapshot is then transmitted to the collector group. One of the collectors computes its group snapshot and merges it with the agent group snapshot to form the global snapshot of the entire queueing network, and then sends another anycast message to the aggregator with the global snapshot. Using anycast permits the aggregator to remain agnostic of Flume node failures and reconfiguration, reduces overall scribe message traffic, and avoids the need for procedures that manage global metric aggregation in lieu of node failures (e.g., assigning coordinators, dealing with failover, etc.).
3.4 Dynamic Instance Tuning
The aggregator applies the global snapshot collected during every polling period to a statistical analysis model used for auto-scaling. The experiments reported in this paper use the Secant root finding method to automatically scale the number of collectors based on the current health of the Flume subsystem. The intuition behind this method is that when the volume of events increases, the auto-scaler automatically adds more collectors to the system, thereby avoiding collector overload.
The health of the Flume system is assessed via two parameters, computed from the event output rate, event input rate, and queue size. Let \( \alpha \) be the percentage of events that must leave the system with respect to the input rate. In our experiment, we set \( \alpha \) to 90, i.e., the Flume system is considered ‘healthy’ if 90 percent of events have been processed by the collectors, \( \mu_c / (90\% \times \lambda_o) \geq 1 \). Similarly, let \( \beta \) be the maximum percentage of input events that can wait in any of the queues in the Flume subsystem, i.e., if \( \beta = 0.2 \), then the Flume system is healthy if 99.8 percent of the events have exited the Flume system, \( (1-\beta\%) \times (\sigma_0 + \sigma_c) / \lambda_o \approx 0 \). The overall health of Flume is denoted by \( f \):
\[
f = \frac{\mu_c}{\alpha \% \times \lambda_o} - \frac{(1-\beta\%) \times (\sigma_0 + \sigma_c)}{\lambda_o}
\]
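A direct transcription of this health score, with \( \alpha = 90 \) and \( \beta = 0.2 \) as in the experiment, might read:

```python
def health(lam_a, mu_c, sigma_total, alpha=90.0, beta=0.2):
    """Health score f; a value of at least 1 indicates a healthy
    Flume subsystem. lam_a: aggregate agent input rate,
    mu_c: aggregate collector output rate,
    sigma_total: combined average queue length."""
    processed = mu_c / (alpha / 100.0 * lam_a)
    waiting = (1 - beta / 100.0) * sigma_total / lam_a
    return processed - waiting
```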
When \( f \geq 1 \), the Flume subsystem is healthy. The auto-scaler’s health model computes \( f \) only after a certain volume of global queue statistics snapshots has been collected, called a window. The Secant root-finding method then uses the health scores of successive windows to predict the number of collectors for the next phase:
\[
x_{n+1} = x_n + (1 - f(x_n)) \times \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}
\]
In order to make this method robust, the following constraints have also been applied.
- If \( f(x_n) \geq \alpha \), then \( x_{n+1} = x_n \).
- If \( x_{n+1} \geq x_n + \gamma \) and \( f(x_{n+1}) \geq \alpha \), then \( x_{n+1} = x_n + \gamma \), where \( \gamma \) is a constant.
- If \( f(x_n) \geq \gamma \), then \( x_{n+1} = x_n + \gamma \).
This step also initiates scale-down when there is a larger number of collectors than is needed to handle some volume of input events.
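One Secant iteration toward the healthy operating point \( f(x) = 1 \) can be sketched as follows (the rounding, the lower bound of one collector, and the flat-window guard are our assumptions):

```python
def secant_step(x_n, x_prev, f_n, f_prev):
    """Predict the next collector count from the two most recent
    (collector count, health score) pairs."""
    if f_n == f_prev:          # flat window: avoid division by zero
        return x_n
    x_next = x_n + (1 - f_n) * (x_n - x_prev) / (f_n - f_prev)
    return max(1, round(x_next))   # keep at least one collector
```

For a health score that grows roughly linearly with the collector count, a few phases suffice to land on a healthy configuration; for noisier scores, the constraints listed above keep the step size bounded.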
Auto-scaling is implemented with an instance tuner composed of a Thrift client in the aggregator node and a Thrift server in the Flume Master. The Thrift server exposes interfaces to get and set the required number of collectors in the Flume system. The Flume Master is responsible for activating and deactivating collectors when one of the clients sets a new instance count for a particular flow. The Master has pluggable Translation Managers, which transform or translate complex logical node configurations into compositions of simpler source and sink elements.
Flume already supports a Failover Translation Manager that translates certain source elements into a random list of collector nodes that are used as failover chains in logical nodes. These nodes are chosen from a master list of registered collectors. We have modified the Failover Translation Manager to support dynamic modification of the master list. The translation manager registers nodes with source element ‘autoCollectorSource’ into the master’s collector list. If \( x \) is the number of collector instances set by an instance tuner client, then the translator picks the first \( x \) collectors from the master collector list and places them into a Consistent Hash Table with replication. The key for each collector is computed from its hostname and a random index number. The elements in the hash table are adjusted when the instance count changes. The sink element ‘autoSink’ is translated into a list of three collectors chosen from the hash table such that they are numerically closest to the hostname of the logical node being configured. Since the elements in the hash table are replicated with different key values, the data load on the collectors is evenly distributed.
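The replicated consistent-hash placement can be illustrated with the following sketch (hypothetical Python, not the actual translation-manager code; the MD5 ring and the replica naming scheme are our assumptions):

```python
import hashlib

def ring_key(name):
    """Map a name onto the 128-bit hash ring."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

def build_ring(collectors, replicas=3):
    """Hash each collector under several replica keys so load
    spreads evenly around the ring."""
    ring = []
    for host in collectors:
        for i in range(replicas):
            ring.append((ring_key(f"{host}#{i}"), host))
    ring.sort()
    return ring

def failover_chain(ring, node_host, chain_len=3):
    """Pick chain_len distinct collectors whose ring positions are
    closest (clockwise) to the logical node's hostname."""
    key = ring_key(node_host)
    ordered = sorted(ring, key=lambda kv: (kv[0] - key) % (1 << 128))
    chain = []
    for _, host in ordered:
        if host not in chain:
            chain.append(host)
        if len(chain) == chain_len:
            break
    return chain
```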
4. EXPERIMENTAL EVALUATION
Hoover is evaluated over a Flume system with 50 agents and 10 collectors. Every agent has a load generator program that writes events of a given size to a set of files in a designated folder at a given rate. The agents read these files as input events using the ‘TailDir’ source. Agents are configured with ‘autoBEChain’, a
sink element which automatically switches to a different collector in case of failure, thus guaranteeing best-effort delivery. Collector nodes are configured to send events to HBase, which periodically writes the events to HDFS. Our setup used 6 HBase region servers and HDFS data nodes. A single aggregator is used to collect global statistics for the default flow. Global snapshots are analyzed by a regression module, which predicts the expected number of collectors using the Secant root-finding method based on the health of the Flume subsystem, computed as explained above. The method is not described in further detail because our purpose in this paper is to demonstrate the viability of auto-scaling enabled by the monitoring and metric analysis capabilities of Project Hoover.
4.1 Aggregation and Round Trip Delays
The average aggregation time within a group is measured by recording the start time in the local Flume node and storing the difference between the receive time and start time in every receiver as part of the group snapshot. The group snapshot also records the number of local snapshots contained in it. The Round Trip Time (RTT) is calculated by recording the difference between the start time of the anycast probe message and the time when a global snapshot is received.
The following graph shows the average aggregation time and RTT in milliseconds within a group for a small volume of events, each of size 100 Bytes, generated at the rate of 1000 events/second. The scalability of the Pastry network can be verified by the results from [14]. Every Scribe node has a maximum of 6.2 children for a Pastry network size of 100,000 nodes. The aggregation time is in the order of tens of milliseconds. However, our current design uses a push model by multicasting updates to all members of the group. Hence, the number of local snapshots contained in a group snapshot shows some inconsistencies. This can be solved by using direct messaging instead of a group multicast, where each Flume node sends its local snapshot to its parent, which is then cascaded up to the root of the multicast tree [15]. The root can then multicast the final group snapshot to all other members.
4.2 Auto-Scaling
In order to evaluate the utility of Project Hoover for online control, we use it for online Flume auto-scaling, evaluated with three independent runs with varying numbers of agents and collectors in the system. The appropriate number of collectors in the system is calculated in a sequence of phases. Initially, every flow is given a single collector to handle all of the agents. In each phase $n$, global snapshots, health scores $f(x_n)$, the current number of collectors $x_n$ and the predicted number of collectors $x_{n+1}$ for the next phase are recorded. The graphs below show that our simple auto-scaler provides a good approximation for agent and collector combinations that have high average queue lengths and low health scores in the initial phases. With this behavior, a system with lower health scores is scaled much faster than a system with higher health scores.
Figure 5 shows how the auto-scaler gradually increases the system’s health scores for varying numbers of agents and collectors. However, when pursuing a higher health score conflicts with the goal of also improving throughput, Project Hoover’s auto-scaler will strike a balance between health score and system throughput. As shown in the yellow line of Figure 5, initially the system is regarded as quite healthy with 20 agents and 1 collector $(f(x_n) = 0.99991741)$. However, the throughput is quite low for phase $p_1$, so the auto-scaler increases the number of collectors even though this then causes a decrease in health.
Figure 6 shows that the auto-scaler balances the average queue lengths of the system in order to obtain increased throughput. When new collectors are added to the system, the additional load will be distributed across a larger number of collectors, so that events that were previously buffered in the agents and collectors can be processed more quickly, thereby improving Flume’s overall health.
Figure 7. Graph showing the number of active collectors at the beginning of every phase. The last reading in the graph shows the predicted number of collectors for phase p4, to be added or removed at the end of p4.
Figure 8 shows that the average queue length of the system decreases as the predicted number of collectors is added to the system at the end of each phase. The yellow line in Figure 8 shows an increase in queue length at the beginning of phase p3. This instability is reflected in the overall health of the system at the beginning of phase p4, as shown in Figure 5. In response, the auto-scaler adds two more collectors to the system, as shown in Figure 7. This improves the system’s health and also decreases the average queue length.
Figure 8. Graph showing the average queue length of the system at the beginning of every phase.
5. CONCLUSIONS AND FUTURE WORK
Project Hoover is scalable monitoring and aggregation middleware for event collection and aggregation systems like Flume. Its utility is demonstrated with methods that automatically right-scale the number of Flume collectors in the system depending on evaluations of overall system health. The auto-scaler models every Flume node as a queue and computes queue statistics such as average input/output rates and average queue length.
Project Hoover benefits the Flume log processing system because it permits the implementation of auto-scaling methods that can prevent the volume of input logs from overwhelming the intermediate processing nodes, called ‘collectors’, thereby maintaining the overall health of the Flume OG. Hoover’s simple design can be realized in a scalable fashion, without requiring changes to the applications being used or to the underlying data center hardware/software. It has three unique characteristics:
First, Hoover uses a publish/subscribe based multicast and anycast mechanism deployed separately from the application (i.e., Flume in this paper), which aggregates local queue statistics to obtain a global snapshot of those statistics. Separation opens the door to deploying Hoover with a variety of datacenter codes, which we will demonstrate in our future work.
Second, Hoover has a dedicated component, called Aggregator, which periodically polls the members of the flow using anycast messages and gathers global snapshot data. Performance evaluations measuring aggregation delays within a group and across groups show that the approach is scalable for large group sizes.
Hoover’s framework is fully implemented, but additional work is required to better predict the number of active collectors for varying loads. This includes (1) analyzing the distribution of real live log generation, e.g., Poisson or Normal distributions, and simulating it with a log generator; (2) employing mature queuing theory for better prediction of the number of collectors; and (3) devising additional methods for load balancing collectors, particularly when they perform computationally expensive tasks.
6. REFERENCES


The open architecture of WinALT
M. Ostapkevich
The necessity of an open architecture for the fine-grained parallel model simulating system WinALT is discussed. A description of the WinALT open architecture and its external module interfaces is given. WinALT consists of the language and graphical user interface subsystems and the kernel. The extensibility of WinALT is implemented in the kernel by a number of interfaces. The principal ones are the object file format support interface and those for language and graphical subsystem extensibility. A number of samples are given to clarify the usage of all these interfaces.
1. Introduction
1.1. Fine grain simulating system WinALT
WinALT is a fine grain simulating system. It is intended to simulate complex dynamic systems, such as digital electronic devices (associative or systolic structures and 3D pipeline units) and physical and biological systems, which are represented by cellular automata, neural and cellular-neural networks. It is based on the concept named the Parallel Substitution Algorithm (PSA) [1]. The general description of WinALT and the justification of its main features were presented in [2]. In this article a detailed description of the WinALT open architecture is given. The system has language and graphical means to execute, debug and examine cellular algorithms. Unlike its ancestors, the system can be modified by a user, as it has an open architecture. This article gives a brief description of open systems, lists the advantages of the open system concept for a Parallel Substitution Algorithm based simulating system, discusses the means of WinALT extensibility, scalability and interoperability in detail, and describes the WinALT interfaces together with program samples which use these interfaces.
1.2. The concept of the open architecture
N. Wirth in [3] mentioned the monolithic design of applications as one of the main causes of the tendency of code size to grow. The term monolithic implies that an indivisible and inextensible set of binary modules exists and includes all the functions of a system, whether important or useless for a certain user. A user cannot add new modules to such a system or exclude old ones from it.
An open system is the exact antithesis of a monolithic one and overcomes the faults mentioned above. The basic minimal set of functions is represented by a reduced set of modules and does not occupy much storage space. Such a system, with its ability to include and exclude external modules, may be tailored by a user.
In [4] the following distinguishing features of an open system were listed:
1. **Extensibility and scalability** is the possibility of a new function addition or an existing function modification while the rest functions remain unchanged.
2. **Portability** is the ability to be reimplemented on another platform without noticeable differences for a user.
3. **Interoperability** is the ability to communicate with other software systems.
It is worth mentioning that the concept of an open design is not unique and was not first introduced in computer science. The same idea is widely exploited in many contemporary engineering disciplines, such as architecture or mechanics. Though this approach bears other titles there, the four features listed above are typical of the results of a professionally performed design.
1.3. **Is open architecture desirable for a PSA simulating system?**
Just as the open architecture concept is useful for most software, it may also be applied to the design of a PSA simulating system. The range of PSA model complexity is rather wide. While in some cases the structure can be comprehended at one glance, other models may have a multilevel hierarchy of decomposition into submodels. Miscellaneous applications have significantly different sets of typical submodels. A PSA simulating system which includes a fixed, inextensible set (or library) of submodels has limited use even if this set is rather versatile. Thus, scalability and extensibility are highly desired features of a PSA simulating system.
The multitude of operating systems and hardware platforms and the absence of a leading platform impose the necessity of supporting more than one of them. The average lifetime of a PSA simulating system exceeds six years. The present pace of hardware development speeds up the emergence of new operating systems and their versions. In six years the set of most widely used platforms alters significantly. The failure to easily port a PSA simulating system to newer platforms may cause its untimely disappearance because of the disappearance of the single platform that it supports. The conclusion should be drawn that the portability of a PSA simulating
system is invaluable for achieving a longer lifetime and the acceptance of this system in the widest set of applications, which may be simulated with the help of PSA.
PSA modeling is often just one phase in a chain of data transformations rather than the entire process. Input data for simulation may be the output of another system, and vice versa, the results of PSA simulation may be used elsewhere. A PSA system that is able to participate in such a chain should possess such a feature of an open system as interoperability.
The presence of a user friendly interface is almost a universal requirement for contemporary software. In many applications this requirement seems redundant, especially when a program has only a small list of possible requests and responses. PSA, however, is oriented to graphical representation of both simulated objects and rules. Not only must a PSA simulating system have a convenient visualization of objects and rules, it should also supply a user with the means to visualize single and massive substitutions and collision occurrences. The experience of exploiting previously designed PSA simulating systems (such as CIM [5]) made it evident that debugging and observing a PSA algorithm without advanced visualization and a user friendly interface is inherently complicated and time consuming. Thus all four features of open systems are either highly desirable or obligatory for a PSA simulating system.
1.4. Means of PSA simulating system open architecture implementation in WinALT
Attention was paid at the WinALT design stage to meeting all four features of open systems. To implement an extensible and scalable system, WinALT is represented by a main binary executable file, constituted by the minimal set of modules required for WinALT execution, and an unlimited number of external modules.
The main binary executable file consists of a kernel and three main subsystems (Figure 1) [2]: the language, graphical user interface and visual design subsystems for PSA based models [1]. The WinALT kernel [6] is constituted by the object manager module and the external library support module, named the ACL manager. As its name suggests, the object manager implements operations with objects: load, unload, create, delete and resize an object; get and set a cell value and name. The ACL manager contains operations for external module support, such as load/unload. The WinALT external modules which use WinALT functions or functions in other WinALT external modules are called ACLs; ACL stands for “Alt C Libraries”. An ACL is actually a dynamically linked library in Win32 [7, 8, 9] or a shared library in UNIX [10].
Each external module that is intended to be used by WinALT must be registered. Whenever a user wants to exclude an external module from a WinALT installation, it is unregistered. All the external module registration information is contained in a number of WinALT configuration objects, usually located in the system directory.
All external modules provide WinALT with a standard set of interface functions. WinALT calls all the implemented operations of a certain module via these interface functions. On the other hand, WinALT provides external modules with an interface that allows the external modules to perform object transformations and other WinALT operations. The interface is implemented in the ACL manager [6].
An external module may be compiled and linked by any assembly or high level language compiler and linker, or by an integrated development environment. The source of the module interface functions is required; it may be accompanied by an unlimited number of other functions and source files. The WinALT library named aclstd.lib must be linked with each external module. A sample sequence of compilation and linkage stages for C source texts is given in Figure 2.
Whenever a certain internal or external WinALT module needs to utilize an operation that is implemented in another external module, the ACL manager checks whether the required module is already loaded. If it does not yet reside in memory, the ACL manager loads it; otherwise it increases the counter of its users. When the module is no longer used, its user counter is decreased. When the counter reaches zero, the ACL manager unloads the module.
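The reference counting described above can be sketched as follows. This is a toy model under our own assumptions: the structure and function names are invented, and a real manager would load the library through dlopen()/LoadLibrary() instead of allocating a record:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical bookkeeping behind the ACL manager's load/unload. */
typedef struct Acl {
    char name[64];
    int  refcount;       /* number of current users of the module */
    struct Acl *next;
} Acl;

static Acl *loaded = NULL;   /* modules currently resident in memory */

/* Load the module if absent, otherwise bump its user counter. */
static Acl *acl_acquire(const char *name) {
    for (Acl *a = loaded; a; a = a->next)
        if (strcmp(a->name, name) == 0) { a->refcount++; return a; }
    Acl *a = calloc(1, sizeof *a);   /* a real manager would dlopen() here */
    strncpy(a->name, name, sizeof a->name - 1);
    a->refcount = 1;
    a->next = loaded;
    loaded = a;
    return a;
}

/* Drop one user; unlink and free the module once nobody uses it. */
static void acl_release(Acl *mod) {
    if (--mod->refcount > 0) return;
    for (Acl **p = &loaded; *p; p = &(*p)->next)
        if (*p == mod) { *p = mod->next; free(mod); return; }
}
```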
WinALT was designed as a modular system. This allows building simplified WinALT versions, e.g., versions without the debug mode, the graphical user's interface, etc. While such modules as the graphical user's interface implementation depend considerably on a certain platform, the interpreter modules were designed to be portable. The WinALT language subsystem [6] uses only the standard ANSI C [11] library and other WinALT interfaces and thus
can be easily ported to other platforms. The ACL manager and object manager [6] constitute the WinALT kernel; their implementation relies not only on the ANSI C library but also on some system functions, such as dynamically loaded libraries, memory mapped files and so on. But porting the kernel is not a complicated task either, as the system functions mentioned above are present in virtually all contemporary operating systems. With the help of conditional compilation, a single WinALT kernel source exists for all platforms. The most platform dependent pieces of code were gathered in a single module, which is to be rewritten for each new platform. Currently WinALT implementations exist on the Win32 and Linux platforms.
The interoperability of WinALT is maintained primarily in the object manager, as the main thing WinALT should share with other applications is the cellular object, the central WinALT data object. The object manager has the ability to adopt new file formats for WinALT objects. The manager’s design is examined in detail in Section 4 of this article.
The last important feature of an open system is a user friendly interface. WinALT has an advanced graphical user's interface (GUI), which was described in [12]. The interface allows a user to view and edit objects and sources in a number of modes. In the debug mode a user gets a comprehensive set of tools to investigate the behaviour of an algorithm execution, to localize an error, to examine collision occurrences and so on. In Section 5 a description of the means of WinALT GUI extensibility is given.
2. Interface for external modules
The set of implemented functions differs between modules. Thus, a unified approach is required to retrieve these sets. Any ACL must have an exportable function named ReqService, which takes four parameters. The first parameter bears the number of the issued request. The remaining parameters are pointers, which have different meanings for different requests; for some requests these parameters are ignored. Having just a single function that dispatches calls to other functions inside a module makes it possible to extend interfaces without modifying the set of exportable functions. On the other hand, this approach implies an overhead on every function call. To improve performance, a few extensively used external functions are called directly, without the ReqService dispatcher function. The functions for retrieving and storing cell values are the best examples of such an exception. Besides the necessity of implementing the dispatcher function, different types of external modules impose their own specific limitations on the interface.
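A module-side dispatcher of this shape might look as follows. The request numbers and their behavior are invented here for illustration; the real set is fixed by WinALT's headers:

```c
/* Hypothetical request numbers (the real ones come from WinALT headers). */
enum { REQ_GET_VERSION = 1, REQ_INIT = 2, REQ_DONE = 3 };

/* Single exported dispatcher: new requests can be added later without
 * changing the module's export table. */
int ReqService(int request, void *p1, void *p2, void *p3) {
    (void)p2; (void)p3;                 /* unused by these sample requests */
    switch (request) {
    case REQ_GET_VERSION:
        *(int *)p1 = 0x0100;            /* report module version 1.0 */
        return 0;
    case REQ_INIT:                      /* e.g. store the pointer to
                                           UA_SubmitServiceToACL */
    case REQ_DONE:
        return 0;
    default:
        return -1;                      /* unknown request number */
    }
}
```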
The interface discussed above allows WinALT to call functions of external modules. But another way of interaction between WinALT and external modules exists, when an external module calls a WinALT function. For example, a module may ask WinALT to create, modify or delete certain objects. Some exotic modules may even want to ask WinALT to terminate. Modules that depend on the WinALT version need to retrieve the version number, and so on. This type of interaction is implemented by the WinALT language subsystem interface. The interface is reachable via the UA_SubmitServiceToACL function. Just as ReqService, it accepts a request number and three pointers. A pointer to this function is always sent to a module at the initialization stage, and it should be kept as long as the module has to handle WinALT objects or influence WinALT in other ways.
3. WinALT language extensibility
The same part of a model can often be used in more than one model. Means for its reuse help to avoid redundant reimplementation. It is well known that coding is much less time consuming than testing and debugging, and each new implementation brings new errors. Instead of intensively exploiting one well tested library, a user would have to debug a new one.
In WinALT, source code written in a certain high level language may be called from a simulating program. Such an import may be performed either for a source text or for a compiled library. In both cases this source or binary file is linked with the interface library aclstd.lib, and the linker creates an ACL library. Such a library may be loaded by a request in a WinALT program (Figure 3). The WinALT language provides two statements for library loading: import and use. Import makes interface functions visible by long names with explicit specification of the ACL name, library::function, for example, math::sqrt. The use statement enables short names, for example, sqrt for the square root function in math.acl.
Figure 3. Calling an ACL function from a WinALT program

All the functions that should be visible in WinALT must be declared as interface functions. If a function is to be exported without modifications in its source (for example, a function from the ANSI C standard library), a gate function should be implemented. The gate function is declared just like any other interface function. All it has to do is to call the real meaningful function. A gate function may be written manually or generated automatically from its prototype by the stg utility. Prototypes for stg are similar to those in C, with a few exceptions. They have the following syntax:
    result_type function_name(type1 name1, type2 name2, ..., typeN nameN);
Here function_name, name1, name2, ..., nameN are valid C identifiers; result_type, type1, type2, ..., typeN are valid WinALT types (Table 1); function_name should be the name of an existing function. A formal parameter name may be omitted. The number of parameters has to coincide with that of the function named by function_name. The ACL xc types are listed in Table 1. Examples of ANSI C prototypes and their respective xc prototypes are presented in Table 2. Comments are not allowed in xc files. For the sake of simplified implementation, each prototype is placed on a single line. The last line contains the ‘#’ character.
### Table 1. Relations between ACL, ANSI C, and Win32 types
<table>
<thead>
<tr>
<th>xc type</th>
<th>C type</th>
<th>Win32 type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>int</td>
<td>int</td>
<td>INT</td>
<td>integer</td>
</tr>
<tr>
<td>string</td>
<td>char*</td>
<td>LPSTR</td>
<td>ASCII string</td>
</tr>
<tr>
<td>boolean</td>
<td>int</td>
<td>BOOL</td>
<td>logical</td>
</tr>
<tr>
<td>float</td>
<td>float</td>
<td>FLOAT</td>
<td>floating point value</td>
</tr>
<tr>
<td>void</td>
<td></td>
<td></td>
<td>error type</td>
</tr>
<tr>
<td>char</td>
<td>char</td>
<td>CHAR</td>
<td>ASCII character</td>
</tr>
</tbody>
</table>
### Table 2. ANSI C and ACL prototypes
<table>
<thead>
<tr>
<th>ANSI C prototype</th>
<th>xc prototype</th>
</tr>
</thead>
<tbody>
<tr>
<td>void func1(void);</td>
<td>void func1();</td>
</tr>
<tr>
<td>VOID func1(VOID);</td>
<td>void func1();</td>
</tr>
<tr>
<td>double sin(double);</td>
<td>float sin(float);</td>
</tr>
<tr>
<td>char* strcat(char*, char*);</td>
<td>string strcat(string, string);</td>
</tr>
</tbody>
</table>
In many cases, when only import into WinALT is required, the stg utility is sufficient. However, it is often necessary to handle WinALT objects from an ACL library. In such a case it is possible either to utilize the WinALT interface directly with the help of macro definitions from the aclstd.h header file or to use aclstd.lib functions.
ACL(ILF_MarkBorder1) /* the implementation */
{
    LPACL_OBJ objImage;   /* object with the image to work with */
    LPACL_OBJ bufImage;   /* keeps new pixel values until synchronization */
    LPSTR tmpName;
    INT xIdx, yIdx, xSize, ySize;

    ACL_ReturnInteger();  /* notify that the return value is integer */

    /* check that the parameters passed have valid types */
    if ((ACL_PARAM_TYPE(0) != RT_TYPE_LPSTR) ||
        (ACL_PARAM_TYPE(1) != RT_TYPE_INT) ||
        (ACL_PARAM_TYPE(2) != RT_TYPE_INT))
        return ILF_ERR_INCORRECT_PARAM_TYPE;

    objImage = AS_StrToPtr(ACL_PARAM(0));   /* get a pointer to the object */
    if ((objImage == NULL) || (objImage == (LPVOID)-1))
        return ILF_ERR_OBJ_NOT_FOUND;

    tmpName = AS_GetUniqueObjName();        /* unique name for the buffer object */
    ACL_GetObjSize(objImage, xSize, ySize); /* size of the source object */
    bufImage = ACL_CreateObj(tmpName, xSize, ySize);  /* create the buffer object */

    for (yIdx = 1; yIdx < ySize - 1; yIdx++)
        for (xIdx = 1; xIdx < xSize - 1; xIdx++) {
            INT value;
            BOOL bEqual = TRUE;
            value = ACL_GetIntCellValue(objImage, xIdx, yIdx);
            if ((value != ACL_GetIntCellValue(objImage, xIdx + 1, yIdx)) ||
                (value != ACL_GetIntCellValue(objImage, xIdx, yIdx + 1)) ||
                (value != ACL_GetIntCellValue(objImage, xIdx + 1, yIdx + 1)) ||
                (value != ACL_GetIntCellValue(objImage, xIdx - 1, yIdx)) ||
                (value != ACL_GetIntCellValue(objImage, xIdx, yIdx - 1)))
                bEqual = FALSE;
            /* store ACL_PARAM(2) if the point belongs to the border,
             * ACL_PARAM(1) otherwise */
            ACL_SetIntCellValue(bufImage, bEqual ? ACL_PARAM(1) : ACL_PARAM(2),
                                xIdx, yIdx);
        }

    /* local function that copies all cells from bufImage to objImage */
    SyncChanges(objImage, bufImage);
    return ILF_OK;
}

Figure 4. An ACL function implementation sample
A sample ACL function, which marks the contour of a 2D image, is depicted in Figure 4. A pixel (a cell) is considered part of the contour if the value of at least one of its neighbour cells differs from that of the pixel. The source, intermediate and contoured images are shown in Figure 5.
4. Custom object file format support
Nowadays a lot of miscellaneous file formats for different types of information exist. Some of these specifications, such as GIF, are revised periodically. Spontaneously or for convenience, different applications have adopted different standard formats. A WinALT user would not feel comfortable being forced to utilize a fixed set of formats without the ability to add a new one.
The WinALT object manager [6] supports multiple file formats. Each file format is represented by a so-called object driver, which is a dynamically linked library with a specific set of exported functions. The WinALT package contains a number of such drivers. Some of them support internal object formats; others implement widely used file formats, such as BMP.
A user may implement his own object drivers to enable the usage of a custom file format. Object driver registration may be performed either via the WinALT object driver registration dialogue or by modifying and executing config.src, located in the WinALT BIN directory. After registration the WinALT configuration contains the information about the new object driver (the driver's path, object prefix and file extension). When the object manager encounters an object with a prefix or file name extension relevant to an object driver, it loads the driver if it was not loaded before. When no more objects of a certain type reside in memory, their driver is unloaded by the object manager. Each object driver has a number of obligatory functions: load and unload an object, read and write a cell value and name, get the object dimensions, and report the amount of memory required to keep an object with specified dimensions.
Currently the following set of object drivers is implemented in WinALT:
1) the default driver;
2) the driver for objects with 1, 2 and 4 byte integer cells;
3) the driver for 1 bit logical cell values (suits for classical cellular object simulating [13]);
4) a number of object drivers for some subformats of BMP format.
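The obligatory driver entry points listed above might be gathered as follows. This is a hypothetical sketch: the real interface is a fixed set of functions exported by the driver library, and all names here are ours:

```c
#include <stddef.h>

/* Hypothetical shape of an object driver's obligatory entry points. */
typedef struct ObjectDriver {
    int    (*load)(const char *path, void **obj);
    void   (*unload)(void *obj);
    int    (*get_cell)(void *obj, int x, int y, int *value);
    int    (*set_cell)(void *obj, int x, int y, int value);
    void   (*get_dims)(void *obj, int *xsize, int *ysize);
    size_t (*mem_required)(int xsize, int ysize);
} ObjectDriver;

/* For instance, the driver for 1-byte integer cells would report: */
static size_t bytes_for_1byte_cells(int xsize, int ysize) {
    return (size_t)xsize * (size_t)ysize;   /* one byte per cell */
}
```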
5. WinALT graphical user's interface extensibility
5.1. General description
PSA is applicable to a versatile set of real problems, and WinALT as a PSA based system should cover most of them. But the differences do not reside only in the set of typical submodels or the most widely accepted file formats for keeping objects for a certain problem. Different tasks also require their own way of graphical representation of source, intermediate or resulting objects.
The best form of visualization comes from a tight connection with the real physical process in a model. For example, the results of sound or seismic signal filtering have one dimension and a relatively large set of discrete values. Thus, the best way to represent such data is to show the dependence of the signal level on time. The location on the X axis denotes the position or time, while that on the Y axis shows the signal value. Colour may be used to place a number of such signals (or objects, in PSA terminology) into the same region of the screen. The visualization of images evidently should have two dimensions, with the colour of each point denoting its value. PSA models for digital device simulation may require 2D or 3D visualization. Depending
on a cell complexity, a value may be showed as a colour square or a certain text in a rectangular area.
Other PSA applications may impose further demands and peculiarities on how to show a cellular object. Thus, the ability to include and use new modes of object visualization is an essential feature of a PSA-based system, because it determines the system's overall fitness for a given application.
In WinALT, support for custom object visualization modes is implemented in a module named the Object Visualization Engine.
5.2. Structure of the object visualization engine
The object visualization engine (OVE) is part of the WinALT GUI subsystem. It is built on top of the WinALT object manager. The module also makes extensive use of Win32 API GUI functions, because its main purpose is visualization. The OVE is activated by other modules of the WinALT GUI subsystem and by the language subsystem.
The principal data structure of the module is the visual object. It includes an object manager logical object as one of its parts. The other parts of this structure reflect miscellaneous parameters of visualization and object editing: for example, the location and size on the screen, an "undo" buffer to cancel recent modifications of an object, and the visibility of the object name and of the ruler (the rectangular region at the top and left edges of a visual object whose only purpose is to show the coordinates on the X and Y axes).
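As far as it can be reconstructed from the code samples in Figures 6 and 7, the visual object carries at least the fields sketched below. This C sketch is our own reading of those samples, not the actual WinALT declaration.

```c
#include <assert.h>

/* Fields of the visual object as inferred from the OVD code samples;
 * a hypothetical reconstruction, not the real WinALT header. */
struct Pos { int x, y, z; };

typedef struct VisObj {
    void      *plo;        /* object manager logical object         */
    struct Pos posWin;     /* position of the client area on screen */
    struct Pos sizeWin;    /* size of the client area in pixels     */
    struct Pos sizeUnit;   /* on-screen size of a single cell       */
    struct Pos posScroll;  /* first visible cell (scrolling offset) */
} VisObj;

/* The xMax/yMax computation in OVD_Paint reduces to this helper:
 * the index just past the last visible cell, clipped to the object size. */
static int visible_limit(int sizeWinPx, int sizeUnitPx, int scroll, int objSize)
{
    int max = sizeWinPx / sizeUnitPx + scroll;
    return max < objSize ? max : objSize;
}
```

With a 100-pixel-wide client area, 10-pixel cells, and a scroll offset of 2, cells 2 through 11 of a sufficiently large object are visible.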
5.3. Modes of visualization and external mode support libraries
Each visual object is shown in a certain mode of visualization. One such mode is supported by an ACL named an OVD (Object Visualization Driver). It uses the same interfaces as any other ACL library. The painted image in a window is the result of the cooperative work of the OVE and one or several OVD libraries. A window can contain a single visual object in viewer or edit mode, or a collection of objects in viewer mode. Each object is located in a rectangular region, which is divided into client and non-client areas. The former is painted by the OVD, while the latter is created by the OVE and includes the title, frame, ruler, and an edit string at the bottom. The title is used only when a window contains multiple objects.
An OVD may have a number of optional exportable functions, but only one is obligatory. Its symbolic name is OVD_Paint, and it is responsible for painting the client area of the visual object specified as a parameter in a window. It is the responsibility of the OVD not to draw beyond the borders of the client area. The source text in Figure 6 shows a simplified version of OVD_Paint, which paints integer cell values for a 2D visual object or a
__declspec(dllexport) VOID OVD_Paint(PVISOBJ pvo, PDRAWINFO hdc)
{
    INT xSize, ySize, zSize, xMax, yMax, xCurCoord, yCurCoord;
    LPACL_CELL pcell;
    struct Pos pos;
    CHAR buf[8];
    ACL_GetObjSize(pvo->plo, &xSize, &ySize, &zSize);
    pos.z = pvo->posScroll.z;
    xMax = min(xSize, pvo->sizeWin.x / pvo->sizeUnit.x + pvo->posScroll.x);
    yMax = min(ySize, pvo->sizeWin.y / pvo->sizeUnit.y + pvo->posScroll.y);
    for(pos.y = pvo->posScroll.y, yCurCoord = pvo->posWin.y; pos.y < yMax;
        pos.y++, yCurCoord += Y_CELL)
    {
        for(pos.x = pvo->posScroll.x, xCurCoord = pvo->posWin.x; pos.x < xMax;
            pos.x++, xCurCoord += pvo->sizeUnit.x)
        {
            /* retrieval of pcell for the current pos is omitted
               in this simplified version */
            sprintf(buf, "%d", pcell->value);
            TextOut(hdc, xCurCoord, yCurCoord, buf, strlen(buf));
        }
    }
}
Figure 6. OVD_Paint implementation sample
__declspec(dllexport) LPVOID OVD_Request(PVISOBJ pvo, INT req, LPVOID p1,
                                         LPVOID p2)
{
    struct Pos pos;
    switch(req)
    {
    ...
    case OVD_REQ_CORRECT_SIZES:
        ACL_GetObjSize(pvo->plo, &pos.x, &pos.y, &pos.z);
        if(pvo->sizeWin.x < X_CELL) pvo->sizeWin.x = X_CELL;
        if(pvo->sizeWin.y < Y_CELL) pvo->sizeWin.y = Y_CELL;
        pvo->sizeUnit.x = pvo->sizeWin.x / pos.x;
        if(pvo->sizeUnit.x < X_CELL) pvo->sizeUnit.x = X_CELL;
        else pvo->sizeWin.x = pvo->sizeUnit.x * pos.x;
        if(pvo->sizeWin.y > (pvo->sizeUnit.y * pos.y))
            pvo->sizeWin.y = pvo->sizeUnit.y * pos.y;
        return (LPVOID)1;
    ...
    }
}
Figure 7. OVD_Request implementation sample
layer of a 3D object. xCurCoord and yCurCoord denote logical coordinates in the window; pos is a structure that contains the position of a cell in the object.
**OVD_Request** is an optional OVD function that allows OVE settings to be adjusted. For example, by default the cell scale varies from 1 to 1024 pixels, but for some modes it is senseless to decrease the scale below a certain value, and for others the standard ruler is useless. Modifications of scale and size, or altering the visibility of the vertical or horizontal ruler, may be implemented in **OVD_Request**. The source text in Figure 7 demonstrates a partial implementation of **OVD_Request** that corrects the sizes of a visual object. This sample has fixed minimal sizes for the X axis (X_CELL) and the Y axis (Y_CELL). It limits the maximal height, but the width is unlimited: for each increment of the width, it increases the X-axis cell size.
### 5.4. Other means of graphical user interface extensibility
The WinALT GUI subsystem has an API that allows child windows to be created and managed in the main WinALT window. This API is available to external modules via the ACL interface. Thus, with the help of a suitable ACL, a WinALT program may perform any Win32 GUI operation. Should anything arise that goes beyond the limits of rectangular cellular objects or the concept of visual modes, it can be implemented with the help of this API. This API is also supported in the simplified WinALT versions under Win32.
Sequential algorithms and innocent strategies share the same execution mechanism
Pierre-Louis Curien
(IRIF, πr², CNRS – Paris 7 – INRIA)
April 2019, Galop workshop, Prague
PLAN of the TALK
1. Geometric abstract machine “in the abstract” : tree interaction and pointer interaction (designed in the setting of Curien-Herbelin’s abstract Böhm trees)
2. Turbo-reminder on sequential algorithms (3 flavours, with focus on two : as programs, and abstract)
3. Geometric abstract machine in action
4. Turbo-reminder on HO innocent strategies for PCF types (2 flavours, “meager and fat” = views versus plays)
5. Geometric abstract machine in action
6. (Inconclusive!) conclusion: the message is: "il y a quelque chose à gratter" ("there is something here worth digging into")
Tree interaction
Setting of alternating two-player games where Opponent starts. Strategies as trees (or forests) branching after each Player's move. Interaction by tree superposition:
<table>
<thead>
<tr>
<th>STRATEGIES</th>
<th>EXECUTION</th>
</tr>
</thead>
<tbody>
<tr>
<td>$x \ a \ \begin{cases} b & c \ b' & \ldots \end{cases}$</td>
<td>$\langle x, 1 \rangle \ a \ \begin{cases} \langle b, 3 \rangle & c \ b' & \ldots \end{cases}$</td>
</tr>
<tr>
<td>$a \ b \ \begin{cases} c & \ldots \ d & \ldots \end{cases}$</td>
<td>$\langle a, 2 \rangle \ b \ \begin{cases} \langle c, 4 \rangle & \ldots \ d & \ldots \end{cases}$</td>
</tr>
</tbody>
</table>
The trace of the interaction is the “common branch” $x \ a \ b \ c$:
Step $n$ of the machine, played in one of the strategies, is always followed by step $(n + 1)'$ in the same strategy. The next move $(n + 1)$ is played in the other strategy (the choice of branch is dictated by $(n + 1)'$).
Now, in addition, Player’s moves are equipped with a pointer to an ancestor Opponent’s move.
\[
\begin{align*}
&\text{STRATEGIES} & \quad & \text{EXECUTION} \\
& x \ a \ \\
& \begin{cases} b \ [c, \leftarrow] \ \\
b' \ \ldots \
\end{cases} & \quad & \langle x, 1 \rangle \ a \ \\
& \begin{cases} \langle b, 3 \rangle \ [c, \leftarrow] \ \\
\langle b', 5 \rangle \ \ldots \
\end{cases} \\
& a \ [b, \leftarrow] \ \\
& \begin{cases} c \ [b', \leftarrow] \ \\
d \ \ldots \
\end{cases} & \quad & \langle a, 2 \rangle \ [b, \leftarrow] \ \\
& \begin{cases} \langle c, 4 \rangle \ [b', \leftarrow] \ \\
d \ \ldots \
\end{cases}
\end{align*}
\]
If \((n + 1)’\) points to \(m\), then \((n + 1)\) should be played under \(m’\).
Concrete data structures
A concrete data structure (or cds) $M = (C, V, E, \vdash)$ is given by three sets $C$, $V$, and $E \subseteq C \times V$ of cells, values, and events, and a relation $\vdash$ between finite parts of $E$ (of cardinality $\leq 1$, for simplicity) and elements of $C$, called the enabling relation. We write simply $e \vdash c$ for $\{e\} \vdash c$. A cell $c$ such that $\vdash c$ is called initial.
(+ additional conditions: well-foundedness, stability)
Proofs of cells $c$ are sequences in $(CV)^*$ defined recursively as follows: If $c$ is initial, then it has an empty proof. If $(c_1, v_1) \vdash c$, and if $p_1$ is a proof of $c_1$, then $p_1 \ c_1 \ v_1$ is a proof of $c$.
Configurations (or strategies, in the game semantics terminology)
A configuration is a subset $x$ of $E$ such that :
1. $(c, v_1), (c, v_2) \in x \Rightarrow v_1 = v_2$.
2. If $(c, v) \in x$, then $x$ contains a proof of $c$.
The conditions (1) and (2) are called consistency and safety, respectively.
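As an illustration, the two conditions are easy to check mechanically. The sketch below uses our own toy encoding (events as pairs, enablings as a map from each cell to `None` for an initial cell or to its single enabling event); it is not tied to any particular implementation.

```python
def is_configuration(x, enabling):
    """Check conditions (1) consistency and (2) safety for a set x of
    (cell, value) events.  enabling maps a cell to None (initial) or to
    the single event (c, v) that enables it (enablings of cardinality <= 1)."""
    filled = {}
    for c, v in x:
        if filled.setdefault(c, v) != v:   # (1) at most one value per cell
            return False
    for c, _ in x:                         # (2) x contains a proof of c
        e = enabling[c]
        while e is not None:               # walk the enabling chain down
            if e not in x:
                return False
            e = enabling[e[0]]
    return True

# Flat cds: one initial cell "?", filled by at most one value.
assert is_configuration({("?", 3)}, {"?": None})
assert not is_configuration({("?", 3), ("?", 4)}, {"?": None})  # inconsistent
# A cell filled without its enabling event in x violates safety:
assert not is_configuration({("c1", "b")}, {"c0": None, "c1": ("c0", "a")})
```

Walking the enabling chain down to an initial cell reconstructs exactly the proofs defined above.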
The set of configurations of a cds $M$, ordered by set inclusion, is a partial order denoted by $(D(M), \leq)$ (or $(D(M), \subseteq)$).
Some terminology
Let $x$ be a set of events of a cds. A cell $c$ is called:
- filled (with $v$) in $x$ iff $(c, v) \in x$,
- accessible from $x$ iff $x$ contains an enabling of $c$, and $c$ is not filled in $x$ (notation $c \in A(x)$).
Some examples of cds’s
(1) Flat cpo’s : for any set \( X \) we have a cds
\[
X_\bot = (\{?\}, X, \{?\} \times X, \vdash) \ \text{with} \ {\vdash}\, ? \quad \text{and} \quad D(X_\bot) = \{\emptyset\} \cup \{\{(?, x)\} \mid x \in X\}
\]
Typically, we have the flat cpo \( \mathbb{N}_\bot \) of natural numbers.
(2) Any first-order signature \( \Sigma \) gives rise to a cds \( M_\Sigma \):
- cells are occurrences described by words of natural numbers,
- values are the symbols of the signature,
- all events are permitted,
- \( \vdash \epsilon \), and \( (u, f) \vdash u_i \) for all \( 1 \leq i \leq \text{arity}(f) \).
Product of two cds’s
Let $M$ and $M'$ be two cds’s. We define the product $M \times M' = (C, V, E, \vdash)$ of $M$ and $M'$ by:
- $C = \{c.1 \mid c \in C_M\} \cup \{c'.2 \mid c' \in C_{M'}\}$,
- $V = V_M \cup V_{M'}$,
- $E = \{(c.1, v) \mid (c, v) \in E_M\} \cup \{(c'.2, v') \mid (c', v') \in E_{M'}\}$,
- $(c_1.1, v_1), \ldots, (c_n.1, v_n) \vdash c.1 \Leftrightarrow (c_1, v_1), \ldots, (c_n, v_n) \vdash c$ (and similarly for $M'$).
Fact: $M \times M'$ generates $D(M) \times D(M')$.
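On a toy encoding of cds's of our own choosing (a cds as a tuple of cells, values, events, and an enabling map sending each cell to `None` or to its single enabling event), the product construction is direct; the `.1`/`.2` tagging is rendered by pairing each cell with 1 or 2. This sketch merely illustrates the definition.

```python
def product(M, Mp):
    """Product of two cds's, each given as (cells, values, events, enabling),
    with enabling mapping a cell to None or to its single enabling event."""
    (C, V, E, en), (Cp, Vp, Ep, enp) = M, Mp

    def tag(t, e):                      # retag an enabling event, if any
        return None if e is None else ((e[0], t), e[1])

    cells  = {(c, 1) for c in C} | {(c, 2) for c in Cp}
    values = V | Vp
    events = {((c, 1), v) for (c, v) in E} | {((c, 2), v) for (c, v) in Ep}
    enabling = {(c, 1): tag(1, en[c]) for c in C}
    enabling.update({(c, 2): tag(2, enp[c]) for c in Cp})
    return cells, values, events, enabling

# Product of two flat cds's.
B = ({"?"}, {"T", "F"}, {("?", "T"), ("?", "F")}, {"?": None})
N = ({"?"}, {"0", "1"}, {("?", "0"), ("?", "1")}, {"?": None})
cells, values, events, enabling = product(B, N)
```

A configuration of the product splits into the 1-tagged and 2-tagged events, which is exactly the fact that $M \times M'$ generates $D(M) \times D(M')$.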
Sequential algorithms as programs
Morphisms between two cds’s $\mathbb{M}$ and $\mathbb{M}'$ are forests described by the following formal syntax:
$$F ::= \{T_1, \ldots, T_n\}$$
$$T ::= \text{request } c' \text{ } U$$
$$U ::= \text{valof } c \text{ is } \{\ldots v \mapsto U_v \ldots\} \mid \text{output } v' \text{ } F$$
satisfying some well-formedness conditions:
— A request $c'$ can occur only if the projection on $\mathbb{M}'$ of the branch connecting it with the root is a proof of $c'$.
— Along a branch, knowledge concerning the projection on $\mathbb{M}$ is accumulated in the form of a configuration $x$, and a valof $c$ can occur only if $c$ is accessible from the current $x$. In particular, no repeated valof $c$!
Exponent of two cds’s
If $M, M'$ are two cds’s, the cds $M \to M'$ is defined as follows:
— If $x$ is a finite configuration of $M$ and $c' \in C_{M'}$, then $xc'$ is a cell of $M \to M'$.
— The values and the events are of two types:
— If $c$ is a cell of $M$, then $\text{valof } c$ is a value of $M \to M'$, and $(xc', \text{valof } c)$ is an event of $M \to M'$ iff $c$ is accessible from $x$;
— if $v'$ is a value of $M'$, then $\text{output } v'$ is a value of $M \to M'$, and $(xc', \text{output } v')$ is an event of $M \to M'$ iff $(c', v')$ is an event of $M'$.
— The enablings are given by the following rules:
\[
\begin{align*}
\vdash \emptyset c' & \quad \text{iff} \quad \vdash c' \\
(yc', \text{valof } c) \vdash xc' & \quad \text{iff} \quad x = y \cup \{(c, v)\} \\
(xd', \text{output } w') \vdash xc' & \quad \text{iff} \quad (d', w') \vdash c'
\end{align*}
\]
An example of a sequential algorithm
The following is the interpretation of
$$\lambda f. \text{case } f \text{T F}[T \rightarrow F] : (\text{bool}_{11} \times \text{bool}_{12} \rightarrow \text{bool}_{1}) \rightarrow \text{bool}_{\epsilon}$$
\[
\text{request } ?_\epsilon \ \text{valof } \bot\bot ?_1
\begin{cases}
\text{is valof } ?_{11} \ \text{valof } T\bot ?_1 \ \{\ \text{is valof } ?_{12} \ \text{valof } TF ?_1 \ \{\ \text{is output } T_1 \ \text{output } F_\epsilon \\
\text{is valof } ?_{12} \ \text{valof } \bot F ?_1 \ \{\ \text{is valof } ?_{11} \ \text{valof } TF ?_1 \ \{\ \text{is output } T_1 \ \text{output } F_\epsilon \\
\text{is output } T_1 \ \text{output } F_\epsilon
\end{cases}
\]
to be contrasted with the interpretation of the same term as a set of views in HO semantics:
\[?_\epsilon \text{?}_1 \begin{cases} ?_{11} \text{T}_{11} \\
?_{12} \text{F}_{12} \\
T_1 \text{F}_\epsilon \end{cases} \]
An example of execution of sequential algorithms
$F' : B \times M_\Sigma \rightarrow B$ explores successively the root of its second input, its first input, and the first son of its second input (if it is of the form $f(\Omega, \Omega)$) to produce $F$, while $F = \langle F_1, F_2 \rangle$, where $F_1 : M_\Sigma \rightarrow B$ (resp. $F_2 : M_\Sigma \rightarrow M_\Sigma$) produces $F$ without looking at its argument (resp. is the identity).
Branch of $F'' = F' \circ F : M_\Sigma \rightarrow B$ being built :
$$\{\langle \text{request }?, 1 \rangle \text{ valof } \epsilon \langle \text{is } f, 2 \rangle \text{ valof } 1 \langle \text{is } f, 3 \rangle \text{ output } F\}$$
Branch of $F'$ being explored :
$$\{\langle \text{request }?, 1.1 \rangle \text{ valof } \epsilon_2 \langle \text{is } f_2, 2.2 \rangle \text{ valof } ?_1 \langle \text{is } F_1, 2.4 \rangle \text{ valof } 1_2 \langle \text{is } f_2, 3.2 \rangle \text{ output } F\}$$
Branches of $F$ being explored :
$$\begin{cases}
\langle \text{request }?_1, 2.3 \rangle \text{ output } F_1 \\
\langle \text{request }\epsilon_2, 1.2 \rangle \text{ valof } \epsilon \langle \text{is } f, 2.1 \rangle \text{ output } f_2 \langle \text{request } 1_2, 2.5 \rangle \text{ valof } 1 \langle \text{is } f, 3.1 \rangle \text{ output } f_2
\end{cases}$$
Pointer interaction : $2.5'$ points to $(2.2)$, hence $2.5$ is played under $(2.2)'$. Pointers are implicit in sequential algorithms, i.e., can be uniquely reconstructed : each $\text{valof } c$ points to $\text{is } v$, where $\text{is } v$ follows $\text{valof } d$ and $(d, v) \vdash c.$
Equivalent definitions of sequential algorithms
We have 3 equivalent definitions of **sequential algorithms**:
1. as **programs** (our focus here) \(\leadsto\) **ABSTRACT MACHINE**
2. as **configurations** of \(M \to M' \leadsto\) **CART. CLOSED STRUCTURE**
3. as **abstract algorithms** (or as pairs of a function and a computation strategy for it). Abstract algorithms are the **fat** version of configurations: if \((yc', u) \in a, y \leq x, \) and \((xc', u) \in E_{M \to M'}, \) then we set \(a^+(xc') = u.\) If we spell this out (for \(y \leq x\)):
\[
\begin{align*}
(yc', \text{valof } c) \in a \text{ and } c \in A(x) \Rightarrow a^+(xc') &= \text{valof } c \\
(yc', \text{output } v') \in a \Rightarrow a^+(xc') &= \text{output } v'
\end{align*}
\]
\(\leadsto\) **“CONCEPTUAL” COMPOSITION**
Composing abstract algorithms
Let $M$, $M'$ and $M''$ be cds’s, and let $f$ and $f'$ be two abstract algorithms from $M$ to $M'$ and from $M'$ to $M''$, respectively. The function $g$, defined as follows, is an abstract algorithm from $M$ to $M''$:
$$g(xc'') = \begin{cases}
\text{output } v'' & \text{if } f'((f \cdot x)c'') = \text{output } v'' \\
\text{valof } c & \text{if } \begin{cases}
f'((f \cdot x)c'') = \text{valof } c' \\
f(xc') = \text{valof } c
\end{cases}
\end{cases}$$
Perspective
Thus, sequential algorithms admit a meager form (as programs or as configurations) and a fat form (as abstract algorithms).
Similarly, innocent strategies as sets of plays are in fat form, while the restriction to their set of views is their meager form.
— Fat composition is defined synthetically.
— Meager composition is defined via an abstract machine: the same for both = the Geometric Abstract Machine (with the proviso that the execution of sequential algorithms uses an additional call-by-need mechanism added to the machine).
PCF Böhm trees
\[ M := \lambda \vec{x}.W \quad \text{(the length of } \vec{x} \text{ may be zero)} \]
\[ W := n \mid \text{case } xM [\ldots m \rightarrow W_m \ldots] \]
Taking the syntax for PCF types \( \sigma ::= \text{nat} \mid \sigma \rightarrow \sigma \), we have the following typing rules:
\[
\frac{\Gamma, x_1 : \sigma_1, \ldots, x_n : \sigma_n \vdash W : \text{nat}}{\Gamma \vdash \lambda x_1 \ldots x_n. W : \sigma_1 \rightarrow \ldots \rightarrow \sigma_n \rightarrow \text{nat}}
\qquad
\Gamma \vdash n : \text{nat}
\]
\[
\frac{\ldots \quad \Gamma, x : \sigma \vdash M_i : \sigma_i \quad \ldots \qquad \ldots \quad \Gamma, x : \sigma \vdash W_j : \text{nat} \quad \ldots}{\Gamma, x : \sigma \vdash \text{case } x M_1 \ldots M_p \, [m_1 \rightarrow W_1 \ldots m_q \rightarrow W_q] : \text{nat}}
\]
where, in the last rule, \( \sigma = \sigma_1 \rightarrow \ldots \rightarrow \sigma_p \rightarrow \text{nat} \)
PCF Böhm trees as strategies: an example
All PCF Böhm trees can be transcribed as trees. We decorate PCF types $A$ as $[[A]]_\epsilon$, where each copy of $\text{nat}$ is decorated with a word $u \in \mathbb{N}^*$:
$$[[A^1 \rightarrow \ldots \rightarrow A^n \rightarrow \text{nat}]]_u = [[A^1]]_{u_1} \rightarrow \ldots \rightarrow [[A^n]]_{u_n} \rightarrow \text{nat}_u$$
All moves in the HO arenas for PCF types are of the form $?_u$ or $n_u$.
Moreover $?_u$ has polarity O (resp. P) if $u$ is of even (resp. odd) length, while $n_u$ has polarity P (resp. O) if $u$ is of even (resp. odd) length.
The PCF Böhm tree $\lambda f.\, \text{case } f\, 3 \, [4 \rightarrow 7, \, 6 \rightarrow 9]$ reads as follows:
$$
\lambda f.\, \text{case } f \begin{cases}
(3) & 4 \rightarrow 7 \\
6 \rightarrow 9
\end{cases} \quad h = ?_\epsilon[?_1, \leftarrow] \begin{cases}
?_{11}[3_{11}, \leftarrow] & 0 \\
4_1[7_\epsilon, \leftarrow] & 1 \\
6_1[9_\epsilon, \leftarrow] & 1
\end{cases}
$$
PCF Böhm trees as strategies: full compilation
We need auxiliary functions
\[ \text{arity}(A, \epsilon) = n \quad \text{arity}(A, iu) = \text{arity}(A^i, u) \quad (A = A^1 \to \ldots \to A^n \to \text{nat}) \]
\[ \text{access}(x, (\vec{x}, u) \cdot L, i) = \begin{cases} [?_{uj}, i \leftarrow] & \text{if } x \in \vec{x} \text{ with } x = x_j \\ \text{access}(x, L, i + 1) & \text{otherwise} \end{cases} \]
We translate \( M : A \) to \( \llbracket M \rrbracket^1 \), where
\[
\llbracket \lambda \vec{x}.W \rrbracket_u^L = ?_u \llbracket W \rrbracket_u^{(\vec{x}, u) \cdot L}
\]
\[
\llbracket n \rrbracket_u^L = n_u \quad \text{(pointer reconstructed by well-bracketing)}
\]
\[
\llbracket \text{case } x\vec{M} [\ldots m \to W_m \ldots] \rrbracket_u^L = [?_{vj}, i \leftarrow] \begin{cases} \cdots \llbracket M_l \rrbracket_{vj l} \\ \cdots \llbracket W_m \rrbracket_u^L \\ \cdots \end{cases}
\]
where \( \text{access}(x, L, 0) = [?_{vj}, i \leftarrow] \) and \( 1 \leq l \leq \text{arity}(A, vj) \).
An example of execution of HO strategies: the strategies
\[ K_{\text{Kierstead}1} = \lambda f. \text{case } f(\lambda x. \text{case } f(\lambda y. \text{case } x)) \]
applied to
\[ \lambda g. \text{case } g(\text{case } gT [T \rightarrow T, F \rightarrow F]) [T \rightarrow F, F \rightarrow T] \]
An example of execution of HO strategies: the execution
\[
\langle ?, 1 \rangle [?, 0]
\left\{ \begin{array}{l}
\langle ?, 3 \rangle [?, 1]
\left\{ \begin{array}{l}
\langle ?, 5 \rangle [?, 1]
\left\{ \begin{array}{l}
\langle T_{111}, 15 \rangle [T_{11}, 1] \\
\langle F_1, 17 \rangle [F_{11}, 1]
\end{array} \right. \\
\langle ?_{11}, 9 \rangle [?, 1]
\left\{ \begin{array}{l}
\langle F_{111}, 11 \rangle [F_{11}, 1] \\
\langle T_1, 13 \rangle [T_{11}, 1]
\end{array} \right.
\end{array} \right. \\
\langle T_1, 19 \rangle [T_{\epsilon}, 1]
\end{array} \right.
\]
A form of conclusion
Sequential algorithms and HO innocent strategies differ in at least two respects:
— Sequential algorithms are intensional even for purely functional programs, cf. example \( \lambda f. \text{case } f \ T \ F \ [T \rightarrow F] \)
— Sequential algorithms have memory (or work in call-by-need manner), e.g. the model “normalises”
\[ \lambda x. \text{case } x \ [3 \rightarrow \text{case } x \ [3 \rightarrow 4]] \]
into
\[ \text{request } ?_\epsilon \ \text{valof } ?_1 \ \{ \text{is } 3_1 \ \text{output } 4_\epsilon \]
As for the second aspect, one could think of a multiset version of the exponent of two cds's (cf. the two familiar "bangs" in the relational and coherent semantics of linear logic).
A Generic Framework for Visualizing the News Article Domain and its Application to Real-World Data
Elisabeth Lex, Christin Seifert, Wolfgang Kienreich and Michael Granitzer
Know-Center, Competence Centre for Knowledge-Based Applications and Systems
Inffeldgasse 21a
8010 Graz, Austria
{elex|ceifert|wkien|mgrani}@know-center.at
ABSTRACT: In this work we present APA Labs, a generic framework for visualizing the news article domain. APA Labs is a web-based platform enabling retrieval and analysis of news repositories provided by the Austrian Press Agency. APA Labs is designed as a rich internet application combined with a modular system of interactive visualizations. News articles are analyzed using domain specific named entity extraction methods combined with language specific heuristics. The proposed methods were subject to an evaluation procedure outlined in this contribution. This article illustrates the domain, the underlying concepts and implementation details. Several visualization modules are presented and an outlook on planned modules is given. Being online for around six months the community feedback as well as the easy integration of new modules shows the success of the underlying concept and the platform itself.
Keywords: H.3.3 [Information Search and Retrieval]: Search process, H.2.4 [Systems]: Textual Databases, [H.3.5] Online Information Services: Web-based services
1. Introduction
With the advent of the Internet and the evolution of Web 2.0, innovative scenarios for producing, providing and consuming information emerged. Before the rise of the Web, press and news agencies were in charge of gathering and distributing news items to the wide public; granting access to content was thus their main task. However, with the growing popularity of the web, the absolute monopoly of the news agencies disappeared and news items became increasingly available to the public. Naturally, this conflicts with the traditional business concept of news and press agencies. Due to the changing consumer needs resulting from ongoing developments in Web 2.0, solely providing content is no longer sufficient for attracting paying customers. News agencies have to extend their services: they must not only provide high-quality content but also develop novel intelligent services combined with high-quality, noise-free content. Many recent advances in information retrieval and information visualization have been applied to news article repositories. Unfortunately, the resulting applications have been accessible mostly to experts and closed communities. For example, systems like Galaxy of News [Rennison, 1994], Lighthouse [Leuski and Allen, 2002] and InfoSky [Granitzer et al., 2004] proposed novel visual metaphors to facilitate explorative analysis of large news article repositories. Also, the APA Online Manager [Kienreich, 2005] has provided a number of interactive visualizations to support the analysis of search results obtained from news repositories. While incorporating new visualizations and access metaphors, the APA Online Manager is only available as a rich-client application to paying subscribers.
We present the generic framework APA Labs [APALabs], first introduced in [Kienreich et al., 2008], an experimental, web-based platform supporting retrieval and analysis of news articles obtained from the archives of the Austrian Press Agency [APA]. APA Labs utilizes many concepts usually summarized under the term Web 2.0 [OReilly, 2005] and invites users to participate in developing, testing and evaluating novel ways to access news agency repositories. Especially the perpetual-beta paradigm and the involvement of users in judging the usability of visualizations are important steps towards a living laboratory for analyzing news data. The evolution of the Web has shown that user participation, usability and user acceptance are crucial when developing innovative methods.
In this work we refer to the version of APA Labs as of September 2008. Due to ongoing development the appearance of the application may change over time.
The remainder of this contribution is organized as follows: We briefly introduce the application domain in section 2 and then outline the concept underlying APA Labs in section 3. We describe implementation details in section 4 and report on the modules currently available in APA Labs in section 5. Section 6 evaluates the accuracy of our pre-processing techniques, while an outlook on future work and a conclusion are provided in sections 7 and 8.
2. Domain
News articles are published by a wide variety of sources. News agencies collect, store and organize news articles and distribute them to paying subscribers. The work outlined in this contribution is based on services provided by the Austrian news agency APA. The news article archive of APA contains 100 million articles gathered from over 200 sources. Approximately 10,000 articles are added every day. The majority of articles stored and distributed by APA are written in German.
News articles exhibit distinct structures. For example, the first paragraph of an article often forms a summary of the article as a whole. The first occurrence of a person in an article is usually composed of title and full name, while subsequent occurrences mention family name, title or a combination of both. Further structures can be identified and leveraged for domain-dependent enhancement of retrieval and visualization results.
3. Concept
The emergence of Web 2.0 has been both beneficial and challenging for news agencies. The technological advances associated with Web 2.0 have enabled news agencies to provide services on a higher level of quality and to a wider audience. Rich internet applications have proven capable of replacing the specialized clients used to access large news repositories in the past.
However, the commercial concepts associated with Web 2.0 have been met with less enthusiasm. The traditional business model of news agencies offers services to registered subscribers paying a per-article charge. The content of an article is considered the primary commodity which generates value. This business model limits the application of Web 2.0 concepts like the Long Tail or Mash-ups because once the content of an article has become available in the public domain its value is greatly reduced.
The basic idea behind APA Labs is to give the general public access to novel retrieval and visualization services applied to news article repositories in the framework of a rich internet application. Visitors are invited to evaluate the services and to provide feedback. The number of news articles available for evaluation purposes has been limited to avoid conflicts with the business model outlined. However, the imposed limitations have been carefully balanced to retain added value for visitors. Within the range of available articles, no further restrictions are applied and the full article content can be accessed free of charge.
The expected benefits are manifold: APA Labs generates public awareness for the provided services and documents technological leadership. The rich internet application enables the Austrian Press Agency to field-test new services early in the development cycle, in accordance with the concept of the perpetual beta proposed by the Web 2.0 paradigm. Services which have been proven as useful by visitors can rapidly be integrated into the business model, enabling the Austrian Press Agency to adequately respond to new trends in today's highly volatile markets.
4. Implementation
APA Labs has been implemented in Java as a web application based on J2EE technology [J2EE]. A client-side rich internet application utilizes JavaScript and AJAX technology to communicate with an Apache Tomcat Web Server Version 5.5 [Tomcat] providing content through Java Servlets and Java Server Pages. The APA Labs server is built around a central request handler servlet which accepts and forwards requests issued by registered sessions. A session is registered on creation and assigned light-weight, volatile state data through session attributes. Heavy-weight, persistent state data is stored in a separate repository and referenced by session identifiers to reduce synchronization cost in clustered environments. Figure 1 illustrates the architecture of the APA Labs server and client elements. Functional components connected via well-defined interfaces are described in more detail in this section.

4.1 Functional Components
All components have been implemented following a singleton design pattern. Requests requiring access to such components are queued and processed in sequence.
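The singleton-plus-queue pattern described above could be sketched as follows. This is an illustrative reconstruction, not code from APA Labs: the class name, the `Runnable`-based request type and the single worker thread are assumptions; the point is that a component exists exactly once and drains its request queue strictly in sequence.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: one shared instance, requests queued and
// processed in sequence by a single worker thread.
public final class SearchComponent {
    private static final SearchComponent INSTANCE = new SearchComponent();
    private final BlockingQueue<Runnable> requests = new LinkedBlockingQueue<>();

    private SearchComponent() {
        // A single worker drains the queue, serializing all requests.
        Thread worker = new Thread(() -> {
            try {
                while (true) requests.take().run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public static SearchComponent getInstance() { return INSTANCE; }

    public void submit(Runnable request) { requests.add(request); }
}
```

Because all access goes through `getInstance()`, concurrent sessions share one component and cannot interleave requests that the worker has not yet finished.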
4.1.1 Search and Retrieval
The underlying repository of news articles is accessed using a HTTP-based query interface to a generic search engine, the APA PowerSearch engine [APAPowerSearch]. This search engine supports Boolean queries with a wide range of operators and returns a relevance-ranked list of news articles. For each news article, the title, the medium and the publication date are provided. Search results are stored in the session-specific repository of heavy-weight data. In order to avoid page-loading delays on the client side, result details like article content are loaded only when needed. For instance, displaying the search result list does not require the article content; it is loaded only on user demand.
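The on-demand loading of article content described above could be sketched as a lazily populated per-session cache. This is an assumption about the mechanism, not APA Labs code: the `ResultCache` name and the loader function (standing in for an HTTP call to the search engine) are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: the result list holds only title/medium/date;
// the full content is fetched at most once, on first user demand.
public class ResultCache {
    private final Map<String, String> contentById = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // e.g. an HTTP call to PowerSearch

    public ResultCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String content(String articleId) {
        // computeIfAbsent invokes the loader only on the first request.
        return contentById.computeIfAbsent(articleId, loader);
    }
}
```

Displaying the result list never touches `content()`, so no article bodies are transferred until the user opens one.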
4.1.2 Pre-processing
Pre-processing works on the content of the retrieved news articles. First, relevant noun sequences are identified using stop word lists, stemming and language-specific heuristics. Second, an entity extraction procedure uses these noun sequences to extract Named Entities (NE) by applying statistics, heuristics and gazetteer lists. The entity types recognized by the framework currently include people, geographic locations, web addresses, date and time expressions and predefined topics of interest. Details on pre-processing and estimates for the achievable quality are given in section 6 in conjunction with an evaluation of the implemented search and retrieval component.
4.1.3 Rendering Framework
The server-side generation of visualizations has been implemented using a rendering framework based on the Java bindings for OpenGL [Segal and Akeley, 2006] [JavaOpenGL]. Most modern server machines feature rudimentary graphical capabilities which are usually not employed in a web application. The rendering framework exploits these capabilities to generate complex visualizations at a rate of several frames per second while placing minimal load on the central processing unit. The computed visualizations are delivered to clients as a combination of compressed images and structured image maps designating interaction areas.
4.1.4 Visualization
The visualization component is an abstract container enabling the definition of different user interfaces to visually analyze search results. A concrete implementation determines the types of extracted entities used and their visual representation. As depicted in Figure 1 the required data is loaded from the session data repository where the results of the search and retrieval component and the pre-processing component are stored. The visualization component uses the 3D rendering framework to generate the final visualization and delivers the result to the client. When creating new visualizations only this component needs to be implemented. Visualization modules currently available in APA Labs are described in more detail in section 5.
4.1.5 Feedback Component
The feedback component provides functionality to collect and evaluate user generated feedback. User feedback is commonly used in Web 2.0 applications to gather the opinion of the community and enables the application providers to react to usage trends and user approval. The feedback component provided in APA Labs is added to each visualization module and enables the user to rate the visualization according to functionality, design and usability on a 5-point Likert scale [Likert, 1932]. Further, users can comment on the visualizations via email or a feedback form. A module manager at the Austrian Press Agency collects the user feedback, which serves as a basis for further decision processes like improving the visualization or adding the originally experimental visualization to the product range of APA.
4.2 Interfaces
All components of APA Labs are connected via well-defined interfaces. There are two main interfaces available in APA Labs. The first interface links the APA Labs framework and the APA search engine. The search engine provides a Representational State Transfer (REST) [Fielding and Taylor, 2002] interface and is accessed via HTTP using a clear syntax. The second interface connects the framework itself and all visualization modules. Modules following the interface definition can be easily integrated in the system. On the developer side, new visualization modules can be implemented without knowing the underlying logic of the system. Also, data exchange and session handling are accomplished by framework components.
5. Modules
The user interface of APA Labs provides conventional means to search for news articles, to navigate search result sets through relevance-ranked lists and to display article content. In addition, the user interface integrates a set of custom modules which feature consistent design and interactivity. The general layout of the platform is illustrated in Figure 2. There are two different types of modules available: (i) modules operating on a set of documents and (ii) modules analyzing a single document. All modules share the ability to provide an alternative way to formulate a search query, to navigate a result
set or to analyze article content. The modules are implemented as classes within the application framework and, as described earlier, access result sets, extracted entities and rendering facilities through unified interfaces. This section describes the modules currently available in APA Labs.

Figure 2: Overview of the APA Labs Platform
5.1 Geospatial Visualization
Generally, geospatial visualizations display information entities referencing geographical locations on appropriate maps [Scharl and Tochtermann, 2007]. This area of research has recently attracted much attention. Geospatial visualization is a natural extension for systems presenting news articles because most news articles reference one or more geographic locations. More than 85% of all articles available in the archives of the Austrian Press Agency contain at least one geographical reference. The geospatial visualization present in APA Labs extracts geographic locations from a set of documents resulting from a preceding search query.

Figure 3: Example of the Geospatial Visualization for the search query "WM" (world championship)
Figure 3 displays the Geospatial Visualization of a search result set obtained in APA Labs. Cones have been positioned on a map of Austria to denote locations mentioned in one or more articles. The size of each cone encodes the number of occurrences identified for its location. The cones are rendered using a semi-transparent material to alleviate occlusion effects. Moving the mouse pointer over a cone displays the name of the location and the number of references identified for it in form of a tool tip window. Clicking on a location instantly filters the search result set to contain only articles referencing the selected location.
One benefit of the Geospatial Visualization is the ability to identify geographical hot spots for a particular topic at a glance. Another benefit is the ability to quickly refine the search results by region.
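The click-to-filter step, shared by all visualization modules, could be sketched as a simple predicate over the extracted locations of each article. This is an illustrative reconstruction; the `Article` record and field names are assumptions, not the framework's actual data model.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: refine a result set to the articles that
// reference the location the user clicked on.
public class LocationFilter {
    public record Article(String title, Set<String> locations) {}

    public static List<Article> filterByLocation(List<Article> results,
                                                 String location) {
        return results.stream()
                .filter(a -> a.locations().contains(location))
                .collect(Collectors.toList());
    }
}
```

The filtered list then replaces the session's current result set, so all other modules immediately reflect the refinement.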
5.2 Tag Cloud Visualization
Tag clouds are text-based visual representations of a set of words (tags), usually depicting tag importance by font size. The popularity of this type of visualization has steadily grown due to recent trends in social and collaborative software. In contrast to many other types of visualizations, tag clouds do not use real-world models or metaphors. A tag cloud is a visual abstraction and thus suitable for visualizing information entities of arbitrary types. This fact makes tag clouds uniquely suited for topical browsing [Kuo et al., 2007] and for browsing news articles.
Tag clouds have become very popular in Web 2.0 applications, such as del.icio.us and flickr. Most state-of-the-art tag cloud algorithms lay out tags inside rectangular boundaries, which constrains the general layout of web sites. Proposed non-rectangular layouts, in contrast, suffer from large white spaces between tags or from tag overlap. Recently we presented an algorithm capable of dealing with polygonal boundaries, thereby allowing more flexible website designs [Seifert et al., 2008].
Figure 4: Process overview of the tag layout algorithm
Figure 4 shows an overview of the tag layout algorithm. The algorithm takes a set of tags and a convex polygon as input; each tag is assigned a relevance value. In addition, an initial font size interval, i.e. the minimal and maximal allowed font sizes, is given. The resulting font size for a particular tag is calculated from its relevance value and the current font size interval. From the tag's font size and the string itself, a bounding box for each tag is computed. The bounding boxes serve as input for the core tag layout algorithm, which tries to place them circularly, starting from the centre of mass of the bounding polygon. To account for the westernized reading direction, a heuristic preferably places tags to the left and right of existing tags. If not all bounding boxes fit inside the polygon, the parameters are adjusted and the box layout starts again. The layout process stops when either all tags have been successfully laid out or the thresholds for string truncation and font size are reached. For details on the algorithm, a technical evaluation and a user study refer to [Seifert et al., 2008].
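The font-size step of the algorithm could look as follows. The linear interpolation shown here is an assumption for illustration; [Seifert et al., 2008] describes the actual mapping and the method name is hypothetical.

```java
// Hypothetical sketch: interpolate a tag's font size between the
// current minimal and maximal allowed sizes according to its relevance.
public class TagFontSize {
    public static int fontSize(double relevance, double minRel, double maxRel,
                               int minFont, int maxFont) {
        if (maxRel == minRel) return maxFont; // all tags equally relevant
        double t = (relevance - minRel) / (maxRel - minRel);
        return (int) Math.round(minFont + t * (maxFont - minFont));
    }
}
```

When a layout trial fails, shrinking the font size interval and recomputing these values is exactly the "parameter adjustment" mentioned above.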
Figure 5 shows two example tag clouds inscribed in a regular polygon and a circle, which in the discrete 2D space can be approximated by a polygon.
Figure 5: Example tag layouts in arbitrary polygons. Left: 30 tags in a regular octagon. Right: 10 tags in a circle
Figure 6 displays the tag cloud visualization of a search result set obtained in APA Labs. Although the algorithm can lay out tags in arbitrary convex polygons, the APA Labs visualization uses a rectangular border to guarantee a common visual appearance of all visualizations within APA Labs.
In the context of APA Labs tags have been derived from extracted entities. The entity extraction procedure is applied to a set of documents. Each tag denotes either the name of a person, the label of a geographical location or a general term. Font size and color are measures for tag weight: Highly relevant tags are rendered using a larger font and have less transparency applied. Tag weights are determined based on the number of occurrences in the result set.
Figure 6: Example of the Tag Cloud Visualization for the search query "Schule" (school)
Because the bounding boxes of the tags are not visible, clicking a small tag may be difficult: the user cannot immediately tell whether the mouse pointer is exactly above the particular tag. The tooltip therefore provides quick feedback on which tag is currently selected. Clicking on a tag instantly filters the search result set to contain only articles referencing the selected tag.
The most prominent benefit offered by the tag cloud visualization is the ability to identify the major subtopics present in a result set at a glance. In addition, the user can immediately skim the most important words in the retrieved news articles and thereby gets a general idea of the contents of the resulting document set.
5.3 Parliament Visualization
One of the most common forms of media observation carried out by experts is the analysis of the impact of statements made by public figures in news articles. This type of observation is also referred to as media diffusion analysis. For the general public, the most interesting public figures are probably the elected representatives and leading politicians of a country. The Parliament Visualization module integrated in APA Labs enables users to instantly determine which members of the Austrian government and parliament have been mentioned in the context of a search result set.
Figure 7: Example of the Parliament Visualization for the search query "Wahl" (election)
Figure 7 displays the Parliament Visualization of a search result set obtained in APA Labs. The basis of the visualization is formed by a three-dimensional, stylized model of the Austrian parliament. The curved rows of seats visible in the background of the image belong to the Members of Parliament. In the foreground, a straight row of seats is reserved for the ministers forming the government. A three-dimensional icon in the shape of a stylized human is attached to each seat and oriented to face the observer's point of view. The icon represents the person holding the seat in parliament. For each seat, the name and the party association of the person holding it is known.
Initially, all icons are colored grey and have the same size. For each person mentioned at least once in the current result set, the according icon is colored in the colors of that person’s political party. Icons are scaled along the vertical axis based on the number of references present in the current result set.
Moving the mouse pointer over an icon displays the name of the Member of Parliament or minister and the number of references identified for him or her in the form of a tooltip window. Clicking on an icon instantly filters the search result set to contain only articles referencing the selected person.
The major benefit provided by the Parliament Visualization is the ability to instantly identify which politicians are associated with a specified topic.
5.4 Round Table Visualization
The Round Table Visualization shows the current seven top candidates for the Austrian parliament elections taking place in autumn 2008. This visualization allows users to identify which of the top candidates is associated with a specific subject in the media or is mentioned in the context of a certain issue discussed in the news. For instance, users can immediately learn what opinion a particular politician expresses when, e.g., the reform of the Austrian health system is discussed in public.
Figure 8: Example of the Round Table Visualization for the search query "Wahl" (election)
Figure 8 gives an example of the Round Table Visualization. The three-dimensional visualization provides a semicircular arrangement of seven figures. The figures correspond to the top candidates of the parties which are eligible for election and are colored according to the particular party affiliation. In front of each figure a label providing the name of the politician is placed. The size of a figure corresponds to the number of occurrences of the particular person in the search result set: politicians occurring more often in relevant thematic context to the original search query are displayed larger than others. The names of the politicians and the exact number of hits are also available as a tool tip. In addition, a speech bubble, represented by a small tag cloud, is displayed above each figure. The speech bubble contains the most important names, locations and terms extracted from the set of documents related to the particular politician. This document set can be obtained by clicking the figure corresponding to the person, which results in a refinement of the original search query.
The Round Table Visualization provides a profound overview of the current political discussion taking place in the present election campaign in Austria. Because the Round Table Visualization also shows the most important keywords each top candidate uses in connection with an individual topic, it supports users in finding out which issues are covered by each politician. The persons shown in the visualization can, of course, be changed to reflect the actual political situation in Austria.
5.5 Brockhaus Look-Up
Mashups are one of the core concepts commonly associated with the term Web 2.0. A mashup combines data and services to create new functionality. This agile concept has proven to be beneficial for both non-commercial projects and business-to-business applications. APA Labs features a Brockhaus Look-Up module which integrates services provided by the Brockhaus Encyclopaedia [Brockhaus] with services provided by the APA to present users with a lexical context for articles or selected text ranges.
Figure 9: Example of the Brockhaus Look-Up, displaying content of a selected article after search for "Mehrwertssteuer" (engl. VAT)
In Figure 9 the full text of a news article is shown as text to the left and the Brockhaus Look-Up is shown in the sidebar to the right. The Brockhaus Look-Up identifies relevant terms within a text using the entity extraction methods described in section 4.1.2. It then looks up these terms in the Brockhaus Encyclopedia and returns a relevance-ranked list of found encyclopedia articles which constitute a topical context of the analyzed text.
The Brockhaus Encyclopedia is considered the prevalent multimedia encyclopedia in the German-speaking domain. It contains approximately 240,000 articles and 350,000 keywords and thus ranks among the world's largest encyclopedias. The generation of lexical context is started automatically for the full text of an article whenever article content is being displayed. Clicking the button "from selection" in the sidebar starts generation of lexical context for the currently selected text range. Found encyclopedia articles are displayed in the sidebar with title and text preview.
The major benefit of the Brockhaus module is the availability of a lexical context explaining unfamiliar domains to users. Another benefit is that users can directly look up unknown terms in the encyclopedia.
6. Evaluation of Pre-processing Techniques
All proposed modules strongly depend on the quality of the pre-processing components. Especially the Brockhaus Look-Up module relies on the components' capability to identify relevant terms and their context. Therefore, the following section estimates the accuracy of the pre-processing methods. The focus lies on the identification of key terms and named entities as well as on resolving context information, essential criteria for automatically linking encyclopedic resources to news articles.
Because no specialized test dataset was available for our domain and creating such a dataset is time consuming and costly, our evaluation is performed on a standard test dataset. Since the online encyclopedia Wikipedia exhibits structures similar to those of the Brockhaus encyclopedia, a Wikipedia-based standard dataset, the INEX 2007 Link-the-Wiki dataset [Huang et al., 2007], was used for the evaluation. Wikipedia pages contain links manually annotated by users. These links can be regarded both as important key phrases and as links pointing to relevant encyclopedic resources. Hence the Wikipedia serves as a suitable test dataset for the intended evaluation.
The INEX 2007 Link-the-Wiki dataset consists of 659,413 Wikipedia pages taken from the Wikipedia XML Corpus described in [Denoyer and Gallinari, 2005]. As in the 2007 INEX track, 90 topics from the dataset are used as the test set, where each topic corresponds to a Wikipedia page. The challenge is to identify the 8,392 annotated links available in the test set using the pre-processing methods. Each test page contains 94.29 links on average, with a minimum of 6 and a maximum of 521 links. In total, the test collection links to 5,590 unique Wikipedia pages.
In general, the pre-processing is based on standard tasks like tokenization, sentence boundary detection and part-of-speech tagging, enabling the elimination of non-nouns. In order to identify named entities, finite state machines for gazetteer matching are used, similar to those used in GATE [Cunningham et al., 2002], as well as simple grammatical rules. For instance, person detection exploits lists of known names (e.g. names of politicians) as well as forename lists combined with grammars based on noun phrases for detecting possible surnames.
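A much-simplified version of the forename-based person heuristic could look as follows. This is a toy stand-in for the finite-state matchers actually used: the gazetteer, the "forename followed by a capitalized token" rule and all names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: a token found in a forename gazetteer that is
// followed by a capitalized token yields a person candidate.
public class PersonDetector {
    public static List<String> detect(List<String> tokens, Set<String> forenames) {
        List<String> persons = new ArrayList<>();
        for (int i = 0; i + 1 < tokens.size(); i++) {
            String next = tokens.get(i + 1);
            if (forenames.contains(tokens.get(i))
                    && Character.isUpperCase(next.charAt(0))) {
                persons.add(tokens.get(i) + " " + next);
            }
        }
        return persons;
    }
}
```

The real system additionally matches full known names directly and applies part-of-speech information, which this sketch omits.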
Encyclopedic resources exhibit structures appropriate for being used as gazetteer lists because nearly every page in either the Brockhaus or the Wikipedia has a precise title describing the actual topic, e.g. the name of a person or a location. However, such a topic matching process is purely syntactical in nature and requires disambiguation strategies as post processing steps.
After all possible entities have been detected using the outlined matching approach, the words in the neighborhood of an entity denote its context. The number of words to consider depends on the segmentation level, which is defined as either the whole document, an automatically detected part of a document, the so-called topic [Choi, 2000], or a single sentence. Given the segmentation level, all identified nouns are used as input for a Boolean OR query performed on the indexed training data sets. The Java-based open source search engine Lucene [Lucene] has been used as the underlying search backend. If the entity belonging to the context is contained in the result set, its score is used as the confidence in the correctness of the annotation.
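Constructing the Boolean OR query from the context nouns could be sketched as follows. The method name is hypothetical; the output uses standard Lucene query-parser syntax (phrases joined by `OR`), which is an assumption about how the query was actually formulated.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: turn the nouns surrounding a detected entity
// into a disjunctive Lucene-style query string.
public class ContextQuery {
    public static String orQuery(List<String> contextNouns) {
        return contextNouns.stream()
                .map(n -> "\"" + n + "\"")   // quote each noun as a phrase
                .collect(Collectors.joining(" OR "));
    }
}
```

The resulting string would then be parsed and run against the index, and the retrieval score of the candidate entity's page serves as the annotation confidence.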
For each document the pre-processing procedure returns a ranked list of entities and their position within the document. These results can be evaluated using standard Information Retrieval measures.
<table>
<thead>
<tr>
<th>Segmentation Level</th>
<th>11-pt. AP</th>
<th>R-prec</th>
<th>Micro Precision</th>
<th>Micro Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td>Topic</td>
<td>0.2833</td>
<td>0.3681</td>
<td>0.2215</td>
<td>0.4317</td>
</tr>
<tr>
<td>Sentence</td>
<td>0.2842</td>
<td>0.3679</td>
<td>0.2244</td>
<td>0.4318</td>
</tr>
<tr>
<td>Document</td>
<td>0.2854</td>
<td>0.3539</td>
<td>0.1449</td>
<td>0.5810</td>
</tr>
</tbody>
</table>
Table 1: Evaluation of disambiguation quality of different segmentation levels
Table 1 gives the results for the different segmentation levels in terms of 11-point average precision (11-pt. AP), R-precision (R-prec) as well as micro precision and recall. While micro precision is around 20 percent, roughly half of the entities can be correctly identified. However, the achievable precision is better reflected by the R-prec values. R-prec is defined as the precision at rank R, where R denotes the number of relevant documents in the test set. By this measure, about one third of the identified links can be considered relevant.
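The R-precision measure used in Table 1 is straightforward to compute; a small illustration (with hypothetical names) is:

```java
import java.util.List;
import java.util.Set;

// Illustration of R-precision: the precision of the top-R ranked
// items, where R is the number of relevant items in the test set.
public class RPrecision {
    public static double rPrecision(List<String> ranked, Set<String> relevant) {
        int r = relevant.size();
        if (r == 0) return 0.0;
        long hits = ranked.stream().limit(r).filter(relevant::contains).count();
        return (double) hits / r;
    }
}
```

Unlike precision at a fixed cutoff, R-precision adapts the cutoff to each topic's number of ground-truth links, which makes it a fairer summary over topics with widely varying link counts.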
When employing this evaluation strategy it has to be considered that Wikipedia pages are annotated by humans, who follow not only logical rules but also aesthetic criteria. As outlined in [Wu and Weld, 2007], entities are most often annotated only on their first occurrence; it is therefore assumed that the accuracy can be increased significantly.
However, user feedback indicates that the quality of the proposed pre-processing methods is sufficient for the needs of the APA Labs community. Evaluations by domain experts also revealed an acceptable quality. Nevertheless, further improvements will focus on the use of machine learning techniques, on supporting a broader range of entities, and on achieving higher semantic richness by utilizing ontologies.
7. Future Work
The design of APA Labs encourages a continuous process of gathering evaluation results and developing and integrating novel modules. User feedback for the modules outlined in this paper will be collected and analyzed in detail. The Austrian Press Agency will decide which modules are candidates for being integrated into its commercial services based on the collected user feedback. A preliminary analysis of the feedback gathered so far indicates that the Parliament Visualization and the Geospatial Visualization are likely candidates for such a step.
New modules will be developed based on input from the community and on research findings in the field of information and knowledge visualization. For instance, a new visualization module is planned to illustrate the different branches of Austrian industry. In this visualization, Austrian industrial companies will be extracted from a collection of APA news articles and displayed on a map of Austria. Each individual industry branch will be represented by a characteristic icon. If, e.g., the name of a factory is mentioned in conjunction with a search query, a small icon in the shape of a factory will be placed at the correct geographic location on the map of Austria. The aim of this visualization is to provide an instant overview of the Austrian industrial development.
We also plan to extend our applications to trend visualizations [Piche, 1995], in which a document set is analyzed over time. These visualizations are very promising for identifying trends in the fields of tourism, industry or finance. Investigating news article collections using trend visualizations could lead to a better identification of trends emerging within a specific time period.
8. Conclusions
This contribution presented the generic framework APA Labs. The experimental web-based platform provides novel methods to access the news archive of the Austrian Press Agency APA. Traditional technologies and business models usually favored by news agencies are modified and augmented by several Web 2.0 concepts. APA Labs is designed as a combination of a rich internet application with a modular system of interactive visualizations using server-side entity extraction and three-dimensional rendering capability. The labs platform enables APA to test the acceptance of new services and to obtain user feedback early in the development cycle. Due to its online availability, APA Labs generates public awareness of the products of the Austrian Press Agency and will hopefully initiate innovative developments beneficial for both the company and its customers.
9. Acknowledgement
Device driver synthesis for embedded systems
Julien Tanguy, Jean-Luc Béchennec, Mikaël Briday, Sébastien Dubé, Olivier Henri Roux
To cite this version:
Julien Tanguy, Jean-Luc Béchennec, Mikaël Briday, Sébastien Dubé, Olivier Henri Roux. Device driver synthesis for embedded systems. 18th IEEE International Conference on Emerging Technologies & Factory Automation (ETFA), Sep 2013, Cagliari, Italy. <hal-00942323>
HAL Id: hal-00942323
https://hal.archives-ouvertes.fr/hal-00942323
Submitted on 5 Feb 2014
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Device driver synthesis for embedded systems
Julien Tanguy†‡, Jean-Luc Béchennec‡, Mikaël Briday‡, Sébastien Dubé‡
Olivier H. Roux‡
†See4sys,
Espace Performance La Fleuriaie, 44481 Carquefou CEDEX, France
‡LUNAM université, IRCCyN Lab, École Centrale de Nantes
1, rue de la Noë, 44 300 Nantes, France.
E-mail: {julien.tanguy, sebastien.dube}@see4sys.com
{jean-luc.bechennec, mikael.briday, olivier-h.roux}@irccyn.ec-nantes.fr
Abstract
Currently, developing embedded software that manages hardware devices while fulfilling industrial constraints (safety, real-time constraints) is a very complex task. To increase reusability between projects, generic device drivers have been developed that can be used in a wide range of applications. The level of genericity of such drivers usually requires a lot of configuration code, which is often generated. Moreover, a generic driver needs more computing power and more memory than a specific driver. This paper presents a more efficient methodology to address this issue, based on a formal model of the device and of the application. Starting from this model, we use well-known game theory techniques to solve the driver model synthesis problem. The resulting model is then translated into the actual embedded driver code with respect to an implementation model.
By isolating the model of the device, we allow more reusability and interoperability between devices for a given application, while generating an application-specific driver.
1 Introduction
The development of device drivers for embedded systems is a critical and error-prone task. Because a device driver is the interface between the hardware device and the application or the operating system, designers must have knowledge of all three components in order to develop efficient and safe drivers. The safety aspect is emphasized by the execution context of most drivers: being executed with supervisor privileges, any error in a driver may have a serious impact on the integrity of the entire system.
Another difficulty in designing device drivers is the device datasheet. Although it is meant to help a driver designer by explaining briefly how the device works, it does not document all possible behaviors. For example, a datasheet might specify that a device must be shut down in order to change some configuration registers, but not explain the outcome of changing a configuration register while the device is running.
To improve driver correctness and quality, a number of verification techniques [2, 7] have been developed. An alternative to verification is to improve the development process by synthesizing the driver from a formal specification. The verification approach ensures that the driver behaves correctly and can detect protocol violations between the application and the driver, while the synthesis approach yields a correct-by-construction driver. However, for configurability and interoperability reasons, such generated drivers still conform to the traditional model of a driver: multiple API endpoints with minimal state.
Our research targets real-time embedded systems with hard timing constraints, mainly but not exclusively automotive systems. These systems usually have high requirements in terms of functional safety, but few resources in terms of computing power and memory.
An example of such a constraint is the reaction time of an airbag controller, which must be on the order of a few microseconds.
Given these constraints, the automotive industry has developed AUTOSAR, a configurable architecture [4]. It defines a basic software architecture consisting of several generic modules which implement all possible features.
These modules are usually defined as i) a core of basic functionality which can do everything, ii) configuration code which selects or refines the core behaviors, and iii) wrapper code which encapsulates the module functionality in APIs (see figure 1). The configuration code is usually generated at compile time and compiled along with the core code, but the specification also allows a post-compilation configuration which is passed to the core code by pointers.
This high level of configurability greatly increases the complexity of such systems; they usually require multiple modules and abstraction levels. It can also result in a lot of dead code: if the configuration is not perfectly tuned to the application's demands, unnecessary behaviors make it into the code and may be executed. This comes at the cost of decreased performance and a greater memory footprint in terms of stack size, ROM and RAM usage. The consistency of the configuration must also be checked in order to be sure that the driver cannot behave in an unspecified way.
Even with consistency checks, there is no certain method to ensure that all behaviors which make their way into the final binary will actually be used by the application. Meanwhile, automotive standards are evolving quickly and safety constraints are becoming stricter.
Driven by the industrial need for more formalism and verification, we have developed a synthesis approach based on a formal model of the system. By using a formal model, we can apply well-developed model-checking techniques to verify safety constraints on the model and on the generated code.
This methodology is more application-specific than existing conventional drivers: it reduces the number of abstraction layers between the application and the driver and generates exactly the necessary and sufficient behavior, producing compact code.
**Related works** Some work has already been done in driver synthesis.
The Devil language [5] is a Domain Specific Language (DSL) targeted at the description of basic communication protocols with a device. Devil comes with tools to check the consistency of such models. However, being a low-level DSL, it focuses on the interface between the device and the device driver; the driver itself is still developed in a classical way.
Wang and Malik [10] propose another model which makes it possible to generate full drivers and to check some properties on the model. While the approach is interesting, it targets UNIX-like systems and respects the traditional driver model for compatibility reasons.
The Termite tool [9] uses a generic approach to driver synthesis, by specifying a driver in three different specifications:
- a device-class specification, which defines the messages used internally for a class of device drivers;
- a device specification, which defines the access protocol with the device;
- an OS specification, defining the communication protocols between the driver and the Operating System.
These three separate specifications allow reusability and exchangeability of devices and operating systems, since the device and OS specifications depend only on the device-class one. However, Termite-generated drivers work only in the context of a special framework which simplifies the internal structure of the driver. For example, all events going in and out of the driver (API calls, hardware interrupts, etc.) are serialized and handled sequentially by several handlers. This serialization behaves nicely in the context of UNIX drivers for desktop use, because these systems have enough computing power to handle all events in a reasonable time; but embedded systems have very limited computing power and memory, so this additional memory and computing cost cannot be afforded.
**Our contribution** We propose a new approach to device driver synthesis using an untimed reachability game on a formal model of the device, controlled by the application. The application-specific information this requires is often unavailable until runtime, but in the case of critical embedded systems it is known at compile time.
By introducing more information from the application, it is possible to reduce the complexity of the exposed API, and thus the number of errors that can be made. For instance, instead of having to initialize an analog-to-digital converter, set up the conversion settings and start the conversion (the typical usage of a conventional API), the application only states its current objective, such as sampling a speed value or sending a temperature message through the network. In this context, the driver performs such initializations and configuration automatically, depending on the current objective.
This makes it possible to generate more application-specific drivers and limits the need for abstraction layers, since the driver API is exposed directly to the application.
**Outline of the paper** This paper is organized as follows: first we present the underlying model, which supports the methodology presented in section 3. The methodology is illustrated on a simple example in section 4. Finally, concluding remarks and considerations about future work are presented in section 5.
2 Definitions
This methodology relies on a model derived from Labeled Transitions Systems (LTS), in which transitions can have guards. In order to define this model formally, let us define some common terms beforehand.
Let $\mathbb{N}$ be the set of natural numbers. For a finite set $E$, we denote by $2^E$ the set of all its subsets. Let $\gamma_P$ be the propositional logic over the predicates $p \in P$, with formulas of the form
$$\varphi ::= p \mid \neg \varphi \mid \varphi \land \varphi, \quad p \in P.$$
For $A \subseteq P$, we define the semantics of such propositional logic:
- $A \models p$ iff $p \in A$;
- $A \models \neg \varphi$ iff $A \not\models \varphi$;
- $A \models \varphi \land \psi$ iff $A \models \varphi$ and $A \models \psi$.
For $g, g' \in \gamma_P$ we say that $g$ and $g'$ overlap if
$$\exists A \subseteq P, \text{ such that } A \models g \text{ and } A \models g'.$$
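As a concrete illustration (names and encoding are ours, not the paper's), guards can be represented as predicates over sets of atomic properties, with the overlap test implemented by enumerating all subsets of $P$. This enumeration is exponential in $|P|$ and only meant for small property sets:

```python
from itertools import chain, combinations

def powerset(props):
    """All subsets A of a finite set of atomic properties P."""
    s = list(props)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

# A guard is any predicate: frozenset of properties -> bool.
# Atomic guard, negation and conjunction mirror the grammar of gamma_P.
def atom(p):      return lambda A: p in A
def neg(g):       return lambda A: not g(A)
def conj(g1, g2): return lambda A: g1(A) and g2(A)

def overlap(g1, g2, props):
    """g1 and g2 overlap iff some A subset of props satisfies both."""
    return any(g1(A) and g2(A) for A in powerset(props))

# Example: a guard and its negation never overlap.
P = {"interrupt", "polling"}
assert not overlap(atom("interrupt"), neg(atom("interrupt")), P)
assert overlap(atom("interrupt"), atom("polling"), P)
```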
Definition 1 (Guarded labeled transition system) A guarded labeled transition system (GLTS) is the tuple $(Q, Q_0, A, P, E, l)$, where
- $Q$ is a set of states;
- $Q_0$ is a set of initial states;
- $A$ is a set of actions;
- $P$ is a set of atomic properties;
- $E \subseteq Q \times \gamma_P \times A \times Q$ is the set of edges between the states;
- $l : Q \to 2^P$ is a labeling function.
Deriving from the definition for standard labeled transition systems, we say that a GLTS $(Q, Q_0, A, P, E, l)$ is deterministic if:
- $|Q_0| = 1$; we denote the unique initial state by $q_0$;
- if $(q, g', a, q') \in E$ and $(q, g'', a, q'') \in E$ with $q' \neq q''$, then $g'$ and $g''$ do not overlap.
In the sequel we will only consider deterministic GLTS.
We also define an asynchronous product operation on networks of GLTS. For the following definition, we consider $n$ GLTS $S_i = (Q_i, q_0^i, A_i, P_i, E_i, l_i)$, $i \in [0, n]$, where $\forall i, j \in [0, n], i \neq j \implies A_i \cap A_j = \emptyset$. We denote $A_i^* = A_i \cup \{\bullet\}$, where $\bullet \notin A_i$.
Definition 2 (Asynchronous product of GLTS) The asynchronous product $S = S_0 \times \cdots \times S_n$ of the $n$ GLTS is the GLTS $(Q, q_0, A, P, E, l)$ where:
- $Q = Q_0 \times \cdots \times Q_n$,
- $q_0 = (q_0^0, \ldots, q_0^n)$,
- $A = A_0 \cup \cdots \cup A_n$
- $P = P_0 \cup \cdots \cup P_n$
- $((q_0, \ldots, q_n), g, a, (q'_0, \ldots, q'_n)) \in E$ iff there exists $j \in [0, n]$ such that $a \in A_j$, $(q_j, g, a, q'_j) \in E_j$, and $q'_i = q_i$ for all $i \neq j$;
- for $q = (q_0, \ldots, q_n) \in Q$, $l(q) = l_0(q_0) \cup \cdots \cup l_n(q_n)$
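Definition 2 can be rendered executably in a few lines. This sketch assumes each GLTS is encoded as a dict with its states, initial state, edge tuples $(q, g, a, q')$ with opaque guards, and a labeling map; the encoding choices are ours, not the paper's:

```python
from itertools import product as cartesian

def async_product(components):
    """Asynchronous product of a network of GLTS: since the action
    alphabets are pairwise disjoint, exactly one component moves on
    each action while all the others stay in place. Labels of a
    product state are the union of the component labels."""
    states = list(cartesian(*[c["states"] for c in components]))
    init = tuple(c["init"] for c in components)
    edges = []
    for q in states:
        for j, c in enumerate(components):
            for (src, g, a, dst) in c["edges"]:
                if src == q[j]:
                    q2 = list(q)
                    q2[j] = dst  # only component j moves
                    edges.append((q, g, a, tuple(q2)))
    labels = {q: frozenset().union(*(c["labels"][q[j]]
                                     for j, c in enumerate(components)))
              for q in states}
    return {"states": states, "init": init,
            "edges": edges, "labels": labels}
```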
Definition 3 (Semantics of a GLTS) The behavioral semantics of a GLTS $(Q, q_0, A, P, E, l)$ is the LTS $(Q, q_0, A, \rightarrow)$ where, for all $(q, g, a, q') \in E$, $(q, a, q') \in \rightarrow \iff l(q) \models g$.
3 Methodology
The synthesized driver is derived from two separate models: one of the device, which models the internal behavior of the device, and one of the application settings, which models how the device will be used by the application.
In order to synthesize such drivers, we propose the following workflow:
1. Model the hardware device, with some synchronization primitives. This modeling does not require any knowledge about the application, thus can be done once for a particular device.
2. Model the application configurations, or modes of operation the application needs the device to be in.
3. Define driver objectives;
4. Generate the configured device model and compute strategies;
5. Translate abstract actions into actual code.
The application settings model is the representation of how the device is used by the application. In this model, several functional modes are defined, each mode representing a set of values of the configuration registers.
3.1 Modeling the components
Modeling the device The first step, the device model, captures only the device behavior at register level: writing to control and configuration registers, reading from data and status registers, and sending interrupts to the driver.
It is the only reusable model between different applications, and can be part of some sort of model database. It is based only on the device datasheet. As part of the synthesis methodology, a device modeling methodology is proposed.
First, the set of all register fields is partitioned into three sets, depending on the effects a register read/write has on the device:
- the Control Fields. Writing in a control field has an immediate effect on the device’s behavior.
- the Configuration Fields. Writing to a configuration field has no immediate effect on the device, but alters its future behavior. For example, an input channel selection field or a device mode field is considered part of the configuration space.
- the Data Fields, on which reading or writing to has no effect.
For each control action, one or more abstract actions are added to the alphabet $A$ of the model. For example, from a Power Down boolean register field, two actions can be defined: PowerUp and PowerDown. These are called the controllable actions $A^C$. The uncontrollable actions $A^U$ of the device, such as internal actions or hardware interrupts, are modeled as well.
For the configurations, a set of atomic properties $P$ is defined such that each atomic property corresponds to a valuation of a register field. The properties are used as guards in the device model, but are attached to states of the application settings model.
Sometimes the datasheet imposes constraints on changing certain register fields in certain states, or it simply does not make sense to allow the modification of some registers while the device is busy. These restrictions are modeled by adding new properties to the states in which changing a register field is allowed, together with generation rules for the application settings model.
With all these guidelines, it is possible to produce a device model which corresponds to the behavior described in the device datasheet. The use of additional properties is allowed, to mark particular states of the device.
In a nutshell, the device model exposes to the application designer:
- a set of configuration properties $P_{cfg}$. These properties can be further grouped into sets of semantically related properties. For instance, a 1-bit interrupt mask can be split into two properties interrupt and polling.
- a set of synchronization rules of the form $(P_{sync}, g)$, where $P_{sync} \subseteq P$ and $g \in \gamma_P$: if one of the properties in $P_{sync}$ is used, then the corresponding GLTS must add $g$ as a guard to every one of its transitions. For instance, one might define a rule $(\{interrupt, polling\}, interruptSync)$.
- a set of additional informative properties $P_{info}$ about the state of the device, such as PowerDown, Idle, Busy, Waiting, etc.
Modeling the application settings Once the model of the device is defined, the application designer has to define how it will be used by the application. The application settings are modeled by a global mode which is split into several independent sub-modes. These sub-modes can represent runtime behavior — e.g. Low-Power, Sleep — or statically defined properties — e.g. Channel groups in Analog-Digital Conversion, Types of frames in CAN/LIN/SPI communication, etc.
Formally, the global mode $M$ is divided into several sub-modes: $M = \{m_1, \ldots, m_n\}$. Each sub-mode $m_i$ has a set of possible values. Each value of a sub-mode is mapped to a set of atomic properties among those exposed by the device model, representing the required configuration of the device in that sub-mode.
Even though it is possible to split valuations of a register field into several properties (for example, a 1-bit interrupt mask can be split into two properties interrupt and polling), there is an implicit restriction that only one of these properties can tag a sub-mode. Adding both properties to a sub-mode would render the sub-mode inaccessible, because of the way deterministic GLTS are defined.
These sub-modes are independent in the sense that they have no influence on each other, but they are linked by the synchronization constraints of the device.
Once defined, each sub-mode $m_i$ is transformed into a GLTS $(Q_{m_i}, q_{0_{m_i}}, A_{m_i}, P_{m_i}, E_{m_i}, l_{m_i})$ by the following method:
1. each value of the sub-mode is mapped to a state in $Q_{m_i}$;
2. the properties tagging each sub-mode value tag the corresponding state;
3. a default reset state $q_{0m_i}$ with no properties attached to it is added;
4. the alphabet of actions $A_{m_i}$ is derived from the state names, e.g. toLowPower, toReset, etc.
5. the transitions from and to every state are generated with respect to the synchronization rules, by adding a conjunction of all the required guards to every transition: for every synchronization rule $(P_{sync}^k, g_k)$ and every transition $(q_1, g, a, q_2) \in E_{m_i}$,
$$l_{m_i}(q_1) \cap P_{sync}^k \neq \emptyset \implies \exists g' \text{ such that } g = g' \land g_k.$$
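The generation method above can be sketched in a few lines of executable code. This is a minimal rendering under a simplification drawn from the exposed-interface description: if a sub-mode uses any property of a synchronization rule's $P_{sync}$, every transition of its GLTS carries that rule's guard. Names and the dict-based encoding are ours, not the paper's:

```python
def submode_glts(values, tags, sync_rules):
    """Build the GLTS of one sub-mode: one state per mode value plus a
    default reset state, an action 'toX' into each state, and the
    guards required by the synchronization rules."""
    states = ["reset"] + list(values)
    labels = {"reset": frozenset()}
    labels.update({v: frozenset(tags[v]) for v in values})
    used = frozenset().union(*labels.values())
    # if the sub-mode uses any property of a rule's P_sync,
    # every transition carries that rule's guard g
    guards = tuple(g for (p_sync, g) in sync_rules if used & p_sync)
    edges = [(src, guards, "to" + dst.capitalize(), dst)
             for src in states for dst in states if src != dst]
    return {"states": states, "init": "reset",
            "edges": edges, "labels": labels}
```

Applied to the conversion sub-mode of section 4 (values G1 and G2, rule $(\{Os, Cont\}, convCfg)$), this yields three fully connected states whose transitions all carry the convCfg guard.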
One of the first actions taken by the driver is to configure the device into a defined mode before doing any work.
The global mode GLTS $M$ is obtained by computing the asynchronous product of all sub-modes.
### 3.2 Driver generation
Once the model of the device and its configuration are defined, well-developed control and game theory techniques [3, 8, 6] are used to generate the driver model. Although the problem defined by the GLTS model could be reduced to a shortest-path problem in a graph, this methodology uses a more generic, model-agnostic approach which can easily be extended to timed models by simply changing the underlying model and game rules. But first let us define the outline of a driver.
**Anatomy of the driver** In this model, a driver consists of an arena $G_a$ and a set of objectives $O$. An objective represents a set of atomic properties which the configured device is to satisfy, for instance the power-down or idle state, a busy state while converting a certain analog input, or the end of the sending of a given frame over the network.
For each objective, the driver has a strategy, i.e. a sequence of actions to take in order to get from the current state to an objective state. For this model it is sufficient to consider only memoryless strategies, i.e. strategies in which the actions to take are dictated only by the current state, and not the sequence of states which led to the current one. These strategies are computed with respect to the model of the configured system which represents the possible behaviors of the device and any mode change in the application settings.
At any point in time, the driver has only one active objective, and is taking actions to fulfill this objective. When it is reached, the driver does not take any action until the objective is changed.
More formally, given a model $D$ of the device, we want to generate a controller $C$ — the driver — such that the system $D|C$ composed of the device controlled by the driver satisfies a given property $\varphi$, expressed by the LTL property for all paths:
$$\varphi = \Diamond A, \text{with } A \subseteq P.$$
**Problem 4 (Control problem)** Given $D$ and $\varphi$, is there any driver (or controller) $C$ such that $D|C \models \varphi$?
#### Generating the game arena and the game
The model of all possible behaviors of the configured device — including changing sub modes — is called the arena $G_a$. It is obtained from the semantics of the asynchronous product $\Pi_{async}$ of the device model $D$ and the driver modes $M$. The product $\Pi_{async}$ is computed as described in definition 2.
Taking the semantics of this product, we obtain the following LTS:
$$(Q^a, q_0^a, A_a, \rightarrow_a).$$
The game arena is derived from this LTS by partitioning the alphabet $A_a$ of actions into $A_a^C$ and $A_a^U$. The alphabet of controllable actions groups the controllable actions of the device and all the sub-mode change actions: $A_a^C = A^C \cup A_M$, where $A_M$ denotes the mode-change actions. The alphabet of uncontrollable actions contains the uncontrollable actions of the device only: $A_a^U = A^U$.
Assuming the initial device model is correctly defined, taking the semantics of the product ensures that any non-specified behavior is not accessible.
The problem reduces to an untimed two-player reachability game between the driver, performing the controllable actions of the device and all sub-mode switches, and the device, performing its uncontrollable actions. Several algorithms exist to compute a strategy which solves this game.
One of the most widely used is the algorithm defined in [6], based on the controllable-predecessor operator.
Intuitively, this method computes iteratively the set of states for which a strategy exists — these are called winning states — starting from the set of goal states. At each iteration, the algorithm adds to the set of winning states all its controllable predecessors.
A controllable predecessor of a set $S$ of states is a state from which there exists at least one controllable action $a^c \in A^C$ leading to $S$, and from which all uncontrollable actions $a^u \in A^U$ also lead to $S$.
More formally, the controllable predecessor set $\pi(S)$ of $S \subseteq Q$ is defined as follows:
$$\forall q \in Q \setminus S,\ q \in \pi(S) \text{ if and only if}$$
$$\exists q' \in S,\ \exists a \in A^C \text{ s.t. } (q, a, q') \in \rightarrow\ \text{ and}$$
$$\forall q'' \in Q,\ \forall b \in A^U,\ (q, b, q'') \in \rightarrow \implies q'' \in S. \quad (1)$$
When computing the controllable predecessors, the algorithm deduces a strategy to execute in order to reach the goal states.
This algorithm ends when it has reached a fixpoint, i.e. when it cannot add any new state to the winning states. The remaining states which could not be added are the losing states: in these states there is no action the driver can take to reach a winning state, whatever the device does.
More formally, the algorithm is as follows:
```plaintext
Win_0 \leftarrow \text{Goal}
i \leftarrow 1
repeat
    Win_i \leftarrow Win_{i-1} \cup \pi(Win_{i-1})
    i \leftarrow i + 1
until Win_i = Win_{i-1};
```
**Algorithm 1 (Computing the winning states)**
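Algorithm 1 can be made concrete with a small executable sketch. This assumes the arena's LTS is given as a list of (state, action, state') triples; all names are illustrative, not from the paper:

```python
def winning_states(states, trans, ctrl, unctrl, goal):
    """Iterate the controllable-predecessor operator (algorithm 1):
    starting from the goal states, repeatedly add every state that has
    a controllable move into the winning set while all of its
    uncontrollable moves stay inside it. Also records a memoryless
    strategy for the newly added states."""
    win = set(goal)
    strategy = {}  # winning non-goal state -> controllable action to take
    changed = True
    while changed:
        changed = False
        for q in states:
            if q in win:
                continue
            # all uncontrollable successors must already be winning
            safe = all(q2 in win for (q1, a, q2) in trans
                       if q1 == q and a in unctrl)
            # and at least one controllable move must enter the set
            moves = [a for (q1, a, q2) in trans
                     if q1 == q and a in ctrl and q2 in win]
            if safe and moves:
                win.add(q)
                strategy[q] = moves[0]
                changed = True
    return win, strategy
```

States outside the returned set are the losing states; for them the strategy map has no entry.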
Future work will lift part of the constraints, allowing the driver to wait for an uncontrollable action because it will happen eventually, whereas the current hypothesis allows the device to withhold the interrupt and lock the driver indefinitely.
From the computation it is possible to derive a memoryless strategy for each objective: each state is either a goal state (the driver has nothing to do), a losing state (the driver cannot do anything and may fall back into some error recovery mode), or a state in which the driver has a controllable action to take in order to reach one of the goal states.
4 Example
In this section, we will apply the methodology to a simple example. Let us consider a simple and generic Analog to Digital Converter. This device is part of almost all micro-controllers, and its role is to sample analog signals and convert them into digital values. Usually, a single ADC has several input channels. It can sample and convert its inputs either sequentially or in parallel.
The example ADC has the following features:
- two different clock modes, one half-speed and one full-speed;
- a power-down mode, which is the only mode in which the clock configuration can be changed;
- multiple input channels, converted sequentially in a conversion chain;
- the conversion of each of the channels can be enabled or disabled, while the device is idle or shutdown;
- two conversion modes: oneshot, in which only one conversion chain is performed, and continuous, in which conversion chains are performed indefinitely until the user stops the conversion (the last chain still ends the normal way);
- the device triggers an End Of Conversion (eoc) interrupt at the end of each channel conversion, and an End of Chain (ech) interrupt at the end of a chain.
**Modeling the device** For this high-level model, the granularity is set at the chain-conversion level, so the individual channel conversions are abstracted away.
From the specification, the following alphabet of actions is derived: abort, ech, sleep, start, stop and wakeup.
From the register description, the following properties are defined:
- Clock configuration: clkFull and clkHalf, for the two values of the speed, and clkCfg for the synchronization constraints.
- Conversion configuration: Os, and Cont, for the oneshot/continuous setting, and convCfg for the synchronization constraints.
- Informative properties: Idle, poweroff and busy.
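The ingredients derived so far can be collected as plain data, a direct transcription of the two lists above (the Python encoding is ours; the state and edge structure itself is in figure 3):

```python
# Alphabet of the example ADC model, partitioned into controllable
# actions (taken by the driver) and uncontrollable ones (the device).
ADC_ACTIONS = {
    "controllable": {"abort", "sleep", "start", "stop", "wakeup"},
    "uncontrollable": {"ech"},  # end-of-chain interrupt
}

# Atomic properties derived from the register description.
ADC_PROPERTIES = {
    "clock": {"clkFull", "clkHalf"},   # synchronization property: clkCfg
    "conversion": {"Os", "Cont"},      # synchronization property: convCfg
    "informative": {"Idle", "poweroff", "busy"},
}
```

Keeping the alphabets disjoint is what later makes the game arena's partition into $A_a^C$ and $A_a^U$ well defined.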
The device model is straightforward, as presented in figure 3.
**Modeling the configuration** Once the device model is defined, we can define the application configuration, i.e. the driver modes. For this example, the application usage is as follows:
- The driver shall perform conversions fast, so only the clkFull setting will be used.
- The driver will convert two groups of signals: one is to be monitored continuously, with the Cont setting, while the other corresponds to on demand conversions, using the Os setting.
These modes are then translated into GLTS, following the method defined in section 3. For this example, the global driver mode is divided into two sub-modes: the conversion sub-mode and the clock sub-mode. Here only the conversion sub-mode is detailed.
First, three states are defined: reset, G1 and G2. G1 is tagged with clkFull and Os, while G2 is tagged with clkFull and Cont. Since these states involve the conversion properties, all the transitions must have convCfg in their guard.
The resulting automaton is presented in figure 4. Note that all these transitions are controllable for the driver, since they represent changes of its internal mode.
**Defining driver objectives** For this example, the application needs to perform two types of conversions: one for the group G1 and one for the group G2. We also consider a low-power mode of the driver, in which the device is switched off. These two conversion groups and the low-power mode are then translated into three driver objectives:
1. Go to a state labeled by poweroff
2. Go to a state labeled by G1 and busy
3. Go to a state labeled by G2 and busy
**Generating the arena and computing strategies** Once all the components of the configured system are defined in terms of GLTS, the arena of the game is generated. First,
Table 1. Computed strategies on the arena
| State | Objective 1 | Objective 2 | Objective 3 |
|-------|-------------|-------------|-------------|
| 0 | Win | Take toClkFull | Take toClkFull |
| 1 | Win | Take wakeup | Take wakeup |
| 2 | Take sleep | Take toG1 | Take toG2 |
| 3 | Win | Take toClkFull | Take toClkFull |
| 4 | Win | Take wakeup | Take wakeup |
| 5 | Take sleep | Take start | Take toG2 |
| 6 | Take abort | Win | Take abort |
| 7 | Win | Take toClkFull | Take toClkFull |
| 8 | Win | Take wakeup | Take wakeup |
| 9 | Take sleep | Take toG2 | Take start |
| 10 | Take abort | Take abort | Win |
| 11 | Lose | Lose | Win |
**Figure 4. Conversion sub-mode GLTS.** This GLTS is generated with 3 modes: One Shot mode (1), Continuous mode (2) and the default reset mode.
all the models are composed into an asynchronous product.
The semantics of the product is shown in figure 5. In any state, the driver can take the controllable actions, represented with solid lines; the uncontrollable actions are represented with dashed lines.
This product model is then processed with the driver objectives in order to generate adequate strategies. The computed strategies for all the objectives are presented in table 1.
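To illustrate how a synthesized driver uses such a table, here is a sketch of the memoryless dispatch for objective 2 (convert group G1), transcribed from table 1. The function name and dict encoding are ours; state numbers refer to the generated arena:

```python
# Strategy for objective 2, transcribed from table 1.
# None marks a winning state (the objective is already fulfilled);
# a missing key (state 11) marks a losing state.
OBJECTIVE_G1 = {
    0: "toClkFull", 1: "wakeup", 2: "toG1", 3: "toClkFull",
    4: "wakeup", 5: "start", 6: None,
    7: "toClkFull", 8: "wakeup", 9: "toG2", 10: "abort",
}

def next_action(state, strategy):
    """Memoryless driver step: return the controllable action for the
    current arena state, or None when the objective is fulfilled.
    Raises KeyError for a losing state."""
    return strategy[state]
```

Because the strategy is memoryless, the generated driver needs no history: a lookup table indexed by the current arena state is enough, which keeps the memory footprint small.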
These strategies are similar, except for one losing state for the first two objectives. This is due to the untimed nature and the worst-case hypothesis of the game: a strategy wins if the driver can force a behavior whatever the device does. Here the untimed strategy does not work, because the Zeno behavior in which the device performs the ech action infinitely often prevents the driver from acting. This behavior is obviously unrealistic: in reality, the ech interrupt has a minimum period, so the driver has time to cancel an ongoing conversion, or the related interrupt can be masked.
Future improvements of this work will consider timed models of the device, which are more complex to analyze and to compute strategies for.
## 5 Conclusion
We have developed a generic methodology and supporting models for device driver synthesis. It is designed specifically for embedded real-time systems, with low complexity and small memory footprint, and can be adapted to more complex models.
It relies on a particularity of such systems, namely that they are completely defined at compile time. The amount of generated code can be reduced by performing optimisations at the model level, such as cutting states that are unreachable under the chosen strategies, thus producing only necessary and sufficient code.
**Figure 5.** Generated model of the arena

**References**
Statistics on Storage Management in a Lazy Functional Language Implementation
John Wild, Hugh Glaser, Pieter Hartel*
The aim of the FAST project is to provide an implementation of a lazy functional language on a transputer array. An important component of this system is a highly optimising compiler and runtime system for a single transputer. Efficient storage management is crucial in such an implementation, and this paper explores the demands placed on the storage manager by our compiled code. Statistics are presented illustrating the lifetime characteristics of cells, a breakdown of claimed cells by type, and other information which is of interest to the designer of a storage management system. We conclude that most cells are short lived, and that cell turnover is quite high. In addition, application cells are found to die much younger than cells of other types. We also examine the effect of vector apply cells on suspension forming activities. Finally we explore the possibility of using contextual information when predicting the lifetime of application and vector application cells, and suggest a way of using this information in a storage management policy.
1 INTRODUCTION
The FAST Project, funded by the Department of Trade and Industry of the UK, aims to provide an implementation of a lazy functional language on a transputer array. An important component of this system is a highly optimising compiler targeting a single transputer. Our collaborators at Imperial College, London, are developing a methodology for process distribution which is a variant of the process description language Caliban. Caliban requires an optimised implementation of Haskell as one of its components. At Southampton we are working on this component.
In functional programming systems efficient storage management is crucial because all memory allocation and reclamation is controlled by the system. The compiler we have developed makes strenuous efforts to minimise the storage management overhead, performing extensive analysis in order to provide
---
*Pieter Hartel is on sabbatical leave from the University of Amsterdam
information which allows the storage management overhead to be minimised. Examples of this analysis include: the derivation and use of strictness information to avoid building suspensions; a cheap eagerness analysis to avoid building suspensions for small computations; a representation analysis which maintains data in the runtime stack, rather than the heap; and analysis to allow the allocation of data statically at compile time. The storage management subsystem can benefit from the analysis in two ways: a significant reduction in the total number of cells claimed, and extra information about the use of a given cell when it is claimed. To exploit this extra information to the full, it is necessary to understand the patterns of cell usage in real, executing lazy functional programs.
In this paper we examine the behaviour of functional programs from the perspective of a storage management system associated with our optimising compiler. In the remainder of this introduction we explain the terminology and concepts we use to describe our results. The next section explains our measurement techniques and discusses the benchmark programs. We follow this with a discussion of our results during which we outline storage management techniques which exploit these findings. Finally, we summarise our findings and outline future research topics in this area.
1.1 Functional Languages
Pure lazy functional languages provide the programmer with a clean, powerful and higher level programming language which differs from conventional imperative languages in many respects:
- Programs are side effect free, so the system may choose a suitable evaluation strategy, provided that the termination properties of the program are preserved.
- Evaluation order and storage management decisions are made by the system, allowing components of the graph containing suspended computations to be concurrently evaluated, because all functions are side effect free.
- Functions may be specialised by partially applying them to an argument, yielding a new and less general function.
- Lazy semantics dictate that values are computed only when they are definitely required.
- Data structures may contain suspended computations which can generate the required amount of the data structure, allowing infinite data objects to be defined. Thus data structures used are flexible, with data space frequently allocated at run time.
We use many powerful compilation techniques in our compiler [4]. Our analysis greatly reduces our dependence on the storage manager and graph reducer to the extent that some smaller problems can execute without using either of the services. However, these analyses do not obviate the need for a storage manager and graph reduction subsystem, which closely interact.
Suspension building, along with data structure growth, place direct demands on the storage manager and reducer. The reducer activates suspensions, yielding values which may cause further interaction with the garbage collector. An activated suspension can cause recursive invocations of the reduction mechanism too, as structures grow by demanding activation of further suspensions.
With improved analysis comes a reduction in the total number of cell claims and more information about the use and probable lifetime of a claimed cell. To exploit this information a more sophisticated reducer and storage manager are required.
1.2 Storage Management
Where possible, our compiler allocates parts of data structures statically at compile time. Failing this, it attempts to store data on the runtime stack by passing untagged objects by value to functions and by holding some intermediate values in the function’s local variables. Any remaining storage requirements are satisfied by the storage manager using cells allocated from a global heap. The storage manager is also responsible for collecting garbage generated by the reduction process along with any unreferenced intermediate computations which have been built in the heap.
Garbage collection methods can be divided into two broad categories: scanning collectors and reference counting collectors [1].
In a scanning collector, only the live program graph is traversed in order to identify all reachable cells. The remaining cells can then be collected and reused. Normally many scans of the live program graph are required during program execution. A scanning collector must know where to find all the pointers into the heap. Assuming that this may be achieved cheaply, the cost of a single scan is proportional to the size of the live program graph at the time the scan is invoked. By keeping the live program graph as small as possible, and scanning the graph at the best possible moment, the costs of a scanning collector can be minimised. As a scanning collector visits all live cells during every scan, some scanning collectors also copy and compact the live data into a second space, resulting in improved locality in virtual memory systems and a de-fragmentation of the heap. A relatively large store allows us to increase the interval between scans, resulting in fewer scans.
In contrast, a reference counting collector traverses the dead program graph once and needs to maintain reference counts on all live cells. Therefore, when implementing a reference counting collector, the total number of cells dying during execution is of interest. This can be computed from the total cell claims
and the number of live cells after execution has completed. Also of interest is the cost of maintaining the reference counts on the live cells. This is dependent on the ability of the compiler to reduce the number of times cell reference counts need to be updated.
Reference counts are usually maintained in the cell, necessitating at least a read operation when a pointer to a cell is destroyed. In a virtual memory environment, this causes visits to dead cells resulting in undesirable paging activity. Reference counting allows dead cells to be re-used immediately, and avoids pauses associated with the graph traversal phase of scanning collectors.
Earlier work [3] suggests that in programs with a large number of short lived cells, and in systems where the live graph size is less than half the size of the main heap, scanning collector costs are favourable. Of great interest, therefore, is the size of the live program graph, and also the lifetime characteristics of the cells claimed.
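As a concrete illustration of the reference-count maintenance cost discussed above, here is a minimal sketch in C. The names (`rc_cell`, `rc_inc`, `rc_dec`) and the binary-cell layout are our own illustration, not the FAST runtime's code:

```c
#include <stdlib.h>

/* Hypothetical reference counting on binary heap cells. A cell is
   reclaimed the moment its count drops to zero, so dead cells are
   reused immediately, at the price of keeping a count in every
   live cell and touching it on each pointer copy or destruction. */
typedef struct rc_cell {
    int refcount;
    struct rc_cell *hd, *tl;   /* child pointers, may be NULL */
} rc_cell;

rc_cell *rc_new(rc_cell *hd, rc_cell *tl) {
    rc_cell *c = malloc(sizeof *c);
    c->refcount = 1;           /* the creator holds one reference */
    c->hd = hd;
    c->tl = tl;
    return c;
}

void rc_inc(rc_cell *c) { if (c) c->refcount++; }

/* Decrementing to zero frees the cell and recursively releases its
   children -- this traversal of the dead graph is exactly the cost
   the text attributes to reference counting collectors. */
void rc_dec(rc_cell *c) {
    if (c && --c->refcount == 0) {
        rc_dec(c->hd);
        rc_dec(c->tl);
        free(c);
    }
}
```

Note that, unlike a scanning collector, the work done here is proportional to the number of cells that die, not to the size of the live graph.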
### 1.3 Graph Reduction
Graph reduction is the basic mechanism for evaluating functional programs. Evaluation carried out solely by this method incurs a large number of cell claims, at significant runtime cost. Our approach aims to avoid graph reduction altogether, making direct function calls throughout. Only in instances where it is not possible for the compiler to do this do we need our graph reduction machinery. This consists of a fixed set of graph reduction operations which, in conjunction with the CONS primitive, form a set of operations through which all cell claims are made. This interface is beneficial when studying the storage manager because cells allocated through a particular primitive are used in specific ways.
- CONS(hd,tl) - returns a pointer to a cell with contents \textit{hd} and \textit{tl}
- TAG(val) - flags \textit{val} as reduced.
- VAP(fun,a1 ...) - returns an application of \textit{fun} to \textit{a1} ...
- BIND(papp, arg) - applies partial application \textit{papp} to \textit{arg}
- REDUCE(susp) - evaluates suspension \textit{susp} to Head Normal Form (HNF).
- UPDATE(root, val) - overwrites suspension \textit{root} with a value, \textit{val}.
CONS is used to form structures which hold the program answer. Because some data structures may contain suspensions or evaluated values, tags are required so that a suspension can be recognised and activated when its value is demanded. CONS returns a pointer to a cons cell which in turn contains pointers to the two arguments.
When a value such as a number is placed in a data structure in a context where a suspension might be found, the data item is flagged using TAG, indicating that no further reduction is required. TAG takes any built in data type, such as a CONS cell, number or character, and tags it indicating that it is a HNF.
Total applications are formed using a combination of VAP and/or BIND primitives. VAP is only used when it is known that the first argument is a function descriptor cell.
A partial application may be formed from calls to BIND, or from a call to VAP with insufficient arguments to fully parameterise the function argument fun. BIND takes a partial application (or function) papp, and applies it to argument arg, returning either a partial or total application. If the argument to BIND is a partial application missing only one argument, then a total application is returned.
Once a total or partial application has been formed using BIND or VAP, it may be passed to a function, or embedded in a data structure by passing it to the CONS primitive. If and when a value is required, a total application (or suspension) can be evaluated by the REDUCE primitive. The compiler generates code which uses VAP wherever possible. The VAP primitive interface is such that the storage manager may be sure that the first argument is a function and not a partial application and that the only point of reference to this application will be from the top of the spine.
REDUCE takes a tagged object, typically the result of a BIND or VAP operation, and returns the object evaluated to HNF. Arguments to REDUCE that are already in HNF are returned unchanged. When a shared suspension is evaluated, REDUCE calls UPDATE to copy the value of the evaluated suspension over the root application node so that future accesses avoid re-evaluating the suspension. The first argument to UPDATE is the root node of a suspension, and the second is a value, which may be a partial application or a tagged base value. UPDATE copies this on top of the application node, destroying any pointers in that node.
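A minimal C sketch of how such a primitive interface might be represented; the cell layout, tag values and function names here are our assumptions, not the FAST runtime's actual code:

```c
#include <stdlib.h>

/* Illustrative cell layout; the kinds mirror the primitives in the
   text (CONS cells, tagged HNFs, binary AP cells). */
typedef enum { T_CONS, T_TAGGED, T_AP } cell_kind;

typedef struct cell {
    cell_kind kind;
    struct cell *hd, *tl;   /* CONS children, or AP fun and arg */
} cell;

static cell *alloc(cell_kind k, cell *hd, cell *tl) {
    cell *c = malloc(sizeof *c);
    c->kind = k; c->hd = hd; c->tl = tl;
    return c;
}

/* CONS(hd,tl): a structure cell whose fields may hold suspensions
   or evaluated values. */
cell *cons_cell(cell *hd, cell *tl) { return alloc(T_CONS, hd, tl); }

/* TAG(val): flag a value as already in head normal form, so that
   REDUCE can return it unchanged. */
cell *tag_cell(cell *val) { return alloc(T_TAGGED, val, NULL); }

/* BIND(papp,arg): extend a (partial) application by one argument,
   represented here as a binary AP cell. */
cell *bind_cell(cell *papp, cell *arg) { return alloc(T_AP, papp, arg); }

/* UPDATE(root,val): overwrite a suspension in place so a shared
   computation is evaluated at most once. */
void update_cell(cell *root, cell *val) { *root = *val; }
```

The in-place overwrite in `update_cell` is what lets the type of an AP or VAP cell change after birth, which is why the statistics gathered later log cells by their type at birth.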
As an example, we show the evaluation of the expression \( \text{sqr}(\text{min } 5 \ 3) \) using our graph reduction machinery. Assuming that this expression is used in a lazy context, the compiler generates the following code:
\[
\text{VAP(sqr\_fun, VAP(min\_fun, \ 5, \ 3))}
\]
The VAP primitive takes two or more arguments. The first argument is a function specialised for use in a lazy context, and the remaining arguments are arguments to the function to be placed in the suspension returned by VAP. When evaluated, the above code will generate the suspension illustrated in Figure 1.
When the value of this suspension is required, REDUCE is called to evaluate it to a HNF. Because the compile time analysis found that \( \text{sqr} \) is strict in its argument, a recursive call to REDUCE is made to evaluate the \( \text{min } 5 \ 3 \) suspension before entering the body of \( \text{sqr} \). The recursive call collects the arguments to \( \text{min} \)
and updates the root of the min suspension with the result of applying min to 5 and 3. This update orphans the two children of the application node marked with a # in Figure 2, rewriting it into a tagged cell containing 3. The update is necessary because the suspension to min may be shared, and by updating the suspension repeated evaluation of this min suspension can be avoided. If the child nodes were allocated from the heap and are unshared, they will become garbage. The recursive call to reduce is now complete, and control is returned to the top level reduce which will begin the evaluation of sqr.
Figure 2: Reduction steps when evaluating a suspension of $sqr(\text{min} \ 5 \ 3)$
Finally, $sqr \ 3$ is evaluated, and the last step of the evaluation is to update the suspension of $sqr$ with 9. Of the six nodes left unattached, min\_fun, sqr\_fun and the constants 3 and 5 from the min suspension are statically allocated. This leaves two nodes (the application nodes for min\_fun) which were application nodes at birth and one of which has been updated with the constant 3, to be reclaimed by the storage manager when free memory becomes scarce.
2 METHODOLOGY
The FAST runtime system was designed to make statistics collection as simple as possible. The runtime system is written in C, and the compiler generates an intermediate language which we translate to C for the purpose of gathering statistics. A selection of run time and compile time options allow various sets of statistics to be generated during execution. These are generated by a traversal of the live program graph after each cell claim. This graph is not connected, and so pointers in the stack frame must be examined to ensure that all live cells are reached.
New cells are timestamped on allocation, and any cells which have become garbage in between this and the previous cell claim are noted and logged by age and type at birth. We log the birth type because APP and VAP cells may change their type as the result of an update operation. After execution is complete, a final scan of the heap is made, and then the cell lifetime information accumulated during execution is output for analysis. From a storage management point of view, the demand for a new cell is a significant event, requiring an action from the storage manager. Because of this we use the number of cells claimed as the time axis on our plots and measure the lifetime of a cell in terms of the number of cell claims made while the cell is still referenced. A study of combinator systems [2] used the number of reduction steps and also the number of cells claimed as the time axis, and found that the results from both were similar, with the number of reduction steps approximately equal to the number of cell claims. Despite the fact that our compiled code claims many times fewer cells than combinator implementations, it is interesting to note that the number of times we call the REDUCE primitive is still roughly equal to the total number of cells claimed.
We consider the death of a cell to be less important than its birth, because a dead cell makes no demands on a scanning garbage collector.
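The timestamping scheme above can be sketched as follows, using the claim counter itself as the clock. All names are illustrative, and the real system derives deaths from scans of the live graph rather than tracking them directly:

```c
/* Lifetime is measured in cell claims, not wall-clock time. Each
   cell records the claim counter at birth; when it is later found
   dead, its age is the current counter minus the birth stamp. */
static unsigned long claims = 0;   /* global claim counter = "time" */

typedef struct {
    unsigned long birth;           /* claim count at allocation */
    int birth_type;                /* type at birth (APP/VAP may mutate) */
} stamp;

stamp claim_cell(int birth_type) {
    stamp s = { ++claims, birth_type };
    return s;
}

/* Age in claims at the moment the cell is observed to be garbage. */
unsigned long cell_age(stamp s) { return claims - s.birth; }
```

Because every cell claim advances the clock, a cell's age directly measures how much allocator activity it survived, which is the quantity plotted in the lifetime graphs.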
2.1 Benchmarks
The programs we have used to gather the statistics in this paper are:
1. paraff 5 enumerates in order of increasing size the first five paraffin molecules, similar to Turner's original program in KRC [8];
2. em script runs a simple script through a functional implementation of the UNIX text editor. The script reads 3 copies of a file into the editor buffer, causing the graph to expand rapidly early on in the computation [6];
3. lambda ( S K K ) evaluates to I on an implementation of the λ-K calculus. This is an interactive interpreter for the lambda calculus. The input is parsed and built into a tree structure, and then reduction rules are applied to yield the answer [5];
4. sched 7 calculates an optimum schedule of 7 parallel jobs with a branch and bound algorithm [10].
3 RESULTS
Throughout the compiler our analysis concentrates on avoiding unnecessary cell claims and the reduction of the live program graph size. As physical memory sizes continue to increase, scanning collectors become more attractive [4]. This motivated us to provide the following statistics to facilitate our goal of designing a more optimal scanning collector.
We first consider the use of vector application cells in our system, and in the light of these results we then provide a more detailed analysis of cell lifetime and graph size in a vector application implementation of the runtime system.
3.1 Suspensions and Partial Applications
The runtime system must be able to build suspensions, passing them to, and returning them from functions. The classic and simplest method of building suspensions is to use chains of binary application (AP) cells. These handle partial applications without added complications, and have been used successfully in pure graph reduction systems for some time [7]. The main drawback with this method is the large number of application cells, which need to be allocated and collected.
When generating code to build multiple argument total applications in a lazy context, the compiler can allocate a single large vector application node (VAP), thereby reducing the number of cells claimed.
A hierarchy of partially filled VAPs, or a hybrid VAP and AP scheme, is necessary to support partial applications. A partially filled VAP system would complicate the REDUCE function, which would need to test both for partially filled VAP cells and for a partial application in the fun field of the VAP cell. To avoid this we chose to implement all partial applications with binary AP cells.
The frequency information that we collect allows us to compute the relative costs of the two schemes (see Table 1). We found that, on average, VAP cells are used in 95% of all suspensions. In the remaining cases, the partial applications had an average of 4 arguments. Partially filled VAPs would therefore result in a negligible saving in terms of store use, and would complicate the REDUCE function.
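The saving is easy to quantify: building one n-argument total application claims n binary AP cells but only a single VAP cell. A back-of-envelope sketch, assuming 2-word AP cells and a VAP of one function field plus n argument fields (these sizes are our assumptions, not the paper's measurements):

```c
/* Cell and word counts for building one n-argument total
   application under the two schemes. */
unsigned ap_cells(unsigned n)  { return n; }          /* one AP cell per argument */
unsigned vap_cells(unsigned n) { (void)n; return 1; } /* a single vector cell */

unsigned ap_words(unsigned n)  { return 2 * n; }      /* fun/arg pair per AP cell */
unsigned vap_words(unsigned n) { return 1 + n; }      /* fun field + n args */
```

For the mean VAP arities reported in Table 1 (roughly 1.5 to 2 arguments) the per-suspension saving is modest, but it applies to the 95% of suspensions built as VAPs.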
3.2 Graph size
The graphs in Figure 3 illustrate the number of live heap cells throughout the execution of each of the benchmark programs. The Y-axis shows the number of live heap cells, plotted against the number of cells claimed (X-axis).
Figure 3: Live graph plots for the benchmark programs
<table>
<thead>
<tr>
<th rowspan="2">Program</th>
<th colspan="3">Vector Applications</th>
<th colspan="2">Partial Applications</th>
</tr>
<tr>
<th>private</th>
<th>shared</th>
<th>mean arity</th>
<th>shared</th>
<th>mean arity</th>
</tr>
</thead>
<tbody>
<tr>
<td>paraff</td>
<td>120</td>
<td>1698</td>
<td>1.5</td>
<td>133</td>
<td>4.8</td>
</tr>
<tr>
<td>em</td>
<td>130</td>
<td>40281</td>
<td>2.0</td>
<td>6</td>
<td>3</td>
</tr>
<tr>
<td>lambda</td>
<td>136</td>
<td>2720</td>
<td>1.6</td>
<td>142</td>
<td>1.7</td>
</tr>
<tr>
<td>sched</td>
<td>1386</td>
<td>4908</td>
<td>1.7</td>
<td>227</td>
<td>5.6</td>
</tr>
</tbody>
</table>
Table 1: Suspension forming activities in the example programs
The X-axis is effectively time (see the discussion in section 2), as each cell claim marks a significant event for the storage manager.
Initially, the live graph size is zero, then the graphs grow as intermediate results are demanded, computed and stored, becoming part of the final answer. When execution terminates, the live graph is the program result. Depending on the nature of the intermediate computations and the size and structure of the answer, the graph can take many shapes.
The paraff program gives a relatively straight line plot because it steadily computes more and more of the answer, without requiring significant amounts of heap based intermediate computation at any particular stage. The sched program uses a branch and bound algorithm, with relatively space hungry intermediate computations which compute a compact answer, causing the large peaks. However, with some programs, em (the editor) for example, the profile of the live graph can change significantly, reflecting changes in the input data. At the start of the edit, three copies of a file are read into the editor buffer. The three steps at the start of the em live graph plot reflect this activity.
The largest live graph size indicates the minimum store size in which a specific program would execute. This minimum store size, expressed as a percentage of the total number of claims, gives an indication of the character of the algorithm. A low percentage indicates a high turnover, with large numbers of cells used to compute intermediate values.
### 3.3 Breakdown of Claims and graph size by type
In our experiments we found that roughly a third of all cell claims were suspension related. The remaining cells claimed were used as CONS cells, numbers and tags. Within this remainder, the breakdown became more application dependent, with up to half of the total claims being CONS cells. This can be seen in Figure 4.
### 3.4 Cell lifetime
The graphs in Figure 5 plot the number of cells against the number of claims survived, giving a picture of cell lifetimes. Note that both axes are plotted using
logarithmic scales. From these plots it is apparent that the majority of cells die young. In the smallest benchmark, \textit{paraff}, 70% of claimed cells survived fewer than 30% of the total cell claims. In \textit{lambda}, which claims almost twice as many cells, 70% of claimed cells survived fewer than 13% of total claims, while in the two largest programs, \textit{em} and \textit{sched}, 70% of cells survived fewer than 2% of the total claims. This suggests that storage managers which exploit cell lifetime information are better suited to larger applications.
Experiments with earlier versions of the compiler indicated that many of the cell claims now avoided by the analysis were previously short lived cells. However, it is not obvious whether this trend will continue as more analyses are added to the compiler.
Compile time analysis also allows us to distinguish between two types of vector application cells, private and shared, at the point of allocation. Private VAP cells are VAP cells which are never placed in a data structure. The compiler maintains the integrity of private VAPs by ensuring that they are only passed as arguments to other functions when the context prevents sharing. Our results show that very few private VAP cells live to a great age. This is because the updated cells usually change to tag cells which survive for as long as the result of that computation is required in a data structure. In contrast, non-updated AP and VAP cells become garbage soon after the suspension is evaluated.
Figure 5: Cell lifetime plots for the benchmark programs.

A generational approach to garbage collection, \textit{Generation Scavenging} [9], has been shown to be an efficient storage management technique in systems with a large number of short lived cells. In the following sections we introduce a generational based collection scheme, and then we consider optimisations to the general model in two ways.
3.4.1 Generational Garbage Collection.
Generational collectors work by dividing the memory into four areas:
- New Space.
- Past Survivor Space.
- Future Survivor Space.
- Old Space.
Generational collectors reclaim both circular structures and variable sized cells without problem by performing a scavenge phase which copies and compacts all live cells from New and Past Survivor space into Future Survivor space using an iterative, breadth first algorithm.
All new cells are allocated from New Space. Old Space contains only long lived cells, and is collected after a long period of time, usually several hours.
Past Survivor space contains new cells which have survived many scavenges, but which are not considered old enough to be promoted to Old Space. A tenuring policy is employed to determine when to promote a cell from Past Survivor Space to Old Space. In some Object Oriented Programming Systems [9], the tenuring policy used consists of examining a counter which indicates the number of scavenges survived, and promoting the cell to Old space when this reaches a threshold value.
In order that a scavenge may proceed without disrupting cells in Old space, a set of pointers, called the Remembered Set, is maintained by the storage manager. This is a set of pointers which reference all objects in Old Space which contain pointers to New or Past Survivor space. To keep this set consistent, all writes to Old Space must be checked for pointers to objects in New and Past Survivor Space. Should an old cell become overwritten with a pointer, the cell must be added to the remembered set, so that a scavenge of New and Past Survivor Space can complete without examining Old Space.
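A sketch of such a write barrier, with the spaces modelled as address ranges; all names and sizes here are our assumptions, not the paper's implementation:

```c
#include <stdbool.h>
#include <stddef.h>

/* Every store into Old Space that creates a pointer into young
   (New or Past Survivor) space records the written slot in the
   remembered set, so a scavenge need not scan Old Space. */
enum { SPACE_CELLS = 64, REMEMBERED_MAX = 64 };

static void *old_space[SPACE_CELLS];
static void *young_space[SPACE_CELLS];   /* New + Past Survivor */

static void **remembered[REMEMBERED_MAX];
static size_t remembered_len = 0;

static bool in_old(void *p) {
    return p >= (void *)old_space && p < (void *)(old_space + SPACE_CELLS);
}
static bool in_young(void *p) {
    return p >= (void *)young_space && p < (void *)(young_space + SPACE_CELLS);
}

/* Barrier-checked store of `value` into `*slot`. This check on
   every write to Old Space is the cost the text refers to. */
void write_field(void **slot, void *value) {
    *slot = value;
    if (in_old((void *)slot) && in_young(value)
        && remembered_len < REMEMBERED_MAX)
        remembered[remembered_len++] = slot;
}
```

Only the old-to-young stores pay for a remembered-set insertion; young-space writes and old-to-old stores fall through after the range checks.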
3.4.2 Exploiting cell lifetime statistics in a tenuring policy.
A cell is promoted from Past Survivor Space to Old Space only after it has survived a sufficient number of scavenges. It is important that this promotion occurs at an optimal moment. If delayed unnecessarily, the cost of every scan of the New and Past Survivor Space will be increased. Promoting cells too early could result in Old Space overflowing, and an excessively large remembered set.
Too large a remembered set will cause dead fragments of graph to be retained in Past Survivor space, again slowing down the compaction phase of the collector.
Since cells of different types were found to exhibit differing lifetime profiles, the tenuring policy can be made sensitive to the current cell type to provide early promotion of cells which are most likely to live to a great age, e.g. CONS cells, and delayed promotion of the shorter lived VAP and AP cells. The shortest lived of all cells were found to be private VAP and AP cells, suggesting that a delayed tenuring policy could make use of privacy information too.
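A type-sensitive tenuring policy of the kind proposed above could be as simple as a per-type survival threshold; the threshold values below are illustrative, not measured:

```c
#include <stdbool.h>

/* A cell is promoted to Old Space once it has survived a per-type
   number of scavenges. The thresholds encode the lifetime
   statistics: CONS cells tend to live long (promote early), shared
   APs/VAPs are usually updated and then die (promote late), and
   private APs/VAPs die youngest (promote last, if at all). */
typedef enum { C_CONS, C_SHARED_APP, C_PRIVATE_APP } cell_type;

static const int tenure_threshold[] = {
    [C_CONS]        = 2,    /* long-lived: promote after 2 scavenges */
    [C_SHARED_APP]  = 6,    /* often rewritten by UPDATE: wait */
    [C_PRIVATE_APP] = 10,   /* almost always dies young: wait longest */
};

bool should_tenure(cell_type t, int scavenges_survived) {
    return scavenges_survived >= tenure_threshold[t];
}
```

The privacy information from the compiler enters the policy only through the birth type, so the check adds no per-scavenge overhead beyond the table lookup.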
3.4.3 A streamlined scavenger with type sensitive tenuring and fixed remembered set.
In a functional language system, only cells allocated as shared VAPs or APs can ever be updated (i.e., written to a second time). Only the UPDATE operation performs such rewriting. Out of all cells claimed, AP and VAP cells typically comprise 35-50% of all claims (Figure 4). A majority of the shared APs and VAPs (over 80% in all the benchmarks) will be written to, changing their type and becoming non-writable. This suggests an approach in which cells that currently hold shared VAP or AP contents are never promoted to Old Space. This avoids the problem of writes to Old Space, removing the need for a test on every write operation and the associated modification of the remembered set should the updated cell reside in Old Space.
4 CONCLUSIONS AND FUTURE WORK
We have presented results illustrating the behaviour of the storage system during the execution of a selection of functional programs compiled by our compiler. We examined the lifetime of the cells claimed, and subdivided the cell claims by type, comparing the lifetimes of the various types. We concluded that most cells are short lived, and that cell turnover is high (i.e., the number of cells claimed is much higher than the active cell population). In addition, application cells are found to die much younger than cells of other types. We also examined the effect of vector application cells on suspension forming activities, addressing the costs in terms of store utilisation, number of cell claims, allocation cost, and the effect on cell lifetime. The effects were beneficial overall.
Finally we discussed the possibility of using contextual information when predicting the lifetime of application and vector application cells, to provide the information for an effective storage management policy.
Work continues on the refinement of the run-time system to make better use of all the information provided by the compile-time analysis. We have started work on the automatic vectorisation of lists, which will give constant-time look-up for vectors represented as lists. A finer-grained breakdown of cell lifetime based on the type of tagged data objects, such as booleans, characters and numbers, has the potential to produce further refinements of the tenuring policy.
Chapter 17. Creating New Domains
Authors: Mike Chen
Christopher Hylands
Thomas M. Parks
Other Contributors: Wan-Teh Chang
Michael C. Williamson
17.1 Introduction
One of Ptolemy’s strengths is the ability to combine heterogeneous models of computation into one system. In Ptolemy, a model of computation corresponds to a Domain. The code for each Domain interacts with the Ptolemy kernel. This overview describes the general structure of the various classes that are used by a Domain in its interaction with the kernel. The Ptolemy User’s Manual has a more complete overview of this information.
A functional block, such as an adder or an FFT, is called a Star in Ptolemy terminology (see “Writing Stars for Simulation” on page 2-1 for more information). A collection of connected Stars forms a Galaxy (see Chapter 2 of the User’s Manual for more information). Ptolemy supports graphical hierarchy so that an entire Galaxy can be formed and used as a single function block icon. The Galaxy can then be connected to other Stars or Galaxies to create another Galaxy. Usually, all the Stars of a Galaxy are from the same Domain, but it is possible to connect Stars of one domain to a Galaxy of another domain using a WormHole.
A Universe is a complete executable system. A Universe can be either a single Galaxy or a collection of disconnected Galaxies. To run a Universe, each Galaxy also needs a Target. In simulation domains, a Target is essentially a collection of methods to compute a schedule and run the various Stars of a Galaxy. Some Domains have more than one possible scheduling algorithm available and the Target is used to select the desired scheduler. In code generation domains, a Target also computes a schedule and runs the individual Stars, but each Star only generates code to be executed later. Code generation Targets also handle compiling, loading, and running the generated code on the target architecture.
At a lower level are the connections between Blocks. A Block is a Star or Galaxy. Each Block has a number of input and output terminals which are attached to a Block through its PortHoles. A special PortHole, called a MultiPortHole, is used to make multiple connections but with only one terminal. Two Blocks are not directly connected through their PortHoles. Rather, their PortHoles are connected to an intermediary object called a Geodesic. In simulation domains, data is passed between PortHoles (through the Geodesic) using container objects called Particles. Ptolemy uses a system where Particles are used and recycled instead of created and deleted when needed. Particles are obtained from a production and storage class called a Plasma, which creates new Particles if there are no old ones to reuse. Particles that have completed their task are returned to the
Plasma, which may reissue them at a later request. Graphically, the Star to Star connection is depicted below:
The classes defined above provide most of the functionality necessary for a working domain. One additional class needed by all domains is a Scheduler to compute the order of execution of the Stars in the Galaxy.
Therefore, creating a new Ptolemy simulation domain will typically involve writing new classes for Stars, PortHoles, WormHoles, Targets, and Schedulers.
Creating a new domain is a fairly involved process, and not to be done lightly. The first thing that many users want to do when they see Ptolemy is create a new domain. However, it is often the case that the functionality they need is already in either the SDF or DE domains, or they can merely add a Target or Scheduler rather than an entire domain.
17.2 A closer look at the various classes
A simulation Domain can use the various classes mentioned above as they exist in the Ptolemy kernel or it can redefine them as needed. For example, in the SDF domain, the classes SDFStar, SDFPortHole, SDFScheduler, SDFDomain, SDFTarget, and SDFWormhole have all been defined. Most of those classes inherit much of their functionality from the corresponding kernel classes but the Domain creator is free to make major changes as well. The kernel Geodesic, Plasma, and Particle classes are used without modification, but other domains such as the CG domain have derived a subclass from Geodesic. The Domain creator needs to decide whether or not existing Ptolemy classes can be used without change, therefore it is a good idea to understand what functionality the kernel classes provide.
The following is a brief description of the various classes that either need to be defined or are used by a Domain. Note that we only provide a functional description of some of the major methods of each class and not a complete description of all methods.
17.2.1 Target
A Target is an object that manages the execution of the Stars in a Domain.
Major methods:
- **run()**
Called to execute a schedule.
- **wrapup()**
Called at the end of an execution to clean up.
- **setup()**
Called by initialize(), which is inherited from the Block class, a common base class for many of Ptolemy’s classes. Sets each Star to point to this Target and sets up the Scheduler.
Major objects contained are:
- **gal**
A pointer to the Galaxy being executed.
- **sched**
A pointer to the Scheduler that is being used.
For further information about Targets, see some of the existing domains.
17.2.2 Domain
Declares the type of various components of the Domain, like which type of WormHole, PortHole, Star, etc. is used by the Domain.
Major methods:
- **newWorm()**
Create a WormHole of the appropriate type for this Domain.
- **newFrom()**
Create an EventHorizon (an object that is used to interface to other Domains, used with WormHoles) that translates data from a Universal format to a Domain specific one.
- **newTo()**
Create an EventHorizon that translates data from a Domain specific format to a Universal one.
- **newNode()**
Returns a Geodesic of the appropriate type for this Domain.
17.2.3 Star
A Star is an object derived from class Block that implements an atomic function.
Major methods:
- **run()**
What to do to run the star.
For example, the DataFlowStar class (a parent class to many of the dataflow domain stars such as SDFStar and DDFStar) defines this function to make each input PortHole obtain Particles from the Geodesic, execute the go() method of each Star, and then have each output PortHole put its Particles into the Geodesic.
17.2.4 PortHole
PortHoles are data members of Stars and are where streams of Particles enter or leave the Stars. Each PortHole always handles Particles of one type, so two connected PortHoles need to decide which data type they will use if they are not the same. There is a base class called GenericPort which provides some basic methods that derived classes should redefine, as well as some data members commonly needed by all PortHole types.
Major methods:
- **isItInput()**
Return TRUE if the PortHole class is an input type.
- **isItOutput()**
Return TRUE if the PortHole class is an output type.
- **isItMulti()**
Return TRUE if the PortHole class is a MultiPortHole.
- **connect()**
Connect this PortHole to a Geodesic (create one if needed) and tell that Geodesic to connect itself to both this PortHole and the destination PortHole. Also provides the number of delays on this connection.
- **initialize()**
Initialize the PortHole. In the case of output PortHoles, this function will usually initialize the connected Geodesic as well. Resolve the type of Particles with the PortHole it is connected to.
- **receiveData()**
What to do to receive data from the Geodesic.
- **sendData()**
What to do to send data to the Geodesic.
- **putParticle()**
Put a particle from the buffer into the Geodesic.
- **getParticle()**
Get a particle from the Geodesic and put it into the buffer.
- **numXfer()**
Returns numberTokens, the number of Particles transferred per execution.
- **numTokens()**
Returns the number of Particles inside the Geodesic.
- **numInitDelays()**
Returns the number of initial delays on the Geodesic.
- **geo()**
Returns a pointer to the Geodesic this PortHole is connected to.
- **setDelay()**
Set the delay on the Geodesic.
Major data members:
- **myType**
Data type of particles in this porthole.
- **myGeodesic**
The Geodesic that this PortHole is connected to.
- **myPlasma**
A pointer to the Plasma used to request new Particles.
- **myBuffer**
Usually a CircularBuffer used to store incoming or outgoing Particles.
- **farSidePort**
The PortHole that we are connected to.
- **bufferSize**
The size of the Buffer.
- **numberTokens**
The number of Particles consumed or generated each time we access the Geodesic.
Note that PortHoles are generally separated into input PortHoles and output PortHoles. They aren’t designed to handle bidirectional traffic.
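As an illustration of how myBuffer behaves, here is a minimal circular-buffer sketch. It is a simplification (int payloads instead of Particle pointers, a hypothetical class name) and not the kernel's actual CircularBuffer:

```cpp
#include <vector>

// Minimal circular buffer in the spirit of a PortHole's myBuffer.
// Capacity plays the role of bufferSize; when full, the oldest
// entry is overwritten so the buffer always holds the most recent
// `capacity` particles.
class MiniCircularBuffer {
public:
    explicit MiniCircularBuffer(int capacity)
        : data(capacity), head(0), count(0) {}

    void put(int particle) {
        int tail = (head + count) % static_cast<int>(data.size());
        data[tail] = particle;
        if (count < static_cast<int>(data.size()))
            ++count;
        else
            head = (head + 1) % static_cast<int>(data.size());  // drop oldest
    }

    int get() {                       // caller must check size() first
        int p = data[head];
        head = (head + 1) % static_cast<int>(data.size());
        --count;
        return p;
    }

    int size() const { return count; }

private:
    std::vector<int> data;
    int head, count;
};
```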
17.2.5 Geodesic
Models a FIFO buffer (usually) between two PortHoles.
Major methods:
- **setSourcePort()** Set the source PortHole and the delay on this connection. A delay is usually implemented as an initial Particle in the Geodesic’s buffer, but this can be changed depending on the desired functionality.
- **setDestPort()** Set the destination PortHole.
- **disconnect()** Disconnect from the given PortHole.
- **setDelay()** Set the number of delays on this connection.
- **initialize()** Initialize the buffer in this Geodesic. This means either clear it or insert the number of initial Particles needed to match the number of delays on this connection (these Particles are taken from the source PortHole’s Plasma).
- **put()** Put a Particle into the buffer.
- **get()** Get a Particle from the buffer.
- **incCount()**, **decCount()** Used by a Scheduler to simulate an execution.
- **numInit()** Return the number of initial particles.
Major data members:
- **originatingPort** A pointer to the source PortHole.
- **destinationPort** A pointer to the destination PortHole.
- **pstack** The buffer, implemented as a ParticleStack.
- **sz** The number of Particles in the buffer.
- **numInitialParticles** The number of initial delays.
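The put/get/initialize behaviour described above can be sketched as a small FIFO. This is an illustration with int payloads and a hypothetical class name, not the kernel Geodesic:

```cpp
#include <deque>

// FIFO sketch of a Geodesic. initialize() turns each delay into one
// initial "zero" particle, mirroring how delays become initial tokens.
class MiniGeodesic {
public:
    void setDelay(int n) { numInitialParticles = n; }

    void initialize() {
        buffer.clear();
        for (int i = 0; i < numInitialParticles; ++i)
            buffer.push_back(0);      // one initial particle per delay
    }

    void put(int particle) { buffer.push_back(particle); }

    int get() {                        // caller must check size() first
        int p = buffer.front();
        buffer.pop_front();
        return p;
    }

    int size() const { return static_cast<int>(buffer.size()); }  // sz
    int numInit() const { return numInitialParticles; }

private:
    std::deque<int> buffer;            // plays the role of pstack
    int numInitialParticles = 0;
};
```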
17.2.6 Plasma
A Plasma is a container object for unused Particles. There is one global instance of a Plasma for each type of Particle defined in the kernel. This class is usually only used by the Domains and not changed by the authors of new Domains.
Major methods:
- **put()** Return an unused Particle to the Plasma.
- **get()** Get an unused Particle (or create one if needed).
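The recycling behaviour of put() and get() can be illustrated with a simple free-list pool. The names here are hypothetical and the real Plasma manages typed Particle objects:

```cpp
#include <memory>
#include <vector>

struct Particle { double value = 0.0; };   // stand-in for Ptolemy's hierarchy

// get() reuses a returned Particle when one is available and only
// allocates when the free list is empty, so steady-state execution
// performs no allocation at all.
class MiniPlasma {
public:
    Particle* get() {
        ++requests;
        if (freeList.empty()) {
            ++allocations;                     // no old Particle to reuse
            owned.push_back(std::make_unique<Particle>());
            return owned.back().get();
        }
        Particle* p = freeList.back();
        freeList.pop_back();
        return p;
    }

    void put(Particle* p) { freeList.push_back(p); }  // return for reuse

    int requests = 0;
    int allocations = 0;

private:
    std::vector<Particle*> freeList;
    std::vector<std::unique_ptr<Particle>> owned;     // retains ownership
};
```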
17.2.7 Particle
The various Particle types supported by Ptolemy. Currently, the types are **Float**, **Int**, **Complex**, **Fix**, and **Message**. The Message Particle is used to carry Messages (inside Envelopes) which can be almost anything. For example, the Matrix class is transferred using Message Particles. These classes are also only used as-is by the Domains and not redefined for new domains.
### 17.2.8 Scheduler
Sets up the execution by determining the order in which each Star of the Galaxy will fire. Execution is performed using two main methods -- setup() and run(). Schedulers can be timed or untimed, depending on the Domain’s model of execution. This class will usually be different for each domain, although some domains reuse the Scheduler of another domain, if the Scheduler is appropriate for the new domain’s model of computation.
**Major methods:**
- `setup()`: Checks the Stars in the Galaxy, initializes them, and creates a schedule.
- `run()`: Run the schedule computed in setup()
**Major data members**
- `myGalaxy`: The pointer to the Galaxy that the Scheduler is working on.
- `myTarget`: The pointer to the Target which is controlling the execution.
### 17.3 What happens when a Universe is run
Now that you have some idea of what classes exist in the Ptolemy kernel, this section will try to explain the flow of control when a Universe is run. By knowing this, you will get an idea of what additions or changes might be needed to get the functionality you desire and how the code of your new domain will fit in.
First off, a little more about the basics of Ptolemy classes. Almost every object class in Ptolemy is derived from the `NamedObj` class. This class simply provides support for a Name field, a longer Description field, and a pointer to a Parent Block. Also, the method `initialize()` is declared here to be purely virtual, so every object should have some kind of initialization function.
The `Block` class is derived from `NamedObj` and is the main base class for most actors in Ptolemy. It has I/O constructs like PortHoles and MultiPortHoles, state/parameter constructs like State, and defines execution methods such as setup(), run() and wrapup(). The Block also provides a virtual function to access an associated Scheduler.
A simulation universe is generally of type `DataFlowStar`. When a universe is run, the flow of control is as follows, using the SDF domain as an example:
```cpp
PTcl::dispatcher()
PTcl::run()
PTcl::computeSchedule()
Runnable::initTarget()
Block::initialize()
SDFTarget::setup()
Target::setup()
SDFScheduler::setup()
```
Notice at this point that we have called two domain-specific methods, namely SDFTarget::setup() and SDFScheduler::setup(). The Target can have a choice of more than one Scheduler, and in this case it called the default SDFScheduler. We continue here with a more detailed description of a very important function:
```
SDFScheduler::setup()
checkConnectivity() // Checks that the galaxy is
// properly connected.
prepareGalaxy() // Initializes the portHoles of each star and
// the geodesics that connect them.
checkStars() // Verifies that the type of the Stars are
// compatible with this Scheduler.
repetitions() // Solves the balance equations for the
// system and calculates how many times
// each star should be fired for
// one iteration (specific to dataflow).
computeSchedule() // Compute the actual schedule
adjustSampleRates() // Set the number of tokens transferred
// between EventHorizons if this schedule
// is for a WormHole.
```
The order of the various operations can be different for each scheduler. For example, a new domain may require that the PortHoles be initialized after the repetitions were calculated but before the schedule was computed. The domain writer may wish to define a new function prepareForScheduling() that would call the setup() function of each Star without initializing the Star’s PortHoles.
Expanding prepareGalaxy() in more detail:
```
SDFScheduler:: prepareGalaxy()
galaxy()->initialize() // Initialize the galaxy.
InterpGalaxy::initialize() // Causes the initialization of delays
// and the setup of bus widths.
Galaxy::initSubblocks() // Calls initialize() of each star.
DataFlowStar::initialize()// This is a general initialize.
// function for data flow stars.
// Your own Star class might
// redefine it. Sets the number
// of input Ports and clears
// some parameters.
Block::initialize() // Initializes the PortHoles and States
// of the Block/Star. Calls the user
// defined setup() function of each
// star after the portholes and
// geodesics have been initialized.
PortHole::initialize() // General PortHole initialization;
// again you can redefine it for a
// domain specific PortHole.
// Resolves the type of Particles
// to be sent. Allocates a
// buffer and a Plasma. Request
// empty Particles from the Plasma
// to initialize the buffer.
Geodesic::initialize() // General Geodesic initialization,
                           // called by output PortHole only.
                           // Clears the buffer and adds any
                           // initial Particles for delays.
```
After the schedule is set up and all the actors in the Universe have been initialized, the flow of control is as follows:
```
PTcl::run()
PTcl::computeSchedule()     // Described above.
PTcl::cont()
universe->setStopTime()     // Used to set the number of
                            // iterations to be run.
universe->run()
InterpUniverse::run()
Runnable::run()
target->run()
sched->run()
SDFScheduler::run()         // The domain specific Scheduler’s
                            // run() function.
```
Let’s look at what a typical scheduler does when it runs a star.
```
SDFScheduler::run()         // Checks if there has been an error
                            // in the last iteration. Calls
                            // runOnce() for each iteration.
runOnce()                   // Goes through each Star on the
                            // schedule (which is a list of Stars
                            // computed by setup() ) and calls
                            // star->run().
star->run()
DataFlowStar::run()         // The SDF domain uses the general
                            // DataFlowStar run() function.
                            // A new Domain might want to
                            // redefine this.
..Ports->receiveData()      // Calls receiveData() for each of
                            // the PortHoles for this Star.
                            // Output PortHoles would do nothing
                            // in this case but input PortHoles
                            // would get Particles from the
                            // Geodesic.
Star::run()
SimControl::doPreActions()  // Execute pre-actions for a star.
go()                        // Call the Star specific go() function
                            // that will process the input data
                            // and generate data to be put in the
                            // output PortHoles.
SimControl::doPostActions() // Execute post-actions for a star.
..Ports->sendData()         // Calls sendData() for each of the
                            // PortHoles for this Star.
                            // Input PortHoles would do nothing
                            // in this case but output PortHoles
                            // would put their Particles into
                            // the Geodesic and refill their
                            // buffers with empty Particles
                            // from the Plasma.
```
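The control flow above can be condensed into a toy run loop. The classes here are stand-ins for the real Ptolemy interfaces, used only to show the receiveData/go/sendData ordering per scheduled star:

```cpp
#include <string>
#include <vector>

// Stand-in for a Star that records each phase of its firing.
struct TraceStar {
    std::string name;
    std::vector<std::string>* log;
    void receiveData() { log->push_back(name + ".receiveData"); }
    void go()          { log->push_back(name + ".go"); }
    void sendData()    { log->push_back(name + ".sendData"); }
};

// Sketch of one scheduler iteration (the runOnce() idea): walk the
// precomputed schedule and fire each star in the dataflow pattern.
void runOnce(std::vector<TraceStar*>& schedule) {
    for (TraceStar* star : schedule) {
        star->receiveData();   // inputs pull Particles from their Geodesics
        star->go();            // star-specific computation
        star->sendData();      // outputs push Particles, refill from Plasma
    }
}
```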
17.4 Recipe for writing your own domain
This section describes some of the template files we have made so that you don’t have to start coding from scratch. We also discuss which classes, and which methods of those classes, a new domain must define.
17.4.1 Introduction
The first thing to do is to think through what you want this domain to do. You should have some idea of how your Stars will exchange data and what kind of Scheduler is needed. You should also understand the existing Ptolemy domains so that you can decide whether your domain can reuse some of the code that already exists. Also, read Chapter 1 so you understand the general classes in the Ptolemy kernel and how the domain methods interact.
17.4.2 Creating the files
The mkdom script at $PTOLEMY/bin/mkdom can be used to generate template files for a new domain. mkdom takes one argument, the name of the domain, which is case insensitive; mkdom converts whatever you pass to it as a domain name to upper and lower case internally. Here, we assume that you have set up a parallel development tree, as documented in chapter 1, or that you are working in the directory tree where Ptolemy was untar’d.
1. To use mkdom, create a directory with the name of your domain in the src/domains directory. In this example, we are creating a domain called yyy:
mkdir $PTOLEMY/src/domains/yyy
2. cd to that directory and then run mkdom:
cd $PTOLEMY/src/domains/yyy
$PTOLEMY/bin/mkdom yyy
17.4.3 Required classes and methods for a new domain
mkdom will create copies of key files in $PTOLEMY/src/domains/yyy/kernel and a Nop star in $PTOLEMY/src/domains/yyy/stars. The template files have various comments about which methods you need to redefine. The template files also define many functions for you automatically. If you aren’t clear as to how to define the methods in each class, it is best to look at the existing Ptolemy domains as examples.
YYYDomain.cc
This file will be set up for you automatically so that you shouldn’t need to modify much. The various methods here return WormHoles and EventHorizons, which should be defined in YYYWormhole. A node is usually a type of Geodesic that allows multiple connections, such as AutoForkNode. You can define your own YYYGeodesic or simply use the kernel’s AutoForkNode if that is suitable (this is what SDF does).
YYYWormhole.{h,cc}
Various methods to interface your new domain with others must be defined if you wish to use your domain with other domains.
However, if you don’t need to mix domains, then you may skip these files. Wormholes translate different notions of time or concurrency. Since some domains are timed (like DE) and others are not (like SDF), you must be able to convert from one to another.
YYYGeodesic.{h,cc}
Currently we set the Geodesic to be the kernel’s AutoForkNode. If the kernel’s Geodesic class offers all the functionality you need, then this doesn’t need to be changed. Otherwise try looking at some of the pre-existing domains for examples.
YYYPortHole.{h,cc}
Define input PortHole and output PortHole, as well as MultiPortHole, specific to your domain. The only required methods are generated for you, but you’ll likely want to define many more support methods. Look at the kernel PortHole, DFPortHole, and SDFPortHole for examples.
YYYStar.{h,cc} Domain-specific class definition. Again, all the required methods have been defined but you’ll want to add much more. Refer to Star, DataFlowStar, and SDFStar as examples.
YYYScheduler.{h,cc}
This is where much of the action goes. You’ll need to define the function setup(), run(), and setStopTime().
### 17.4.4 Building an object directory tree
Ptolemy can support multiple machine architectures from one source tree; the object files for each architecture go into $PTOLEMY/obj.$PTARCH directories. Currently, there are two ways to build the $PTOLEMY/obj.$PTARCH directory tree: MAKEARCH and mkPtolemyTree. To build object files for your new domain in $PTOLEMY/obj.$PTARCH, you will have to set up either or both of these ways. Typically, you first use MAKEARCH because it can operate on an existing Ptolemy tree; once everything works, you and other users run mkPtolemyTree to set up parallel development trees for the new domain.
#### MAKEARCH
$PTOLEMY/MAKEARCH is a /bin/csh script that creates or updates the object tree in an already existing Ptolemy tree. To add a domain to MAKEARCH, edit the file and look for a similar domain, and add appropriately. A little trial and error may be necessary, but the basic idea is simple: MAKEARCH traverses directories and creates subdirectories as it sees fit. Note that if MAKEARCH is under version control, you may need to do chmod a+x MAKEARCH when you check it back out, or it won’t be executable.
Continuing with our example:
3. Edit MAKEARCH and add your domain yyy to the list of experimental domains:
```
set EXPDOMAINS=(cg56 cgc vhdlb vhdl mdsdf hof ipus yyy)
```
This will cause a stars and kernel directory to be created in $PTOLEMY/obj.$PTARCH/domains/yyy when MAKEARCH is run.
4. Run MAKEARCH:
cd $PTOLEMY; csh -f MAKEARCH
If you get a message like:
cxh@watson 181% csh -f MAKEARCH
making directory /users/ptolemy/obj.sol2/domains/yyy
mkdir: Failed to make directory "yyy"; Permission denied
yyy: No such file or directory
Then you may need to remove your obj.$PTARCH tree, as MAKEARCH has probably traversed down a parallel tree created by mkPtolemyTree and come up in a directory that you do not own.
mkPtolemyTree
$PTOLEMY/bin/mkPtolemyTree is a tclsh script that creates a new parallel Ptolemy tree. Note that mkPtolemyTree cannot be run in an already existing Ptolemy development tree. The file $PTOLEMY/mk/stars.mk controls what directories mkPtolemyTree creates; you need not actually edit the mkPtolemyTree script itself. You will need to modify stars.mk anyway to create pigiRpc binaries with your new domain in them, so adding support for mkPtolemyTree is fairly trivial.
$PTOLEMY/mk/stars.mk
Follow the style for domain addition that you see in this file for the other domains. A few things to keep in mind:
- You should list the new domain before any other domain library that the new domain depends on.
- You should make sure to define the make variables to pull in other domain libraries as necessary. You may need an MDSDF=1 definition, for example.
- mkPtolemyTree uses the CUSTOM_DIRS makefile variable to determine what directories to create, so be sure to add your directories here.
Continuing with our example of adding the yyy domain:
5. Edit $PTOLEMY/mk/stars.mk and add your entry:
```
YYYDIR = $(CROOT)/src/domains/yyy
ifdef YYY
CUSTOM_DIRS += $(YYYDIR)/kernel $(YYYDIR)/stars
# Have to create this eventually
PALETTES += $(PTOLEMY)/src/domains/yyy/icons/main.pal
STARS += $(LIBDIR)/yyystars.o
LIBS += -lyyystars -lyyy
LIBFILES += $(LIBDIR)/libyyystars.$(LIBSUFFIX) $(LIBDIR)/libyyy.$(LIBSUFFIX)
endif
```
$PTOLEMY/mk/ptbin.mk
In $PTOLEMY/mk/ptbin.mk, add your domain to the FULL definition. This causes your domain to be built in whenever a full pigiRpc binary is created.
Building a pigiRpc
6. To build a pigiRpc with your domain, first build and install your domain’s kernel and star libraries:
cd $PTOLEMY/obj.$PTARCH/domains/yyy
make depend
make install
If your domain depends on other domains, you will have to build in those directories as well. You may find it easier to do cd $PTOLEMY; make install, though this could take 3 hours. An alternative would be to create a parallel directory tree using mkPtolemyTree.
7. If you have not recompiled from scratch, or run mkPtolemyTree, you may also need to do:
cd $PTOLEMY/obj.$PTARCH/pigilib; make ptkRegisterCmds.o
8. Then build your pigiRpc. You can either build a full pigiRpc with all of the domains, or you can create a override.mk in $PTOLEMY/obj.$PTARCH/pigiRpc which will pull in only the domains you want.
$PTOLEMY/obj.$PTARCH/pigiRpc/override.mk could contain:
```
YYY=1
DEFAULT_DOMAIN=YYY
USERFLAGS=
VERSION_DESC="YYY Domain Only"
```
To build your binary, do:
cd $PTOLEMY/obj.$PTARCH/pigiRpc; make
If you don’t have all the libraries built, you may get an error message:
make: *** No rule to make target `..../lib.so/libcg56dspstars.so’, needed by `pigiRpc’. Stop.
The workaround is to do:
cd $PTOLEMY/obj.$PTARCH/pigiRpc; make PIGI=pigiRpc
9. See “Creating a pigiRpc that includes your own stars” on page 1-7 for details on how to use your new pigiRpc binary.
10. To verify that your new domain has been installed, start pigi with the -console option:
cd $PTOLEMY; pigi -rpc $PTOLEMY/obj.$PTARCH/pigiRpc/pigiRpc -console
and then type:
domains
into the console window prompt. Below is the sample output for the yyy example domain:
pigi> domains
YYY
pigi> knownlist
Nop
pigi>
Extensible Virtual Environment Systems Using System of Systems Engineering Approach
Manuel Oliveira
University College London
m.oliveira@cs.ucl.ac.uk
Joao Pereira
INESC-ID
jap@inesc.pt
Abstract
The development of Virtual Environment (VE) systems is a challenging endeavor with a complex problem domain. Experience gained over the past decade has contributed significantly to various measures of software quality in the resulting VE systems. However, the resulting solutions remain monolithic in nature, without successfully addressing the issues of system interoperability and software aging. This paper argues that the problem resides in the traditional system-centric approach and that an alternative approach based on system of systems engineering is necessary. As a result, the paper presents a reference architecture based on layers, where only the core is required for deployment and all others are optional. The paper also presents an evaluation methodology to assess the validity of the resulting architecture, which was applied to the proposed core layer in individual sessions with 12 experts in developing VE systems.
1. Introduction
A Virtual Environment (VE), within the context of this paper, consists of an alternate reality existing in the digital realm where people come together to play, work and socialize. This means that an online game qualifies as a VE. The VE community has recognized that there is no ideal VE system that addresses the entire problem domain with all the desired functionality. This leads to the current state of affairs where there is a significant number of different VE systems, each catering to a particular set of functional requirements. When the user requirements diverge from the initial functionality targeted by the VE system, then either that functionality is scoped down to what the chosen VE solution supports, or significant work is carried out that may ultimately lead to a new solution, as in the case of [16]. Many of the difficulties associated with VE system development derive from the complexity of the problem domain. The adoption of software engineering principles and methodologies has allowed for gradual improvement in the software quality of various VE systems in terms of stability and flexibility. However, the monolithic nature of VE system architectures remains in place, making it difficult to achieve code interoperability, which would allow a developer to combine elements from different systems. As a result, the choice of VE system still places constraints on the supported functionality. This is clearly evident in the clone phenomenon among game engines that support a particular genre, which normally results in a new game engine whenever new functionality is necessary.
This paper presents an approach to developing VE systems that are more resilient to the effects of software aging [19] and that promote code interoperability between different solutions. The next section presents a short overview of related work within the VE community and also draws some lessons from the operating systems community. The proposed layered reference architecture is provided in section 3, which then leads to a description of its foundational layer (section 4). The results of an evaluation methodology are discussed in section 5. In section 6, examples of components from the other layers are briefly discussed. Finally, some conclusions are drawn in section 7.
2. Related Work
The related work touches on some of the existing VE systems, but also draws some lessons from the operating systems community.
2.1. Virtual Environment Systems
The Distributed Interactive VE (DIVE) [10] system scoped the problem domain to collaborative VEs, with particular focus on close, high-fidelity social communication within small groups of users. The DIVE system supports a partial replication policy of the data model across a peer-to-peer network architecture, using multicast as the communication model to distribute changes to the database. Each host initiates its session by loading the entire world from various file locations across the network or through state transfer. Any local changes are communicated to other remote hosts by means of events and, consequently, all updates from remote hosts are received as events via the network. An important design decision was to develop a system that supported rapid development of a VE by means of content development combined with scripting to support the dynamics of an alternate reality. Although DIVE has greatly benefited from software engineering practices and achieved high internal flexibility, the system remains monolithic from an external perspective, without the possibility of extracting a sub-system to integrate into another VE system.
The VRJuggler platform [2] is a toolkit for building a VE system that is portable, flexible and configurable at runtime. The core of the toolkit is the vjKernel, which relieves the developer from considering low-level details regarding the management of system resources such as devices and processes. The main aim of VRJuggler is to provide a Virtual Platform (VP) that makes devices, and their configuration, transparent to developers. This allows developers to build a VE system disregarding the target hardware configuration and to expect the applications to work as a result of the decoupling. VRJuggler is not a turn-key application that supports users within a VE. A key limitation, resulting from a design decision, is the inability to support multiple users across a network. However, VRJuggler does support multiple users locally sharing the same rendering hardware, albeit with each user having their own devices. Although the system achieves its target goals, VRJuggler enforces a non-flexible framework for rendering.
The main focus of VHD++ [20] is to provide an open, flexible system spanning the community of VEs and the games industry. The VHD++ system takes a three-layered approach, consisting of system, simulation and application layers. However, all layers are mandatory. The level of semantics increases from the system layer towards the application layer, which implies an increase in productivity whilst reducing flexibility. The VHD++ core includes some high-level semantics, such as a scheduler that fails to support real-time operation as intended, thus introducing code hematomas since the application is required to deal with timing issues. The richness of features supported by the kernel raises the potential for implementation dilemmas.
The main motivation of MAVERIK [11] is to provide a VE system that abstracts the rendering process from the spatial data structure and particular processing methods. As with other solutions, design choices were made that do not support code interoperability, despite the high flexibility in supporting multiple rendering techniques. The implementation is based on a single processing loop, which imposes performance constraints, and although the pipeline is highly flexible, techniques such as global illumination and transparency are not easily supported.
The Bamboo [21] VE system aims to provide a flexible, open system to facilitate the development of VE systems. The foundational layer is the Netscape Portable Runtime (NSPR), which provides a platform abstraction over the different operating systems, thereby facilitating cross-platform code. The underlying design principle of Bamboo is to decompose a system into well-specified building blocks encompassed in Modules, each of which has a well-defined interface used by the Kernel for management of the system. During run-time the Bamboo Kernel uses configuration files to locate the Modules to be dynamically linked, resolving any dependencies. The Kernel uses Language Loaders, as plug-ins, as an abstraction over the particular implementation language used in the development of a Module. It is not the intent of Bamboo to be used as a VE system itself. Instead, the higher-level semantics are delegated to the development of new Modules or the usage of existing Modules.
The High Level Architecture (HLA) [12] is a de facto standard software architecture for building large-scale VEs based on simulation components. Unlike the previous solutions, HLA has both reusability and interoperability among its system attributes, to allow mixing simulation components from different sources. Although HLA has become a standard and has achieved some of its design goals, its usage remains within the domain of military simulation projects. The findings of [1] corroborate this tacit understanding, identifying the cost/benefit ratio as the main culprit. The adoption of HLA implies significant costs in development resources, and reuse is impeded by a very steep learning curve coupled with an overly complex architecture with performance constraints and semantic interoperability problems.
2.2. Operating Systems
With operating systems, the initial monolithic architectures raised serious problems regarding flexibility, extensibility, reliability and maintenance. In response to these drawbacks, new architectures emerged, namely the concept of the micro-kernel [7], which consisted of a minimal kernel where services and policies are delegated to the user space of the operating system. The first generation of micro-kernels (e.g. Choices [5]) produced disappointing results, since their utility was compromised by poor performance and a failure to adhere to the core design principles of small, simple and flexible kernels. A careful analysis [14] demonstrated that the fallacy of inherently poor performance was due to poor selection of implementation strategies. This realisation led to a second generation [15] of micro-kernels (e.g. Exokernel [9]) where major restructuring was done to increase performance whilst addressing the shortcomings in the design. To address the concerns of flexibility, some solutions, such as Apertos [22], have adopted open implementation (OI) as a foundational design paradigm, but the performance penalty associated with the approach is a significant deterrent to wider adoption. The performance concerns led to the Exokernel design, where abstractions were deemed to be the cause of the capitulation of the micro-kernel concept, and thus all abstractions were eliminated [8]. The result was tight coupling of the operating system to a particular hardware configuration and, consequently, large applications burdened with repeated common functionality. Therefore, the challenge is to achieve the best equilibrium between flexibility and performance.
3. A System of Systems Approach
A key strategy for handling the complexity of a problem domain is system decomposition [18]. Although this has been used successfully to deal with the internal modifiability of a system, code interoperability between different solutions remains an extremely difficult, unsolved challenge. This is largely due to the single-system engineering approach, which leads to solutions with closed boundaries that may consequently be viewed as monolithic, forcing a person either to adopt the entire solution and be constrained by the associated sub-domain, or to develop a new solution tailored to the desired functionality that could not be addressed within the extensibility constraints of an existing solution, as reported in the case study of [16]. The proposed architecture, presented in the UML 2.0 diagram of Figure 1, takes decomposition further by resorting to a system of systems engineering approach, where the architecture as a whole is functionally coherent, but each subsystem is considered independently of the others and is totally decoupled. The low-level APIs aggregate the common libraries that are accessible to the remainder of the system, such as OpenGL and BSD Sockets.
[Figure 1: UML 2.0 diagram of the proposed layered reference architecture]
The reference architecture identifies five main layers, each graphically represented by the UML icon of a package:
- The Virtual Environment Platform (VEP) is the core of the reference architecture and common to all VE systems. The design of the core should avoid any potential implementation dilemmas, thus being totally devoid of any semantic connotation of any particular subset of the problem domain of VE systems. As a result, the main functionality of the VEP is resource management based on an extensible kernel that is supported by a security model;
- The traditional core functionality of VE systems is to be captured and deployed as components that correspond to systems. These are aggregated in the Core Virtual Environment Platform Components (CVEPC), of which rendering subsystems (either one integrated solution, or a collection of different subsystems), the sensorial interface framework (both input and output devices) and the network subsystem are clear candidates. The distinguishing factor that qualifies a component as a CVEPC is that a VE system would be severely hampered in supporting its functionality without it (e.g. without a network subsystem it is not possible to support multiple users, and no system can do without at least a visual rendering engine). Therefore these component subsystems should be carefully designed to support a wide range of applications, since implementation dilemmas will reduce their potential for adoption and reuse. However, any one of these components is optional and may be replaced by monolithic components from more semantically rich layers;
- The Virtual Environment Components (VEC) layer comprises all the components that are more closely coupled to the particular application sub-domain being addressed by the system. This is the case for Physics, Animation and Avatar components, to name but a few;
- The Virtual Environment Application (VEA) layer contains all the remaining elements that are tightly coupled to the particular application domain of the VE system. For example, the behaviors of geometric objects within the environment would be a likely candidate;
- The Sensorial Interface is a self-contained layer that is coupled to the underlying hardware, whilst providing a generic input/output abstraction for the other layers.
4. Virtual Environment Platform
The Virtual Environment Platform (VEP) aims to provide what is common to all VE systems; thus most of the identified components should be present, in different forms, in each existing system. The design of the VEP requires the elimination of all abstractions with the slightest semantic connotation of a process or method that is not common to every virtual environment system. However, the platform must support dynamic extensibility beyond the base functionality of the system. An initial step in determining the scope and nature of the system requirements of the VEP is to derive some scenarios.
4.1. Scenarios
A survey [6] of software architecture methodologies demonstrates that a common element is the use of scenarios to aid the different stakeholders in agreeing on and prioritising the required functionality of the system.
Table 1 – Final revised scenarios for VEP
<table>
<thead>
<tr>
<th>Nº</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A VE system can be built from a set of components provided by different sources.</td>
</tr>
</tbody>
</table>
There were multiple iteration cycles to validate, refine and add scenarios defining the minimal functionality required of the VEP, resulting in the scenarios of Table 1.
4.2. Reference Architecture
The VEP reference architecture (Figure 2) was implemented in the Java language as the Java Adaptive Dynamic Environment (JADE), some of whose interfaces are described in [17] (the platform has since been further refined, but without any changes to the architecture).
4.2.1. Security. The topic of security is an often neglected requirement within the VE community. This is exemplified by Distributed Interactive Simulation (DIS), where the target informs the simulation whether it was hit; invulnerability is thus simply achieved by ignoring any HIT messages. With the possibility of run-time extensibility, security gains new significance, as it is necessary to prevent rogue components from executing malicious or unauthorized behavior. The security subsystem needs to support authentication, access control, availability, integrity and auditing, and the framework should be based on a policy-based mechanism that can be configured textually.
_In JADE, the choice was made to extend the security manager of the Java Virtual Machine (JVM) with its fine grain policies._
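As an illustration of such a textually configured, policy-based access check, the following minimal sketch grants components named privileges parsed from policy lines and consults the grants before a sensitive action. All class and method names here are hypothetical; they do not reflect JADE's actual mechanism, which extends the JVM security manager.

```java
import java.util.*;

// Hypothetical sketch of a policy-based access check: components are
// granted named privileges via textual policy lines, and the kernel
// consults the grants before any sensitive action is executed.
public class PolicySketch {
    private final Map<String, Set<String>> grants = new HashMap<>();

    // Assumed line format: "componentName: privilege1, privilege2"
    public void loadPolicyLine(String line) {
        String[] parts = line.split(":");
        Set<String> privs = new HashSet<>();
        for (String p : parts[1].split(",")) privs.add(p.trim());
        grants.put(parts[0].trim(), privs);
    }

    public boolean isAllowed(String component, String privilege) {
        return grants.getOrDefault(component, Collections.emptySet())
                     .contains(privilege);
    }

    public static void main(String[] args) {
        PolicySketch policy = new PolicySketch();
        policy.loadPolicyLine("renderer: read-scene, write-frame");
        System.out.println(policy.isAllowed("renderer", "write-frame")); // true
        System.out.println(policy.isAllowed("rogue", "write-frame"));    // false
    }
}
```

A rogue component simply never appears in the policy, so every privilege check against it fails by default.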
4.2.2. Resource Management. Resource management is without doubt the most complex part of the VEP, with the following four subsystems:
- **Configuration.** With dynamic systems that are highly flexible it becomes necessary to provide an automated mechanism for configuration of the system and its resources. The interface reduces itself to setting up the parser, the syntax and interpreter to be used. This framework is common to most configuration implementation strategies and scripting engines, thus by establishing a common bootstrap procedure. In addition, a method should exist to permit polling of the state of the Configuration subsystem.
- _In JADE, the Configuration subsystem supports a command line mechanism and a nested SAX parser used in the configuration of all components of the VE system, albeit each component having its own semantic interpreter._
- **Communication.** It is necessary for the various elements within a VE to communicate with each other. The use of direct interfaces leads to tight coupling, which compromises interoperability and reduces resilience to software aging. An alternative approach is to adopt a communication mechanism where the elements within a system do not require prior knowledge of the target interfaces, whilst supporting both synchronous and asynchronous communication. This can be achieved by means of events that provide information concerning the source and the type of event triggered. The essence of any event model is the Publisher/Subscriber pattern, and it should be independent of whether the system is executing on a single process/machine or multiple ones.
- _In JADE, two variations of an event model are available, centralized and delegation._
- **Namespace.** In any VE system there are numerous resources of a wide and diverse nature, ranging from a simple image used as a texture to a complex mathematical model encoded in a specific programming language. When considering code-based resources, it is necessary to validate with the security manager whether the resource can be integrated into the system, and with what privileges. Therefore, a main building block of the VEP is the Namespace, an abstraction providing a context for resources. In addition to a clear interface for managing a namespace, finding or retrieving resources is done by means of a Search Criteria. Since the result of applying a Search Criteria is itself a Namespace, it is possible to chain searches with different relational operations, such as AND, OR and XOR.
- _In JADE there is a default implementation for a local object registry, an SQL database based on object/relational mapping, and a distributed namespace based on the Common Object Request Broker Architecture (CORBA), along with a library of search criteria._
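The chaining of searches described above can be sketched as follows. All names are hypothetical and do not mirror JADE's real interfaces; the sketch only illustrates the idea that applying a Search Criteria to a Namespace yields another Namespace, which can then be combined with relational operations such as AND:

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical sketch of the Namespace abstraction: a namespace is a
// context for named resources, and applying a search criterion yields
// another Namespace, so searches can be chained and combined.
public class NamespaceSketch {
    public final Map<String, Object> resources = new HashMap<>();

    public NamespaceSketch put(String name, Object res) {
        resources.put(name, res);
        return this;
    }

    // A search criterion is modelled here as a predicate over entries.
    public NamespaceSketch find(Predicate<Map.Entry<String, Object>> criterion) {
        NamespaceSketch result = new NamespaceSketch();
        for (Map.Entry<String, Object> e : resources.entrySet())
            if (criterion.test(e)) result.resources.put(e.getKey(), e.getValue());
        return result;
    }

    // Relational AND: intersection of two search results.
    public NamespaceSketch and(NamespaceSketch other) {
        NamespaceSketch r = new NamespaceSketch();
        for (String k : resources.keySet())
            if (other.resources.containsKey(k)) r.resources.put(k, resources.get(k));
        return r;
    }

    public static void main(String[] args) {
        NamespaceSketch ns = new NamespaceSketch()
            .put("wood.png", "texture").put("brick.png", "texture").put("sky.model", "mesh");
        NamespaceSketch textures = ns.find(e -> e.getValue().equals("texture"));
        NamespaceSketch pngs = ns.find(e -> e.getKey().endsWith(".png"));
        System.out.println(textures.and(pngs).resources.size()); // 2
    }
}
```

OR and XOR would be analogous set unions and symmetric differences over the result namespaces.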
4.2.3. Executive Kernel. The Executive Kernel is responsible for the runtime management of the various components during the lifetime of a VE system instantiation. The kernel is itself a Namespace that has a Communication model (event model), a Configuration framework, a Resource Locator and a Security Manager.
_In JADE, any of the subsystems are replaceable at compile-, link- or run-time._
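The Publisher/Subscriber essence of the Communication subsystem described in 4.2.2 can be sketched minimally as a centralized event bus. The class below is hypothetical and not part of JADE's actual interfaces; it only shows how publishers and subscribers share event type names rather than direct interfaces, keeping components decoupled:

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch of a centralized event model: subscribers register
// handlers for an event type name; publishers emit events by name without
// knowing which components, if any, will react.
public class EventBusSketch {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    public void subscribe(String eventType, Consumer<Object> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    public void publish(String eventType, Object payload) {
        for (Consumer<Object> h : subscribers.getOrDefault(eventType, List.of()))
            h.accept(payload);
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        bus.subscribe("avatar.moved", p -> System.out.println("renderer saw: " + p));
        bus.subscribe("avatar.moved", p -> System.out.println("network saw: " + p));
        bus.publish("avatar.moved", "(3, 4, 0)");
    }
}
```

The delegation variant mentioned above would differ mainly in where the subscriber lists live, not in this basic interaction.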
5. Evaluation
Taking into consideration that the VEP is not a monolithic system presented as a turn-key solution, additional resource expenditure is necessary to develop the customized functionality of a particular VE system. The evaluation therefore focuses on the quality attributes of the corresponding software architecture. Some of the traditional attributes measured in software architecture evaluations are not pertinent here, since they are implementation-dependent.
Table 2 - Outline of evaluation methodology
<table>
<thead>
<tr>
<th>Nº</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>This step presents an overview of the evaluation method and the documentation concerning the reference architecture and its core layer - VEP.</td>
</tr>
<tr>
<td>2</td>
<td>Questionnaire that helps identify the stakeholder profile.</td>
</tr>
<tr>
<td>3</td>
<td>This step presents the vision that drives the reference architecture and analysis of related work.</td>
</tr>
<tr>
<td>4</td>
<td>This step consists of the identification of the main scenarios to be supported by the software architecture.</td>
</tr>
<tr>
<td>5</td>
<td>This step presents the software architecture of the VEP, using JADE as the reference implementation.</td>
</tr>
<tr>
<td>6</td>
<td>Revision of the scenarios identified in step 4 and their reprioritisation, selecting the most important ones.</td>
</tr>
<tr>
<td>7</td>
<td>Discussion of the architecture to validate if the final set of scenarios of step 6 can be supported by JADE. The result may identify some architectural changes.</td>
</tr>
<tr>
<td>8</td>
<td>Overall evaluation by the stakeholder of the architecture in fulfilling the initial scenarios and having the flexibility to support the revised scenarios.</td>
</tr>
<tr>
<td>9</td>
<td>Questionnaire that quantitatively captures the evaluation.</td>
</tr>
<tr>
<td>10</td>
<td>This step wraps up the process by collating all the information generated and synthesizing the results.</td>
</tr>
</tbody>
</table>
The proposed evaluation methodology is based on an extension of the Software Architecture Analysis Method (SAAM) [13], consisting of 10 well-defined steps as outlined in Table 2, which are aggregated into four distinctive phases:
- **Briefing.** Presentation of all relevant documentation pertaining to the evaluation method and the VE architecture and its various layers. Steps 1 and 2.
- **Analysis.** Discussion of scenarios driving the VE system requirements to be addressed by the architecture. Steps 3, 4 and 5.
- **Brainstorm.** Revision of scenarios and impact on architecture, leading to evaluation. Steps 6, 7 and 8.
- **Synthesis.** Final evaluation by the expert and cross-analysis of the evaluation results from all experts. Steps 9 and 10.
The aim of the evaluation was to target experts within the field of VEs or games. Therefore, subjects were recruited through scheduled working sessions with each expert.
5.1. Questionnaires
The questionnaire from step 2 consisted of 24 questions aiming to determine the experience of the expert in VE system development. In addition, the questionnaire captured their inclination towards software quality and their awareness of the problems that plague the traditional development process.
The questionnaire from step 9 consisted of 16 questions with the aim of collating qualitatively the assessment of the experts concerning the VE reference architecture and the VEP. In particular, four questions were targeted at qualifying their predisposition towards adopting JADE in future projects.
The responses were given on a Likert scale between 1 and 7. The lower end of the scale corresponds to a negative response to the question, whilst the upper end indicates a positive response. The questions were carefully structured in a qualitative approach, since it would be difficult to homogenise a quantitative assessment across all stakeholders.
5.2. Experts
The evaluation method was carried out with twelve individuals from different organizations and with different roles, albeit sharing deep expertise in developing VE systems.
Each assessment consisted of an individual working session with an expert. The shortest session lasted 105 minutes, whilst the longest went beyond 310 minutes, but the median was 145 minutes.
5.3. Analysis
All the experts approached the problem by first evaluating the reference architecture and then stressing JADE by considering how their current and future work could adopt the architecture. The maturity of JADE was demonstrated by the fact that no architectural changes were identified, and any new scenario was either foreseen or easily accommodated without modifications.
Considering the varied backgrounds of the projects (crowd simulation, online games, avatars, collaborative VEs, research in presence and co-presence, etc.), JADE was demonstrated to fulfill all the necessary requirements of the different sub-domains of the VE problem domain.
The evaluation also identified a keen interest to have further information and more details concerning the taxonomy and architectural guidelines for potential use in future projects involving the development of a VE system.
The final assessment indicates that the respondents agreed on the utility of a VEP and its potential for promoting reusability and interoperability in the development of VE systems. There was also agreement that JADE was an appropriate reference implementation of the VEP. In addition to the informal evaluation methodology, the reference architecture and the JADE reference implementation have been adopted successfully in different student projects, which has resulted in a rich library of components.
6. Components
Although the VEP is the foundational layer that needs to be present, all the remaining layers are optional. This section describes some example components from the other layers.
6.1. Core Virtual Environment Components
TreacleWell [23] is a component framework that supports networking functionality. It has four distinctive elements:
- **Connectors.** These elements provide a clear abstraction over the network for data communication, mapping the internal representation to the network device;
- **Messages.** This is an abstraction of the data to be handled by the various elements in TreacleWell; the developer may choose a particular implementation strategy (the default is an array of bytes);
- **Buffers.** These elements are repositories for messages, with multiple implementation strategies: sets, heaps, FIFO queues, tuple spaces, etc;
- **Flows.** These are containers of smaller FlowElements that the developer may connect together in a directed graph, supported by a centralized event model to ensure total decoupling.
A library of the four identified elements is available for a developer to assemble a Well for data communication using one or more XML configuration files.
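The interplay of the four elements can be sketched roughly as follows. The names are hypothetical and do not mirror TreacleWell's real interfaces; the sketch chains FlowElements over a Message (here a plain byte array) and deposits the result in a FIFO Buffer:

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch of a TreacleWell-style Flow: small FlowElements
// transform a Message (a byte array here) and are wired into a pipeline;
// a FIFO Buffer at the end collects the results for a Connector to send.
public class FlowSketch {
    interface FlowElement extends Function<byte[], byte[]> {}

    public static byte[] runFlow(List<FlowElement> flow, byte[] message) {
        for (FlowElement e : flow) message = e.apply(message);
        return message;
    }

    public static void main(String[] args) {
        FlowElement frame = msg -> { // prepend a one-byte length header
            byte[] out = new byte[msg.length + 1];
            out[0] = (byte) msg.length;
            System.arraycopy(msg, 0, out, 1, msg.length);
            return out;
        };
        Deque<byte[]> buffer = new ArrayDeque<>(); // FIFO Buffer element
        byte[] wire = runFlow(List.of(frame), "hello".getBytes());
        buffer.add(wire);
        System.out.println(buffer.peek()[0]); // length header: 5
    }
}
```

In the real framework the wiring would come from the XML configuration and the elements would interact through the centralized event model rather than direct calls.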
6.2. Virtual Environment Components
Some of the Virtual Environment Components developed to ease the development of VE systems are:
- **Meta Interest Management (MIM).** This component provides a framework for managing receiver interest independently of the particular interest policy used. MIM provides policies for static spatial interest and for aura-based interest. MIM can be integrated with TreacleWell for the different communication architectures associated with the various interest models;
- **Meta Unified Datamodel (MUD).** The MUD framework is built upon the Model-View-Controller pattern, resulting in Node-Visual-Behavior. The resulting Module embodies the datamodel of a VE application and manages the data, which departs from the traditional approach of a single scenegraph;
- **Perceptual Network Metaphors (PNM)** [24]. This component provides a framework to address the problems of network communication and their impact on the immersiveness of the user. The approach taken is to seamlessly integrate into the perceptual feedback cycle of the user additional information concerning the state of the network, via a metaphor such as the weather (a congested network represented by rain in the VE).
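As a hypothetical illustration of such a weather metaphor (this mapping is an assumption for illustration, not taken from [24]), a network condition such as packet loss could be mapped to a rain intensity that the rendering layer displays:

```java
// Hypothetical sketch of a Perceptual Network Metaphor: the current
// network condition is mapped onto a weather parameter that the rendering
// layer can display, so congestion is perceived by the user as rain.
public class WeatherMetaphorSketch {
    // Map a packet-loss fraction in [0,1] to a rain intensity in [0,100].
    public static int rainIntensity(double packetLoss) {
        double clamped = Math.max(0.0, Math.min(1.0, packetLoss));
        return (int) Math.round(clamped * 100);
    }

    public static void main(String[] args) {
        System.out.println(rainIntensity(0.0));  // clear sky: 0
        System.out.println(rainIntensity(0.25)); // light rain: 25
        System.out.println(rainIntensity(2.0));  // clamped downpour: 100
    }
}
```

The point of the metaphor is that the user needs no technical training: worsening weather is intuitively read as degrading network conditions.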
7. Conclusions and Future Work
The adoption of software engineering principles has significantly improved the flexibility of VE systems, but interoperability remains a difficult challenge to solve, with solutions continuing to have a monolithic architecture that is coupled to a particular scope of the VE problem domain.
This paper presented a reference architecture based on a system of systems development approach and provided an overview of its foundational layer. Although the individual architectural elements are common to most systems, the adopted approach presented in the paper aims to provide a solution that addresses the issues of interoperability and software aging in addition to the classic software attributes of existing systems.
The overall result of the evaluation methodology recognizes the potential of the proposed VE reference architecture and the associated VEP to achieve the goal
of system interoperability whilst mitigating the effects of software aging.
8. Acknowledgements
This work was sponsored by the Portuguese Foundation for Science and Technology. The work has benefited from the useful and insightful comments from Anthony Steed, Jon Crowcroft and Mel Slater.
9. References